Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Keynote by Vijay Shekhar Sharma, Paytm

Session at a glance
Summary, key points, and speakers overview

Summary

The speaker opened by celebrating India as the global hub for artificial intelligence talent, noting that virtually every leading AI professional is now based in the country [2-4]. He attributed this concentration to the vision of the Prime Minister, who previously launched Startup India and is now driving an “AI India” agenda [7-9]. He highlighted widespread everyday use of AI assistants and co-pilots, describing them as increasingly addictive tools that illustrate the technology’s reach [11-15].


Drawing on his background in financial services, he argued that AI can enhance credit assessment, enabling broader financial inclusion for previously underserved populations [32-36]. He suggested that just as smartphones expanded access to payments nationwide, AI will allow financial institutions to serve the “last person” and generate wealth across the country [23-26][33-36]. He extended this vision to agriculture, livestock and other sectors, claiming AI-driven solutions built in India could address both domestic and global challenges [36].


He emphasized that India must develop its own foundation models, not because they are inherently difficult, but to create sector-specific AI that solves problems for the Global South and beyond [36][43-45]. Using the analogy of engines versus vehicles, he said India will not only build the underlying models (“engines”) but also a variety of applications (“vehicles”) tailored to different use cases [44-48]. He predicted that India will become a “use-case capital” by producing many large language models and deploying them in areas such as call-center automation, healthcare support, and other industries [59].


He linked the country’s young demographic to a “technology dividend,” asserting that the youth’s capability will accelerate AI adoption and innovation [37-38]. While acknowledging concerns about AI-driven job displacement, he framed the transition as a shift toward AI-enabled abundance rather than loss, urging participants to ride the wave instead of being victimized [59]. He concluded by urging collective participation in the AI revolution, stating that India’s role as the AI “center of gravity” will reshape global perception of the nation [60][64-66]. The overall message was that coordinated effort and indigenous AI development will cement India’s leadership in the worldwide AI ecosystem [61-63].


Key points


India as the global AI hub and a source of national pride – Sharma repeatedly celebrates that “all the AI people in the world are in one city and one country” and credits the Prime Minister’s vision for “Startup India” and now “AI India” as the driving force behind this concentration of talent [2-5][7-9][64-66].


AI as a catalyst for financial inclusion and sectoral transformation – Drawing on his background in financial services, he argues that AI will enable better credit assessment, extend credit to the “last person,” and replicate this impact across agriculture, livestock and other sectors, turning local solutions into global ones [32-36].


Building indigenous foundation models and specialized AI agents – He stresses that India must not only adopt existing large language models but also create its own “engines” (foundation models) and tailor-made agents for specific industries, positioning the country as a “use-case capital” of the world [42-48].


AI will generate abundance rather than merely displace jobs – Sharma acknowledges concerns about AI-driven job loss (e.g., call-center roles) but reframes the narrative toward “AI-led abundance,” urging stakeholders to ride the wave of new opportunities instead of fearing victimisation [59-60].


A rallying call to join the AI revolution – The speech concludes with an explicit invitation for all Indians to “join the revolution,” emphasizing collective effort to reshape how India is perceived globally and to cement its leadership in AI [61-63].


Overall purpose/goal


The discussion is a motivational address aimed at galvanising Indian entrepreneurs, policymakers, and the broader public to embrace and accelerate home-grown AI development. Sharma seeks to highlight India’s strategic advantage, advocate for the creation of indigenous models and sector-specific solutions, and mobilise collective action to position India as the world’s AI leader.


Overall tone


The tone is consistently high-energy, patriotic, and forward-looking. It begins with celebratory pride, moves into an optimistic exposition of AI’s transformative potential, turns persuasive when urging the development of domestic models, strikes a briefly cautionary yet hopeful note on employment impacts, and ends with a rallying cry that reinforces the earlier enthusiasm. Throughout, the speaker maintains an upbeat, inspirational demeanor.


Speakers

Speaker 1


– Role/Title: Event host or moderator who introduced the keynote speaker.


– Area of Expertise: (not specified)


– Source: [S1][S3]


Vijay Shekhar Sharma


– Role/Title: Founder & CEO of Paytm; Keynote speaker at the AI event.


– Area of Expertise: Fintech, artificial intelligence, entrepreneurship.


– Source: [S5][S6]


Additional speakers:


(None identified beyond the listed speakers)


Full session report
Comprehensive analysis and detailed insights

The host welcomed Vijay Shekhar Sharma to the stage [1]. Sharma opened by declaring that every leading artificial-intelligence professional in the world is now based in a single Indian city, framing this concentration as a source of national pride and evidence that India has become the global “centre of gravity” for AI [2-5][64-66].


He credited this rapid aggregation of talent to the Prime Minister’s visionary policies, drawing a parallel between the earlier “Startup India” programme and the current “AI India” agenda, both of which he said have catalysed the country’s leadership in the sector [7-9]. According to Sharma, government support has turned India into a magnet for AI expertise, noting that “everybody who is somebody in AI is right now in this country” [6].


Sharma noted that personal agents and co-pilots are now commonplace, and that the more people use them, the more addictive the technology becomes [11-15]. He stressed that AI’s future impact will go far beyond chat or image editing, extending into core industry processes.


To illustrate how new technology can spread from elite users to the masses, he recounted launching QR-code payments. He described a meeting with a government official who doubted that the public would understand the system, and how his house help in Aligarh could use Paytm by simply taking a photo of a QR code [17-22]. He used this anecdote to show that once a technology is understood by a “common man,” it quickly achieves nationwide adoption, a pattern he expects AI to repeat [23].


Building on that momentum, Sharma declared that the next milestone is the integration of AI into every smartphone, allowing each device to harness AI capabilities [24-26]. He predicted that the coming years will shift AI from an individual, experimental play (through 2025) to a business-wide capability (starting in 2026) that can solve problems previously thought unsolvable [27-30].


Drawing on his background in financial services, he argued that AI can dramatically enhance credit assessment. By analysing vast data sets, AI can handle “corner cases,” determining where credit should or should not be extended, and thereby reach the “last person” to turn access to finance into a driver of wealth creation [32-36]. He likened this to the earlier smartphone-driven expansion of payments, suggesting that AI will enable financial institutions to serve previously unserved borrowers and promote inclusive growth [23-26][33-36].


He highlighted AI applications for agriculture, livestock, horticulture, and even machinery [36]. He recalled a recent discussion between “Nandan sir” and the Prime Minister on using AI to improve cattle management, illustrating how the technology can extend to livestock.


Central to his argument was the need for indigenous foundation models. While acknowledging that building large language models is not “rocket science,” he insisted that India must develop its own “engines” and then layer specialised agents on top, rather than merely importing foreign models [42-45]. He expressed pride that fellow entrepreneurs at Sarvam have built a foundation model in India, seeing this as evidence that the country can develop such models [43-45]. Using an automotive analogy, he likened foundation models to engines and sector-specific applications to vehicles, emphasizing that the value lies in the diverse “vehicles” built on the same “engine” [44-48].


He projected that India will become the world’s “use-case capital,” producing a multitude of sector-specific models that power call-centre automation, remote health monitoring for ageing populations in Europe, and other industry solutions [59-60]. This approach, he argued, will generate AI that works for particular segments rather than generic, one-size-fits-all models.


Addressing concerns about job displacement, Sharma reframed the narrative from “AI-led job reduction” to “AI-led abundance,” suggesting that call-centre roles may evolve into higher-value services such as health monitoring, illustrating how AI can augment rather than replace human work [59-60].


He linked India’s demographic dividend to a “technology dividend,” asserting that the youthful population will accelerate AI adoption and innovation, multiplying the nation’s technological impact [37-38].


In his concluding remarks, Sharma called on all stakeholders to join the AI revolution, stressing that collective effort is essential to cementing India’s leadership in the global AI ecosystem [61-63]. He reiterated that the Prime Minister’s vision has already positioned India as the centre of AI gravity and urged the audience to help reshape how the world perceives the nation [60][64-66]. He ended with a repeated affirmation, “We are here, we are here, we are here,” underscoring India’s position as the AI centre of gravity.


Overall, the speech combined patriotic enthusiasm with a strategic roadmap: develop indigenous foundation models, apply AI across key sectors, leverage the youthful workforce, and transform potential disruption into widespread prosperity.


Session transcript
Complete transcript of the session
Speaker 1

Ladies and gentlemen, please welcome Mr. Vijay Shekhar Sharma.

Vijay Shekhar Sharma

Wow. First of all, I do believe that everybody who is an Indian must be very proud that all the AI people in the world are in one city and one country. For that, we need to clap for this event’s host. And I think this is the power of India, my friend. I don’t have to say this. Everybody who is somebody in AI is right now in this country. Our Prime Minister has been able to bring the excitement of AI. Just like 10 years back, he was able to do it for Startup India. So from the Startup India to the AI India, once again for our Honorable Prime Minister this time, guys. I don’t have to tell you how the powerful capability of AI all of us have experienced.

Many of you must be using a personal agent in every other day. And if not agent, you must be using a co-pilot. You must be asking questions to him. And the beauty is that… But the more you use it, the more it becomes addictive. It is where the technology is. When we launched the QR code, I still remember, I went to the government and I had a discussion with them that this is a matter of demonetization, that this can be paid in this way. So the person with whom I was talking, he asked me, do you think the common man will understand what to do? So I said, sir, I went to Aligarh and my house help said, brother, we also do Paytm.

So I asked, how do you do it? He said, you have to take a photo of it from Paytm. And when I told him, I said, sir, when a common man understood how to do Paytm, then this publicity has now become confirmed in the world. And now today, in every nook and corner of the country, we can see the payments reaching and completing itself. And now this takes us to the next milestone, where every one of us who uses it. Every smartphone can now use power of AI. Now, I don’t have to tell this once again. The capabilities that we will harness over the period will not be just limited by the. chat or let’s say the photo you are making or editing something or picking up a message from WhatsApp, it will go towards the industry.

So till 2025, AI was more of an individual experiential play, if you will. You know, you were trying to find out use case and the problem answers that you fundamentally believe that it will be. But 2026 begins with a commitment and confidence that AI will bring the capability in the business and the work and the problem that we typically would not have assumed that would be solved. And let me say this. Typically, I come from financial service industry and I fundamentally believe access to credit creates the wealth. But access to credit requires a lot of insights and abilities to confirm whether this money will come back or not. Many rules and regulations are allowing us to expand the reach of credit.

But by the capability of AI, we will be able to take care of corner cases where it should not go or it should go. So people will become more financially inclusive than ever before. As you are knowing, the smartphone gave access to the financial system to the every nook and corner of the country. Now this time, financial institutions will serve those customers. So from access to the rich, ability of financial system will reach. Financial systems bring wealth to the country. Bringing access to the credit to the last person brings wealth to the person there, and that is what I believe AI will be able to do, let’s say, in financial system. You could talk about agriculture, you could talk about husbandry. I remember the conversation between Nandan sir and Prime Minister sir yesterday was happening about, let’s say, how could you use the power of ability of cattle to use in AI, and then the Amul case was talked about. Now imagine the same thing could be done even for machines, even for plants, even for agriculture. The capability of AI that we want to use will make it possible for us to build it in India, for the problem and solution that we build for India. And this time, while we are solving the problems here, we will not solve a local problem, we will solve a global problem, because the capability of Indians has been proven, that we can make world-class technology, the technology that falls at an order of magnitude scale and abilities that are globally renowned and capable.

Once again I’m going to say that this is not about foundation model only. Important: a foundation model is a horizontal capable model. I don’t mind saying that we must, must and for sure have a foundation model in India, all because we have a capability and resources to do it. And I’m very proud that my fellow entrepreneur Sarvam has done the job, and I do believe that is an acknowledgement that we can build it. It’s not something rocket science, it’s not something that we cannot build. But the point is not about just building a foundation model. Point is about building the models that solve for us, solve for Global South, solve for global problems. And those models, and the requirement of those models to bring in everyday life, can only happen in a country where the demographic difference between the two countries is very important. And I think that is the key. The evidence belongs to us, young people. If I tell them to use it, with whom you will be able to do it, your capability will increase, they will experience it.

So for the first time, our demographic dividend will also become the demographic technology dividend, if you will. The capability of our young, the capability and ability and intent of our young, will aid the propagation of AI unlike ever before. It is not about just using, let’s say, a messaging platform or a payment platform. It is about adding the capability in your everyday life. And that is rare and possible only in this country. Again, there is a question, for me: will you build LLM models or will you build agents on top of it? I’m sure all of us have understood that models are the foundation and, let’s say, on the top of it is an agent.

It’s like asking, will you build vehicles or will you build engines? It’s not like when Daimler Benz made an engine, India didn’t make it and no other country made it. We will also make our engines. Our engines will be small, big, different, different, whatever the use will be, and many more fold than ever before. Imagine: right now, what has happened in the world is that someone has made an engine which is called ICE, and you are saying, can you make an engine? Yes, we can make an engine, because we know the nuances of it. But what is more important than that is the use case of that engine: using it to make a passenger vehicle, using it to make a bus, using it to make a truck, using it to make a trailer. That is the use case the world wants to see. India not only will be the use-case capital of the world, but India will also be the capital of the number of LLMs that India will build. India will build more number of LLMs for the section of usage and ability of usage than ever before. The fight is not about just the foundation model. The fight is about AI that works for a sector, works for a segment, and solves the problem of an agent. For example, the call center. The call center is a talked-about thing: we will, let’s say, what will happen to the jobs of call centers? I don’t mind saying that call center as, literally, a job may or may not be challenged, yes, but the capability is immense. If we can solve the call of someone else’s country, why can’t we solve the healthcare problem of someone else’s country? Imagine a European: there is an old age in Europe, and you need to solve for their healthcare tracking and conditioning and requirements. So a call center can evolve to become a healthcare provider, because they can track the local knowledge of that country, in the newest of that country, and remotely somebody can humanly look at it and confirm: yes, you should take that action.

And that capability can only happen in a country that is embracing the change and embracing the technology. It is not a question of whether there will be AI-led job reduction. It is rather a question of: there will be AI-led abundance, and are you riding the wave or are you getting victimized by the wave? I remember 2010, when this country had feature phones. I remember the business model that I used to run was a feature-phone-led value-added services business: ringtone, ringback tone. Many of you might have been the customer, and you remember that. And I want to tell you one thing. I was going for IPO in 2010, and the challenge was, what will happen to the feature phone? Because the smartphone I had seen in US, and I was uncomfortable that we should do an IPO at that point of time, because I was like, the business model is going to change. And the power of capability of smartphone was not about that they will be PCOS CDA. And that is the power of AI that you should look at, and that is the capability that Indians will look at. Some of us will embrace it as an ability and capability that we can extend and deliver even further sets of services, capabilities that are not yet seen and reached within ourselves. And some of us will feel that we are a victim of the capability this machine gives. And that is the change, my friend, always continuous in the world. And I think India, the land of Gita, which has told that change is the only constant in the world, will not only embrace it and lead it, but it will lead it from the front and show the world the ability and capabilities of AI that will show up.

So ladies and gentlemen, I’m very proud to be in the country where we today are talking and the center of universe of AI gravity is. And from here onwards, we will, instead of looking at AI as a challenger to any problem that we see or any opportunity that we today yield, but to a larger opportunity and larger capability that India will make and all Indian will make India proud. So with this, I again and again say the ability of India can only be underestimated when we all together join our hands and join in the revolution. So I would say this once again, join the revolution and change the way India is perceived in the world.

And today, our Honorable Prime Minister has shown that the center of gravity of AI is India. We are here. We are here. We are here. We are here. We are here. We are here. Thank you so much, guys.

Related Resources
Knowledge base sources related to the discussion topics (20)
Factual Notes
Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The host welcomed Vijay Shekhar Sharma to the stage.”

The knowledge base notes that the event host or moderator formally introduced Mr. Vijay Shekhar Sharma to the audience, confirming the welcome statement [S4].

Additional Context (medium)

“Personal agents or co‑pilots are now commonplace, marking a shift toward more autonomous AI agents.”

Discussion in the knowledge base highlights a transition from human-in-the-loop co-pilots to autonomous agents that deliver business value, providing context for the claim about agents becoming common [S47] and notes on agent reliability [S45].

Confirmed (high)

“Sharma recounted launching QR‑code payments, describing how a house‑help in Aligarh could use Paytm by simply photographing a QR‑code, illustrating rapid mass adoption.”

The knowledge base explains that QR-code payment solutions work with a phone camera and USSD, requiring only static stickers, and that the launch led to billions of transactions and hundreds of millions of users, confirming the anecdote and its impact [S53] and [S54].

Additional Context (medium)

“India’s pattern of rapid technology adoption (e.g., UPI) shows that once a technology is understood by the “common man,” it quickly achieves nationwide use, a pattern Sharma expects AI to repeat.”

The knowledge base remarks that India has proven itself a phenomenal adopter of technology, citing UPI as an example that grew to become the world’s largest payment system, adding nuance to the claim about mass adoption patterns [S55].

Additional Context (medium)

“Government policies such as Startup India and AI India, including free GPUs and funding, have catalysed India’s AI leadership.”

The knowledge base records that the Indian government provides free GPUs to citizens and funds model development, reflecting strong policy support for AI initiatives, which aligns with Sharma’s attribution of leadership to government programmes [S41].

External Sources (56)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — Speaker 1 serves as the event host or moderator, formally introducing Mr. Vijay Shekhar Sharma to the audience. This rep…
S6
From Innovation to Impact_ Bringing AI to the Public — – Vijay Shekhar Sharma- Audience – Vijay Shekhar Sharma- Harinder Takhar
S7
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S8
From Innovation to Impact_ Bringing AI to the Public — India has to build a foundation model. This is no compromise statement. Not because that we can make a better financial …
S9
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S10
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — India possesses several competitive advantages that position it well for AI innovation and startup growth. These include…
S11
Secure Finance Risk-Based AI Policy for the Banking Sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S12
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — The financial inclusion sector is transforming in 2025, moving beyond mere access to financial services to focus onfinan…
S13
Technology Rewiring Global Finance: A Panel Discussion Summary — Economic | Infrastructure Hu argues that financial services, being data-rich, are ripe for AI transformation. He emphas…
S14
From India to the Global South_ Advancing Social Impact with AI — This comment directly addresses one of the most anxiety-provoking aspects of AI adoption – job displacement. By framing …
S15
Shaping the Future AI Strategies for Jobs and Economic Development — A central theme emerged around collaboration rather than displacement of human workers. Panelists emphasized that AI sho…
S16
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Rees-Jones takes an optimistic view that AI can provide personalized tutoring for reskilling in areas like coding, while…
S17
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S18
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S19
Keynote Address_Revanth Reddy_Chief Minister Telangana — Good afternoon, friends. My pleasure to address this event because of some of the best of minds from all over the world …
S20
Welcome Address — The speech emphasizes that with proper direction, ethical frameworks, and global cooperation, artificial intelligence ca…
S21
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — Addressing debates about whether India should focus on building large language models or developing applications, Sharma…
S22
Welcome Address — Excellencies, Honorable Ministers, Industry Leaders, Innovators, Entrepreneurs, Researchers, Delegates, Delegates, Deleg…
S23
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S24
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S25
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S26
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — The financial inclusion sector is transforming in 2025, moving beyond mere access to financial services to focus onfinan…
S27
Technology Rewiring Global Finance: A Panel Discussion Summary — Economic | Infrastructure Hu argues that financial services, being data-rich, are ripe for AI transformation. He emphas…
S28
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Technology will drive financial inclusion by making services accessible through natural language interactions in local l…
S29
AI reshapes banking jobs, personalised service through avatars? — A recentreport from Citigrouppredicts a significant rise in banking profits, driven by the adoption of AI, with projecti…
S30
AI driving transformation in financial services — At YourStory’s Tech Leaders’ Conclave, Ankur Pal, Chief Data Scientist at Aplazo,discussedhow AI is transforming the fin…
S31
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — Addressing debates about whether India should focus on building large language models or developing applications, Sharma…
S32
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Rees-Jones takes an optimistic view that AI can provide personalized tutoring for reskilling in areas like coding, while…
S33
From Innovation to Impact_ Bringing AI to the Public — Sharma explains that AI will not eliminate jobs but will transform traditional work into more productive AI-enabled work…
S34
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — The argument asserts the importance of considering ethical implications in the development and use of AI to ensure it al…
S35
GermanAsian AI Partnerships Driving Talent Innovation the Future — Mr. Jaiswal uses the historical example of electricity to illustrate how disruptive technologies initially cause fear bu…
S36
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S37
Keynote Address_Revanth Reddy_Chief Minister Telangana — Good afternoon, friends. My pleasure to address this event because of some of the best of minds from all over the world …
S38
High Level Session 3: AI & the Future of Work — Junha Li: Thank you. Good morning. Good to see you again in this plenary hall. Before I’ll distinguish the panel, starti…
S39
Harnessing Collective AI for India’s Social and Economic Development — <strong>Moderator:</strong> sci -fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S40
Keynote Address_Revanth Reddy_Chief Minister Telangana — Reddy delivered a frank assessment of India’s historical relationship with global technological revolutions. “India miss…
S41
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Sabharwal praises the Indian government’s AI initiatives, stating that no other country provides free GPUs to its citize…
S42
Building the AI-Ready Future From Infrastructure to Skills — I’d like to invite our next speaker, Paneerselvam M, CEO of the METI Startup Hub at Ministry of Electronics and IT, Gove…
S43
Subrata K. Mitra Jivanta Schottli Markus Pauli — economic and strategic partner in global affairs and the Gulf crisis of 1990- 1 deeply impacted India through …
S44
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Adding to what just was discussed, we have a tendency to overestimate the next two years and impact and underestimate wh…
S45
https://app.faicon.ai/ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — And that also varies significantly with the ability or the reliability of an agent. So as we move from agents that need …
S46
Designing Indias Digital Future AI at the Core 6G at the Edge — Sharma argues that the gap between pilots and scale is not technological but rather a lack of scalable and referenceable…
S47
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — Yeah, so I think for me the big shift has been from co -pilot human in the loop to agents which can act and really provi…
S48
OEWG Chair releases Zero Draft and Rev 1 of the Final Report, setting stage for final talks — The Chair of the Open-ended Working Group (OEWG) on the security of and in the use of information and communications tech…
S49
Driving Indias AI Future Growth Innovation and Impact — Impact:This comment created a sobering moment in an otherwise optimistic discussion, forcing acknowledgment that technic…
S50
Driving Indias AI Future Growth Innovation and Impact — These key comments fundamentally shaped the discussion by expanding it beyond technical infrastructure to encompass trus…
S51
The Intelligent Coworker: AI’s Evolution in the Workplace — He emphasized this as a bigger question than quantitative job displacement, focusing on how professional development and…
S52
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Summary:The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond …
S53
Fast-tracking implementation of eTrade Readiness Assessments — 88 QR codes hold particular promise because they can be offered without access to mobile Internet. At their most basic, …
S54
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And when he launched that application, he started something where today we have 21 billion transactions a month with 500…
S55
https://app.faicon.ai/ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — Awesome. Great question, Midu. And, you know, we as a nation have proven ourselves to be phenomenal adopters of technolo…
S56
Nepal Engagement Session — The conversation began with a powerful personal anecdote from Shri Alok Prem Nagar, who described attending a Gram Sabha…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
1 argument · 119 words per minute · 9 words · 4 seconds
Argument 1
Welcome to Vijay Shekhar Sharma (Speaker 1)
EXPLANATION
The host opens the session by inviting the audience to welcome Mr. Vijay Shekhar Sharma, signalling the start of the discussion.
EVIDENCE
The opening line explicitly greets the audience and introduces Mr. Sharma as the main speaker [1].
MAJOR DISCUSSION POINT
Opening and Acknowledgment
V
Vijay Shekhar Sharma
12 arguments · 207 words per minute · 2,135 words · 616 seconds
Argument 1
Concentration of AI talent in India makes the country the world’s AI centre (Vijay Shekhar Sharma)
EXPLANATION
Sharma asserts that India now hosts the majority of global AI expertise, positioning the nation as the central hub for AI development worldwide. He emphasizes national pride in this concentration of talent.
EVIDENCE
He states that “everybody who is an Indian must be very proud that all the AI people in the world are in one city and one country” and later repeats that “Everybody who is somebody in AI is right now in this country” [2][6]. He also declares India as “the centre of universe of AI gravity” towards the end of his speech [60-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sharma’s claim aligns with observations that India is poised to be the biggest beneficiary of AI due to its large talent pool and population, as discussed in [S7].
MAJOR DISCUSSION POINT
India as a Global AI Hub
Argument 2
Government programmes like Startup India and AI India have catalysed this leadership (Vijay Shekhar Sharma)
EXPLANATION
Sharma credits the Indian government, especially the Prime Minister, for creating an enabling environment through initiatives such as Startup India and the newer AI India programme, which have accelerated the country’s AI leadership.
EVIDENCE
He references the Prime Minister’s role in “bringing the excitement of AI” and compares it to the earlier success of “Startup India” ten years ago, noting that the same leadership now drives “AI India” [7][8][9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He credits the Prime Minister’s initiatives such as Startup India and the newer AI India programme for driving AI momentum, which is documented in [S4] and [S3].
MAJOR DISCUSSION POINT
India as a Global AI Hub
Argument 3
AI can analyse credit risk and reach corner‑case borrowers, expanding credit to the “last person” (Vijay Shekhar Sharma)
EXPLANATION
Sharma argues that AI’s analytical power can assess creditworthiness for borrowers who are currently excluded, thereby extending financial services to the most underserved individuals.
EVIDENCE
He explains that “access to credit creates wealth” but requires “insights and abilities to confirm whether this money will come back” and that AI will handle “corner cases where it should not go or it should go,” leading to greater financial inclusion [32][33][34][35][36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of AI in handling credit-risk corner cases and expanding financial inclusion is described in [S4] and reinforced in [S3].
MAJOR DISCUSSION POINT
AI’s Role in Financial Inclusion
Argument 4
Smartphone‑driven finance proved mass adoption is possible; AI will extend that reach (Vijay Shekhar Sharma)
EXPLANATION
He uses the rapid uptake of QR‑code payments and Paytm as evidence that digital financial tools can achieve nationwide penetration, suggesting AI will follow a similar path of mass adoption.
EVIDENCE
Sharma recounts his meeting with the government about QR-code payments, the skepticism about common-man adoption, and the subsequent widespread use of Paytm across the country, noting that “now today, in every nook and corner of the country, we can see the payments reaching and completing itself” [17-23]. He also links this to the fact that “every smartphone can now use power of AI” [25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sharma’s analogy to QR-code and Paytm adoption and the democratisation of AI via smartphones is highlighted in [S4] and [S3].
MAJOR DISCUSSION POINT
AI’s Role in Financial Inclusion
Argument 5
India must build its own foundation models and specialised LLMs for the Global South (Vijay Shekhar Sharma)
EXPLANATION
Sharma stresses the strategic need for India to develop indigenous foundation models and domain‑specific large language models that address the unique challenges of the Global South, rather than relying solely on foreign models.
EVIDENCE
He states that “we must… have a foundation model in India” and that building such models is essential for solving local and global problems, emphasizing the importance of sector-specific LLMs for the Global South [36][46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for indigenous foundation models and sector-specific LLMs for the Global South are documented in [S4], [S3] and expanded in [S8].
MAJOR DISCUSSION POINT
Indigenous Foundation Models and Sector‑Specific LLMs
Argument 6
Success of domestic ventures (e.g., Sarvam) shows the capability to create world‑class models (Vijay Shekhar Sharma)
EXPLANATION
He cites the achievement of the Indian startup Sarvam in building a foundation model as proof that Indian firms possess the talent and resources to compete globally in AI model development.
EVIDENCE
Sharma mentions “my fellow entrepreneur Sarvam has done the job” and calls it an “acknowledgement that we can build it,” highlighting domestic capability to produce world-class technology [36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sharma cites Sarvam’s achievement as proof of Indian capability, which is recorded in [S4] and [S3].
MAJOR DISCUSSION POINT
Indigenous Foundation Models and Sector‑Specific LLMs
Argument 7
AI will generate abundance rather than merely cut jobs; the choice is to ride the wave or be victimised (Vijay Shekhar Sharma)
EXPLANATION
He argues that AI should be seen as a catalyst for new opportunities and economic abundance, and that societies must decide whether to harness it proactively or suffer its disruptive effects.
EVIDENCE
He explicitly says “it is not a question of whether there will be AI led job reduction it is rather a question of there will be AI led abundance and are you on the riding the wave or are you getting victimized on the wave” [59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The abundance narrative and the choice between embracing AI or being victimised are outlined in [S4] and [S3].
MAJOR DISCUSSION POINT
AI‑Driven Industry Transformation and Job Landscape
Argument 8
Past tech shifts (feature‑phone to smartphone) illustrate the need to adapt early (Vijay Shekhar Sharma)
EXPLANATION
Sharma reflects on his own experience with feature‑phone value‑added services in 2010, noting that anticipating the smartphone revolution was crucial, thereby illustrating the importance of early adaptation to disruptive technologies.
EVIDENCE
He recounts his 2010 feature-phone VAS business, his hesitation to IPO because of the impending smartphone wave, and how the smartphone’s capabilities reshaped the market [59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sharma’s reflection on the feature-phone to smartphone transition as a lesson for early adoption appears in [S4].
MAJOR DISCUSSION POINT
AI‑Driven Industry Transformation and Job Landscape
Argument 9
AI can be applied to cattle management, farming, and remote healthcare, turning call‑centres into service platforms (Vijay Shekhar Sharma)
EXPLANATION
He envisions AI extending beyond finance to sectors such as agriculture, livestock, and health, where AI‑enabled call centres could provide remote diagnostics and services, thereby creating new value chains.
EVIDENCE
Sharma references a conversation about using AI for “cattle” and “agriculture,” and then expands the idea to “machines… plants… agriculture” and describes how a call centre could evolve into a healthcare provider for remote monitoring [36][59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of AI for agriculture, livestock health, and evolving call centres into healthcare providers are provided in [S8] and [S4].
MAJOR DISCUSSION POINT
Diversified Use‑Cases: Agriculture, Livestock, Healthcare
Argument 10
Leveraging India’s demographic dividend turns it into a “technology dividend” (Vijay Shekhar Sharma)
EXPLANATION
He claims that India’s large, youthful population can be transformed into a technological advantage, accelerating AI adoption and innovation across the country.
EVIDENCE
He declares “the first time our demographic dividend will also become the demographic technology dividend” and notes that “the capability of our young… will aid to the propagation of AI unlike ever before” [37][38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of converting India’s demographic dividend into a technology dividend is discussed in [S7] and echoed in [S4].
MAJOR DISCUSSION POINT
Diversified Use‑Cases: Agriculture, Livestock, Healthcare
Argument 11
QR‑code and Paytm adoption demonstrated how new tech spreads from elite to mass users (Vijay Shekhar Sharma)
EXPLANATION
Sharma uses the rollout of QR‑code payments and Paytm as a case study showing how a technology initially perceived as complex can achieve widespread acceptance across all socioeconomic groups.
EVIDENCE
He narrates his discussion with a government official about QR-code payments, the skepticism about common-man understanding, and the eventual universal adoption of Paytm, illustrating the diffusion process [17-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rapid diffusion of QR-code payments and Paytm across all socioeconomic groups is detailed in [S4] and [S3].
MAJOR DISCUSSION POINT
Historical Analogies of Tech Adoption
Argument 12
AI is positioned to follow the same trajectory, becoming embedded in everyday life (Vijay Shekhar Sharma)
EXPLANATION
He predicts that AI will move from niche applications to being a routine part of daily activities, similar to how messaging and payment platforms became ubiquitous.
EVIDENCE
He states that AI will go beyond “a messaging platform or a payment platform” to “adding the capability in your everyday life,” and emphasizes that “every smartphone can now use power of AI” [39-41][25][59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sharma’s prediction that AI will become a routine part of daily life, similar to messaging and payments, is noted in [S4] and [S3].
MAJOR DISCUSSION POINT
Historical Analogies of Tech Adoption
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Overall Assessment

The transcript contains only a brief opening remark by Speaker 1 welcoming Vijay Shekhar Sharma and a lengthy address by Sharma covering India’s AI leadership, government programmes, financial inclusion, sector‑specific AI applications, and the need for indigenous models. No substantive thematic overlap or shared arguments are evident between the two speakers beyond the procedural acknowledgment of the speaker’s presence.

Minimal consensus – the only point of convergence is the procedural welcome, which has limited implications for the substantive topics under discussion.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript shows virtually no substantive disagreement. Speaker 1’s role is limited to a ceremonial welcome, and Vijay Shekhar Sharma’s extensive remarks are unchallenged. The only point of interaction is a shared affirmation of India’s emerging AI prominence.

Minimal – the dialogue is essentially a one‑sided endorsement, implying strong consensus on the narrative of India as an AI hub and on the need for indigenous AI development.

Partial Agreements
Both speakers acknowledge the presence and importance of Vijay Shekhar Sharma in the event, signalling a shared goal of highlighting India’s AI leadership. Speaker 1 opens the session by welcoming Mr. Sharma [1], and Sharma proceeds to emphasize that India now hosts the majority of global AI expertise and is the centre of AI gravity [2][6][60-68]. While Speaker 1 does not explicitly comment on AI, the act of welcoming the speaker aligns with the broader objective of celebrating India’s AI stature.
Speakers: Speaker 1, Vijay Shekhar Sharma
Welcome to Vijay Shekhar Sharma (Speaker 1) Concentration of AI talent in India makes the country the world’s AI centre (Vijay Shekhar Sharma)
Takeaways
Key takeaways
India is positioned as the world’s AI hub, with a concentration of AI talent and government support through initiatives like Startup India and AI India.
AI will significantly enhance financial inclusion by improving credit risk analysis and reaching “last‑person” borrowers, building on the mass adoption of smartphone‑driven finance.
Building indigenous foundation models and sector‑specific large language models (LLMs) is crucial; domestic successes such as Sarvam demonstrate India’s capability to create world‑class models for the Global South.
AI is expected to transform industries, generating abundance rather than merely cutting jobs; early adoption and adaptation are essential, as illustrated by past shifts from feature phones to smartphones.
India’s demographic dividend can become a “technology dividend,” leveraging its young population to accelerate AI adoption and innovation.
Diverse AI use‑cases are highlighted, including agriculture, livestock management, remote healthcare, and evolving call‑centres into service platforms.
Historical examples like QR‑code and Paytm adoption show how new technology spreads from elite to mass users; AI is anticipated to follow a similar trajectory and become embedded in everyday life.
The focus should be on creating AI solutions that address global challenges, not just local problems, positioning India as a leader for the Global South.
A call to action for all stakeholders to join the AI revolution and reshape India’s global perception.
Resolutions and action items
None identified
Unresolved issues
Specific roadmap and timeline for developing indigenous foundation models and sector‑specific LLMs.
Details on regulatory and policy frameworks needed to support AI‑driven financial inclusion and other industry transformations.
Strategies to manage potential job displacement and ensure inclusive AI‑led economic abundance.
Implementation plans for applying AI in agriculture, livestock, and remote healthcare at scale.
Suggested compromises
None identified
Thought Provoking Comments
I do believe that everybody who is an Indian must be very proud that all the AI people in the world are in one city and one country.
This bold claim reframes India from a consumer of AI technology to the global hub of AI talent, challenging the common perception that AI leadership resides mainly in the US or China.
Sets the tone of the entire talk, positioning India as the epicenter of AI. It primes the audience to view subsequent examples (QR code, financial inclusion, etc.) as evidence supporting this narrative, shifting the conversation from describing AI trends to asserting national pride and strategic advantage.
Speaker: Vijay Shekhar Sharma
When we launched the QR code, I went to the government and they asked if the common man would understand it. I showed my house‑help in Aligarh using Paytm with a photo, and that proved the technology could reach every corner of the country.
The anecdote illustrates how a seemingly sophisticated digital tool can be adopted at the grassroots level, highlighting the interplay between technology, policy, and cultural acceptance.
Provides a concrete, relatable example that validates the earlier claim about India’s AI readiness. It transitions the discussion from abstract national pride to tangible proof of mass adoption, reinforcing the argument that AI can similarly permeate everyday life.
Speaker: Vijay Shekhar Sharma
Till 2025 AI was more of an individual experiential play. 2026 begins with a commitment and confidence that AI will bring capability in business and problems we never assumed could be solved.
Marks a temporal turning point, forecasting a shift from consumer‑focused AI to enterprise‑wide transformation, and challenges listeners to rethink AI’s role beyond personal assistants.
Creates a forward‑looking pivot in the speech, moving the audience’s focus from current usage to future strategic planning. It opens space for discussing sector‑specific applications such as finance, agriculture, and healthcare.
Speaker: Vijay Shekhar Sharma
Access to credit creates wealth, but assessing credit risk is complex. AI will handle corner cases, making financial inclusion possible for the last person in the country.
Links AI directly to socioeconomic impact, proposing a solution to a long‑standing development challenge—financial inclusion—through advanced risk analytics.
Deepens the conversation by connecting technology to a concrete social outcome. It invites stakeholders (banks, policymakers) to envision AI‑driven credit models, thereby broadening the discussion from tech hype to public‑policy relevance.
Speaker: Vijay Shekhar Sharma
We must not only build foundation models; we must build models that solve problems for the Global South. India’s demographic dividend can become a ‘demographic‑technology dividend.’
Challenges the prevailing focus on generic large language models by emphasizing purpose‑built AI for emerging‑market challenges, and reframes demographic advantage as a source of AI innovation.
Introduces a nuanced perspective that shifts the narrative from merely replicating Western models to creating indigenous, problem‑oriented AI. This reorients the audience toward a mission‑driven research agenda.
Speaker: Vijay Shekhar Sharma
Building LLMs is like building engines; the real value is in the vehicles—applications such as call‑center AI, healthcare support, and sector‑specific agents.
Uses a vivid engineering analogy to differentiate between foundational technology and its end‑use products, emphasizing the importance of application development over model creation alone.
Redirects the discussion toward ecosystem development—start‑ups, product teams, and industry partners—highlighting where investment and talent should flow. It subtly critiques a model‑centric approach and encourages a more holistic AI strategy.
Speaker: Vijay Shekhar Sharma
It is not a question of AI‑led job reduction; it is a question of AI‑led abundance. Are you riding the wave or being victimised by it?
Poses a reframed narrative around AI’s impact on employment, shifting from fear of displacement to opportunity for productivity and new value creation.
Serves as a rhetorical turning point that addresses a common anxiety, thereby calming potential resistance and motivating the audience to view AI adoption as a growth opportunity rather than a threat.
Speaker: Vijay Shekhar Sharma
India will be the use‑case capital of the world and will build more LLMs than ever before, not just for India but for global problems.
Projects India as both a testing ground and a leading producer of AI models, merging the ideas of domestic relevance and global leadership.
Culminates the speech by reinforcing the earlier themes of national pride, capability, and responsibility. It leaves the audience with a clear, aspirational vision that ties together all previous points.
Speaker: Vijay Shekhar Sharma
Overall Assessment

The speech is structured around a series of high‑impact statements that progressively build a narrative: starting with a bold claim of India’s AI dominance, moving through relatable proof points of technology adoption, forecasting a strategic shift from individual to enterprise AI, and finally linking AI to societal challenges like financial inclusion and job creation. Each thought‑provoking comment acts as a pivot, either introducing a new thematic strand (e.g., AI for credit risk) or reframing existing concerns (e.g., AI‑led abundance vs job loss). Collectively, these comments steer the audience from admiration of past achievements toward a forward‑looking, purpose‑driven agenda, encouraging stakeholders to view AI not merely as a tool but as a catalyst for national and global transformation.

Follow-up Questions
Will India build its own foundation large language models (LLMs) or focus on building agents on top of existing models?
An explicit question raised about the strategic direction for AI development in India, crucial for determining investment and research priorities.
Speaker: Vijay Shekhar Sharma
How can AI be leveraged to improve financial inclusion, particularly in credit risk assessment and reaching underserved corner cases?
He highlighted AI’s potential to expand credit access, indicating a need for research into AI-driven credit evaluation methods.
Speaker: Vijay Shekhar Sharma
What are the potential applications of AI in agriculture, livestock, and horticulture, and how can they be developed within India?
He referenced discussions on using AI for cattle and agriculture, suggesting a research agenda for sector‑specific AI solutions.
Speaker: Vijay Shekhar Sharma
How can sector‑specific AI models (e.g., for call centers, healthcare) be created to solve problems for the Global South?
He emphasized building models that address specific industry challenges, indicating a need to explore tailored AI model development.
Speaker: Vijay Shekhar Sharma
What will be the impact of AI on call‑center jobs and broader employment, and how can AI‑driven abundance be harnessed rather than causing displacement?
He raised concerns about job disruption versus abundance, prompting research into labor market effects and reskilling strategies.
Speaker: Vijay Shekhar Sharma
How can India become a hub for building multiple LLMs tailored to various usage segments, and what infrastructure and ecosystem are required?
He projected India as a “use‑case capital” for LLMs, indicating a need to study the technical and policy infrastructure needed.
Speaker: Vijay Shekhar Sharma
How can the ‘demographic technology dividend’ be realized by leveraging India’s young population for AI adoption and innovation?
He linked demographic advantage to technology adoption, suggesting research into education, training, and adoption pathways.
Speaker: Vijay Shekhar Sharma
What regulatory and ethical frameworks are needed for deploying AI in financial services and other critical sectors in India?
His discussion of credit and financial inclusion implies the necessity of governance research to ensure responsible AI use.
Speaker: Vijay Shekhar Sharma
How can AI models be adapted to local languages and cultural contexts to serve Indian users effectively?
He stressed building AI for India’s unique demographic, highlighting the need for multilingual and culturally aware model research.
Speaker: Vijay Shekhar Sharma
What are the best practices for integrating AI capabilities into existing smartphone ecosystems to reach the last person in rural and underserved areas?
He mentioned AI being accessible via smartphones, indicating a research need on deployment strategies for widespread adoption.
Speaker: Vijay Shekhar Sharma

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Speaker 1 introducing Mr Takahito Tokita, President and CEO of Fujitsu, as the keynote speaker [1]. Tokita greeted the audience, said he was honored to share Fujitsu’s AI vision, and thanked the listeners [2-4]. He highlighted Fujitsu’s four-decade legacy of pioneering AI from research to practical applications, framing this within the company’s purpose to foster a sustainable world through trusted innovation [5-9]. Tracing its origins to 1935, Fujitsu evolved from communications equipment to Japan’s first computer, later delivering world-class supercomputers such as K-Computer and Fugaku, and is now advancing power-efficient CPUs and quantum computing, with the goal of a 1,000-qubit machine by the end of March [10-16]. Throughout this evolution, Tokita emphasized a consistent human-centric philosophy that places people at the core of its innovations [17-18]. The firm’s current R&D concentrates on five technology pillars: computing, networking, AI, data and security, and converging technologies that integrate them [19]. AI is presented as a primary catalyst for addressing societal challenges, and Tokita repeatedly stressed that a powerful, trusted AI infrastructure is essential for fully integrating AI into society and business [22-31]. He described Fujitsu’s vision of an AI-driven society as one where AI augments, rather than replaces, uniquely human capabilities such as creativity, critical thinking, and complex judgment [36-39]. To realize this vision, Fujitsu commits to collaborating with industry leaders, academic researchers, and governments to develop standards, ethics, and governance frameworks that keep AI aligned with humanity’s best interests [40-41]. The company also expressed confidence that Japan would serve as an ideal host for an upcoming AI Summit, inviting global participants to discuss future AI-enabled societies [42-43].
Concluding his remarks, Tokita introduced Fujitsu’s Chief Technology Officer, Vivek Mahajan, signaling a transition to a deeper discussion of the AI strategy [44]. Speaker 1 then announced Mr Mahajan, repeating his title, which underscored his central role in the forthcoming technical presentation [45]. Overall, the discussion outlined Fujitsu’s historical achievements, its human-focused AI roadmap, and its intent to shape responsible AI adoption through partnerships and international collaboration [5-9][22-31][36-39].


Keypoints

Fujitsu’s long-standing technological pedigree and AI leadership – The company traces its roots back to 1935, highlights milestones such as Japan’s first computer, world-class supercomputers K-Computer and Fugaku, and current work on power-efficient CPUs and a 1,000-qubit quantum machine, underscoring a 40-year AI legacy [5-16].


Human-centric, sustainable AI vision requiring trusted infrastructure – Tokita stresses that Fujitsu’s philosophy centers on people, that a “powerful and trusted AI infrastructure” is required for AI to be fully integrated into society and business, and that this infrastructure is essential for addressing societal challenges [17-24].


AI as an augmentative tool governed by ethics and standards – The CEO states that AI should not replace humans but should amplify uniquely human capabilities such as creativity and judgment, and calls for collaboration with industry, academia, and governments to establish standards, ethics, and governance that keep AI aligned with humanity’s best interests [36-41].


Invitation to Japan for an AI Summit and continuation of the discussion – Tokita expresses confidence that Japan is an ideal host for the upcoming AI Summit, invites global participants to join, and hands over to CTO Vivek Mahajan for deeper technical details [42-44].


Overall purpose/goal


The remarks aim to showcase Fujitsu’s AI heritage and technological foundation, articulate a responsible, people-first AI strategy, and rally global partners to co-create trustworthy AI solutions while promoting Japan as the venue for the forthcoming AI Summit [2-4].


Overall tone


The tone is formal, confident, and forward-looking throughout: it opens with a courteous greeting and pride in the company’s legacy, moves into earnest emphasis on trustworthy, human-centric AI, and concludes with an inviting, collaborative spirit toward an international summit. The tone remains consistently optimistic and collaborative, with a slight shift from descriptive (history and capabilities) to persuasive (ethical vision and invitation) toward the end [2][36][42].


Speakers

Takahito Tokita – President and CEO, Fujitsu; expertise in AI, technology strategy and leadership. [S1][S2]


Speaker 1 – Event host/moderator (announcer who introduced the keynote speaker). [S3][S5]


Additional speakers:


Vivek Mahajan – Chief Technology Officer (CTO), Fujitsu; expertise in AI strategy and technology development. (mentioned in transcript)


Full session reportComprehensive analysis and detailed insights

The session opened with the moderator inviting the audience to welcome Mr Takahito Tokita, President and CEO of Fujitsu [1].


Tokita greeted the listeners and expressed honor at sharing Fujitsu’s AI vision [2-4]. He stated the company’s purpose: to create a more sustainable world by building trust in society through innovation, a purpose that guides management, inspires employees, and shapes every product and service [8-9].


He traced Fujitsu’s history from its 1935 founding in communications equipment, through the development of Japan’s first computer in the 1950s, to the creation of world-class supercomputers K-Computer and Fugaku [10-15]. Today the firm is developing highly power-efficient CPUs and pursuing quantum-computing research, aiming to deliver a 1,000-qubit machine by the end of March [16].


Throughout, Fujitsu has followed a human-centric philosophy that places people at the core of innovation [17-18]. Its R&D focuses on five inter-linked pillars: computing, networking, AI, data and security, and convergent technologies that integrate them [19-20]. Building on this foundation, the company collaborates with partners and customers across industries to co-create solutions for societal challenges [21-24].


He repeatedly emphasized that a powerful and trusted AI infrastructure is indispensable for fully embedding AI into society and business [22-31].


Tokita’s vision for an AI-driven future is human-augmented, not human-replacing: AI must not threaten autonomy but should amplify uniquely human capabilities such as creativity, critical thinking, and complex judgment [36-39]. He stressed the need for global collaboration with industry, academia, and governments to establish standards, ethics, and governance that keep AI aligned with humanity’s best interests [40-41].


He noted that Japan would be an ideal host for an upcoming AI Summit and invited participants to discuss how AI can shape a future society [42-43].


Finally, he introduced Fujitsu’s Chief Technology Officer, Vivek Mahajan, who would detail the company’s AI strategy and underlying technologies [44]. The moderator then announced the CTO [45].


Session transcript
Complete transcript of the session
Speaker 1

Please welcome Mr. Takahito Tokita, the President and CEO of Fujitsu.

Takahito Tokita

Hello, hello everyone. I’m Takahito Tokita, CEO of Fujitsu. It’s a very honor to share our vision for AI to you, all of you today. Thank you very much. For 40 years, Fujitsu has pioneered AI from research and development to practical application. I will provide an overview of our technology and social commitment. Following my remarks, our CTO, Vivek Mahajan, details our AI strategy and powerful technologies that underpin it. At Fujitsu, our purpose is to make the world more sustainable by building trust in society through innovation. This single purpose guides our management, inspires our people, and shapes our every product and the technologies and services we create. Our story began in 1935. We started by making communication equipment.

and this expertise led to Japan’s first computer in the 1950s. Since then, we have powered economic growth with our critical technology and services. This long journey of innovation led to K-Computer and Fugaku, two world-class supercomputers. This journey continues as we now develop highly power-efficient CPUs and pioneer the field of quantum computing. We are on track to develop 1,000-qubit machines by the end of March. Thank you. Throughout our history, one thing has remained constant, our focus on people. This human-centric philosophy has guided us as we adapt to the changing needs of society. To create a sustainable future, we focus our research and development on the five key technology areas, computing, networking, AI, data and security, and converging technology that brings all of them together.

Based on this strong technology, we have created a new technology foundation. We are working closely with our partners and customers across all industries to co-create solutions and address societal issues and challenges. AI is a key driver of these challenges. To fully integrate AI into our society and businesses, a powerful and trusted AI infrastructure is essential.


To fully integrate AI into our society and businesses, a powerful and trusted AI infrastructure is essential: the powerful computing power. Our vision for an AI-driven society is precise. AI must not be a force that replaces people or becomes a threat to human autonomy. Its foundation, its fundamental role, must be to augment the human capabilities that are uniquely human: our creativity, our critical thinking, and our complex judgment. We are deeply committed to working with leaders across all industries, pioneering researchers in academia, and government bodies worldwide. With these strong partnerships, we can collectively establish the standards, ethics, and governance needed to ensure that AI constantly serves the best interests of humanity. We believe Japan will be an ideal host for this AI Summit.

We would be delighted to welcome you all to our country to discuss the future society we can create with AI together. Now, I’d like to introduce our CTO, Vivek Mahajan.

Speaker 1

Vivek Mahajan, CTO.

Related Resources
Knowledge base sources related to the discussion topics (10)
Factual Notes
Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Moderator invited the audience to welcome Mr Takahito Tokita, President and CEO of Fujitsu”

The knowledge base identifies Takahito Tokita as President and CEO of Fujitsu, confirming his role in the session [S1].

Confirmed (high)

“Tokita greeted the listeners and expressed honor at sharing Fujitsu’s AI vision”

In the keynote transcript Tokita says, “It’s a very honor to share our vision for AI…” confirming his greeting and expression of honor [S2].

Additional Context (medium)

“He stated the company’s purpose: to create a more sustainable world by building trust in society through innovation”

The knowledge base notes that Fujitsu’s AI vision is linked to creating a sustainable future, adding nuance to the reported purpose statement [S1].

Additional Context (medium)

“Historical overview of Fujitsu’s AI work (founding in 1935, early computers, supercomputers)”

The source highlights Fujitsu’s 40-year history of pioneering AI from research to practical application, providing additional background to the company’s long-term technological development [S2].

External Sources (33)
S1
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned -Vivek Mahajan: CTO (Chief Technology Officer) …
S2
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — 676 words | 101 words per minute | Duration: 400 seconds AI must not be a force that replaces people or becomes a thre…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — Suppose AI (as with previous technologies) frees educators from focusing solely on repetitive memorisation and routine p…
S7
Enhancing rather than replacing humanity with AI — The narrative around artificial intelligence has grown heavy with anxiety. Open any news site, and you’ll hear concerns …
S8
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — 1953 words | 157 words per minute | Duration: 741 seconds AI commerce. What I’m going to talk about is something that …
S9
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — No disagreements identified in the transcript These key comments shaped the discussion by transforming an abstract conc…
S10
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Ahmad Bhinder: Hello. Good afternoon, everybody. I see a lot of faces from all around the world, and it is really, re…
S11
Ethics and AI | Part 3 — In November 2021, UNESCO adopted theRecommendation on the Ethics of Artificial Intelligence, marking its first global st…
S12
Shaping the Future AI Strategies for Jobs and Economic Development — His Excellency Sokeng emphasizes that successful AI governance requires honest collaboration between industry, governmen…
S13
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Kyoko Yoshinaga:Thank you, Michael. Welcome to Japan. I’m Kyoko in Kyoto. Okay. So let me, first of all, give you a brie…
S14
AI Governance Dialogue: Presidential address — ## Summit Context and Speakers ### Summit Background – **LJ Rich**: Summit moderator/host
S15
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — No disagreements identified in the transcript These key comments shaped the discussion by transforming an abstract conc…
S16
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — Impact:This comment elevates the discussion from current AI infrastructure challenges to future computational paradigms….
S17
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — -Vivek Mahajan: CTO of Fujitsu (mentioned as the next keynote speaker but did not speak in this transcript) Tewari’s pr…
S18
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — Tokita begins by highlighting Fujitsu’s four decades of experience in artificial intelligence development, from initial …
S19
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — Fujitsu’s historical foundation and evolution in technology Started in 1935 making communication equipment, which led t…
S20
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — Mahajan establishes Fujitsu’s credibility by highlighting the company’s long history of technological innovation and lea…
S21
Ethics and AI | Part 2 — 7.Ethics is based on well-founded standards of right and wrong that prescribe what humans ought to do, usually in terms …
S22
What Is Sci-Fi, What Is High-Tech? / Davos 2025 — Vardi stresses the importance of maintaining human judgment and ethical considerations alongside technological advanceme…
S23
Ethical AI_ Keeping Humanity in the Loop While Innovating — Debjani questions the current focus on AGI (Artificial General Intelligence) as being about control rather than augmenta…
S24
Enhancing rather than replacing humanity with AI — Development is guided by principles of dignity, fairness, and flourishing, rather than solely by technical capabilities….
S25
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Kyoko Yoshinaga:Thank you, Michael. Welcome to Japan. I’m Kyoko in Kyoto. Okay. So let me, first of all, give you a brie…
S26
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — -Aman Khanna: Vice President of the Asia Group (mentioned as moderator for upcoming fireside chat session) -Moderator: …
S27
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — -Vivek Mahajan: CTO of Fujitsu (mentioned as the next keynote speaker but did not speak in this transcript) Tewari’s pr…
S28
How Trust and Safety Drive Innovation and Sustainable Growth — And I come from the IAPP. If you don’t know the IAPP, we are a global professional association, a not -for -profit but a…
S29
Reskilling for the Intelligent Age / Davos 2025 — Vimal Kapur emphasizes the social responsibility of companies to create jobs and provide internship opportunities. He ar…
S30
Open Forum #44 Building Trust with Technical Standards and Human Rights — Gbenga Sesan: Thanks, that’s that’s a fantastic question, actually, because one of one of the reasons this is fantasti…
S31
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — The technical requirements for trustworthy AI emerged through multiple perspectives. Valerian Ghez from photonic quantum…
S32
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — In 2020, Malaysia established a cybersecurity strategy with a five-year plan to create a secure, trusted, and resilient …
S33
AI That Empowers Safety Growth and Social Inclusion in Action — “So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attenti…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
T
Takahito Tokita
4 arguments | 101 words per minute | 676 words | 400 seconds
Argument 1
Fujitsu has a 40‑year AI pioneering legacy, development of world‑class supercomputers (K‑Computer, Fugaku), power‑efficient CPUs, and a roadmap to 1,000‑qubit quantum machines by March.
EXPLANATION
Tokita outlines Fujitsu’s long‑standing experience in artificial intelligence, highlighting key milestones such as pioneering AI for four decades, building world‑leading supercomputers, and advancing next‑generation hardware. He also signals a future quantum‑computing target, showing the company’s forward‑looking research agenda.
EVIDENCE
He stated that for 40 years Fujitsu has pioneered AI from research to practical application, highlighted the development of world-class supercomputers K-Computer and Fugaku, noted ongoing work on power-efficient CPUs, and announced a target to build 1,000-qubit quantum machines by the end of March [5][14-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fujitsu’s 40-year AI history and its K-Computer, Fugaku supercomputers and plan for 1,000-qubit quantum machines are documented in [S2] and the 40-year legacy is noted in [S1].
MAJOR DISCUSSION POINT
Historical and technological foundation of Fujitsu
Argument 2
AI must augment uniquely human capabilities—creativity, critical thinking, complex judgment—rather than replace people or threaten autonomy.
EXPLANATION
Tokita stresses that AI should serve as a tool that enhances human strengths instead of displacing humans or undermining their freedom. The focus is on preserving human autonomy while leveraging AI to boost creativity, analytical thinking, and nuanced decision‑making.
EVIDENCE
Tokita emphasized that AI must not replace people or threaten human autonomy and should instead augment uniquely human capabilities such as creativity, critical thinking and complex judgment [37-39].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tokita’s claim that AI should augment rather than replace humans is supported by [S2]; additional context on AI enabling focus on creativity and critical thinking appears in [S6], and broader discussion of enhancing humanity is provided in [S7].
MAJOR DISCUSSION POINT
Human‑centric AI vision
Argument 3
Fujitsu is partnering with industry leaders, academia, and governments to co‑create solutions and jointly establish AI standards, ethics, and governance that serve humanity’s best interests.
EXPLANATION
The CEO describes a collaborative approach that brings together diverse stakeholders to develop AI applications responsibly. Through these partnerships, Fujitsu aims to shape common standards, ethical guidelines, and governance frameworks that align AI development with societal good.
EVIDENCE
He said Fujitsu is deeply committed to working with industry leaders, academia and governments worldwide, and that through these partnerships they aim to co-create solutions and jointly establish standards, ethics and governance for AI [40-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaboration with industry, academia, and governments to set AI standards and ethics is described in [S1] (also reiterated in [S2]).
MAJOR DISCUSSION POINT
Collaboration, standards, and ethical governance
Argument 4
Japan is proposed as an ideal host for an AI Summit to discuss and shape a future AI‑driven society together.
EXPLANATION
Tokita proposes that Japan host an international AI Summit, positioning the country as a venue for global dialogue on AI’s role in society. The invitation underscores Japan’s commitment to leading conversations on responsible AI development.
EVIDENCE
Tokita expressed that Japan would be an ideal host for an AI Summit and invited participants to come to Japan to discuss shaping an AI-driven society together [42-43].
MAJOR DISCUSSION POINT
Invitation to host an AI Summit in Japan
S
Speaker 1
2 arguments | 648 words per minute | 86 words | 7 seconds
Argument 1
Introduction of the CEO to present Fujitsu’s vision and background.
EXPLANATION
The moderator welcomes Takahito Tokita, establishing his role as President and CEO and setting the stage for his presentation of Fujitsu’s AI vision. This brief introduction signals the transition to the CEO’s remarks.
EVIDENCE
Speaker 1 opened the session by welcoming Mr Takahito Tokita, President and CEO of Fujitsu, thereby introducing the CEO to the audience [1].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The formal introduction of Takahito Tokita as President and CEO is recorded in [S2] and [S1].
MAJOR DISCUSSION POINT
Opening remarks and CEO introduction
Argument 2
Announces the upcoming remarks by CTO Vivek Mahajan on Fujitsu’s AI strategy and technologies.
EXPLANATION
The moderator signals the next part of the program, indicating that the CTO will elaborate on the technical aspects of Fujitsu’s AI strategy. This hand‑off prepares the audience for a deeper dive into the company’s technology roadmap.
EVIDENCE
Speaker 1 announced that the next speaker would be CTO Vivek Mahajan, who will present Fujitsu’s AI strategy and underlying technologies [44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vivek Mahajan’s role as CTO presenting Fujitsu’s AI strategy is confirmed in the keynote summaries [S8] and [S9].
MAJOR DISCUSSION POINT
Transition to technical AI strategy presentation
AGREED WITH
Takahito Tokita
Agreements
Agreement Points
Both speakers indicate that CTO Vivek Mahajan will present Fujitsu’s AI strategy and underlying technologies after the CEO’s remarks.
Speakers: Speaker 1, Takahito Tokita
Announces the upcoming remarks by CTO Vivek Mahajan on Fujitsu’s AI strategy and technologies.
“Following my remarks, our CTO, Vivek Mahajan, details our AI strategy and powerful technologies that underpin it.”
Speaker 1 introduces the CTO and states that he will speak on the AI strategy [44]; Tokita later confirms that the CTO will detail the AI strategy and technologies after his own remarks [7].
POLICY CONTEXT (KNOWLEDGE BASE)
The summit agenda and prior transcripts explicitly list Vivek Mahajan as the next keynote speaker to outline Fujitsu’s AI strategy, confirming this expectation [S15][S17].
Similar Viewpoints
Both see the CTO’s presentation as the next logical step in the session, highlighting the importance of a dedicated technical exposition on AI after the CEO’s overview [44][7].
Speakers: Speaker 1, Takahito Tokita
Announces the upcoming remarks by CTO Vivek Mahajan on Fujitsu’s AI strategy and technologies.
“Following my remarks, our CTO, Vivek Mahajan, details our AI strategy and powerful technologies that underpin it.”
Unexpected Consensus
Overall Assessment

The only clear consensus between the speakers concerns the procedural hand‑off to CTO Vivek Mahajan for a deeper discussion of Fujitsu’s AI strategy. No substantive policy or vision‑level agreement is evident beyond this logistical point.

Limited consensus – agreement is confined to session structure rather than content, implying that while the participants are aligned on the agenda, there is little substantive convergence on AI ethics, partnerships, or societal impact within the provided excerpt.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript shows a largely harmonious exchange. The CEO delivers a vision‑setting speech, and the moderator provides introductory and transition remarks. No substantive conflict or divergent viewpoints are evident.

Minimal – the interaction is collaborative and complementary, implying that any policy or strategic discussions about AI, standards, or partnerships are presented without contestation. This suggests smooth consensus building for the topics addressed.

Partial Agreements
Both speakers work toward the same goal of smoothly transitioning the audience from the CEO’s overview to the CTO’s technical presentation, and they both acknowledge the importance of the CTO’s forthcoming remarks. The moderator (Speaker 1) signals the hand‑off while Tokita explicitly states that the CTO will follow his remarks, showing coordinated sequencing rather than a methodological conflict [1][44].
Speakers: Speaker 1, Takahito Tokita
Introduction of the CEO to present Fujitsu’s vision and background. Announces the upcoming remarks by CTO Vivek Mahajan on Fujitsu’s AI strategy and technologies.
Takeaways
Key takeaways
Fujitsu has a 40‑year legacy in AI, including development of world‑class supercomputers (K‑Computer, Fugaku), power‑efficient CPUs, and a roadmap to 1,000‑qubit quantum computers by March.
The company’s purpose is to build a sustainable, trustworthy AI‑driven society that augments uniquely human capabilities rather than replaces them.
Fujitsu emphasizes a human‑centric AI vision, focusing on augmenting creativity, critical thinking, and complex judgment.
Collaboration with industry leaders, academia, and governments is central to co‑creating solutions and establishing AI standards, ethics, and governance.
Japan is proposed as the ideal host for an AI Summit to discuss and shape the future AI‑driven society.
The upcoming segment will be presented by CTO Vivek Mahajan, covering Fujitsu’s AI strategy and underlying technologies.
Resolutions and action items
Proposal to host an AI Summit in Japan.
Unresolved issues
None identified
Suggested compromises
None identified
Thought Provoking Comments
Our purpose is to make the world more sustainable by building trust in society through innovation.
It frames Fujitsu’s entire AI agenda around a higher‑order societal goal rather than pure technology or profit, positioning sustainability and trust as the core metrics for success.
This statement set the overarching narrative for the talk, steering the audience away from a purely technical showcase toward a discussion of social impact. It primed listeners to evaluate subsequent technology announcements (e.g., supercomputers, quantum chips) through the lens of sustainability and trust.
Speaker: Takahito Tokita
AI must not be a force that replaces people or becomes a threat to human autonomy. Its foundation must be to augment the uniquely human capabilities of creativity, critical thinking, and complex judgment.
It directly challenges the common fear that AI will displace workers, and re‑positions AI as a collaborative tool that enhances human strengths, introducing an ethical stance into the technical discourse.
This pivot shifted the tone from a product‑centric description to an ethical dialogue, prompting the audience to consider governance, standards, and human‑centred design. It laid groundwork for later mentions of standards, ethics, and governance, and signaled a turning point toward responsible AI.
Speaker: Takahito Tokita
We are on track to develop 1,000‑qubit machines by the end of March.
Introducing a concrete, ambitious quantum‑computing milestone signals Fujitsu’s commitment to frontier research and positions the company as a leader in next‑generation computing infrastructure.
The announcement expanded the conversation from current AI workloads to future computational capabilities, hinting at how quantum advances could reshape AI performance. It sparked curiosity about timelines, feasibility, and potential applications, adding a forward‑looking dimension to the discussion.
Speaker: Takahito Tokita
We focus our research and development on five key technology areas—computing, networking, AI, data and security, and converging technology that brings all of them together.
By articulating a structured R&D portfolio, Tokita provides a clear roadmap that integrates disparate technology domains, emphasizing the importance of interdisciplinary convergence.
This clarified the strategic priorities for Fujitsu and helped the audience understand how various initiatives (e.g., supercomputers, AI platforms, quantum chips) fit into a cohesive ecosystem. It also guided subsequent questions toward how these pillars interact in practice.
Speaker: Takahito Tokita
We are deeply committed to working with leaders across all industries, pioneering researchers in academia, and government bodies worldwide to collectively establish standards, ethics, and governance needed to ensure AI constantly serves the best interests of humanity.
It underscores a collaborative, multi‑stakeholder approach to AI governance, moving beyond corporate self‑interest to a global responsibility framework.
This comment reinforced the earlier ethical stance, signaling that Fujitsu intends to be an active participant in shaping AI policy. It encouraged the audience to view Fujitsu as a partner in regulatory dialogue rather than just a technology vendor, potentially influencing future collaborations and policy discussions.
Speaker: Takahito Tokita
Overall Assessment

The discussion was driven almost entirely by Takahito Tokita’s opening remarks, which moved sequentially from Fujitsu’s historical achievements to a forward‑looking vision that intertwines cutting‑edge technology (supercomputers, quantum chips) with a strong ethical and societal narrative. Key comments—particularly those emphasizing sustainability, human‑centred AI, and collaborative governance—served as turning points that shifted the conversation from a technical showcase to a broader dialogue about responsibility and impact. Although there was little interactive exchange in the transcript, these pivotal statements shaped the audience’s expectations, framed the thematic scope for the upcoming CTO presentation, and positioned Fujitsu as both an innovator and a steward of AI’s societal role.

Follow-up Questions

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How AI Is Transforming Diplomacy and Conflict Management


Session at a glance
Summary, keypoints, and speakers overview

Summary

The MOVE 37 initiative, launched under the Belfer Center’s Emerging Tech Program, aims to examine how artificial intelligence can be integrated into diplomatic negotiation and governance processes [5][19-22]. Organizers emphasize that AI should augment, not replace, the fundamentally interpersonal nature of negotiations, which often involve dozens of parties, extensive documentation, and time-critical decisions [41-48][66-72].


They identify three technical challenges for AI in this domain: representing dynamic, strategic interactions, handling intentional misrepresentation, and defining success criteria for multi-party outcomes [94-100]. The panel stresses that human authority must remain central, with AI tools required to be modular, transparent, and scoped appropriately to support, rather than dominate, decision-making [117-121].


Gabriela Ramos described her experience negotiating the UNESCO recommendation on the ethics of AI, noting that AI helped process 55,000 public comments but that better tools for mapping country positions and understanding individual motivations would have been valuable [139-144][146-147]. Nandita Balakrishnan highlighted the public sector’s lag in AI adoption, the need for AI literacy across agencies, and ongoing projects that use AI to predict geopolitical events to demonstrate its practical value [165-173][194-202]. Robyn Scott presented survey data showing that over 90% of public servants view AI as a major opportunity, yet most pilots lack evaluation and many officials are unfamiliar with ethical frameworks, underscoring a skills gap and the risk of over-reliance on opaque systems [223-230][240-247][252-259]. She also cited a Swiss-led multilingual LLM project as an example of building AI tools that respect linguistic diversity and diplomatic contexts [424-425].


Charlie Posniak outlined a four-stage AI workflow (research, analysis, strategizing, and execution) that requires robust computational infrastructure and autonomous research agents to handle the massive data generated by negotiations [102-108][112-114]. He reaffirmed the commitments to keep humans in the loop, ensure tool transparency, and tailor augmentation to the specific institutional setting [117-121]. Across the discussion, participants agreed that responsible deployment of AI in diplomacy demands clear accountability, safeguards against data poisoning, and mechanisms to preserve cultural and linguistic representation [350-357][391-403]. The session concluded that while AI offers substantial potential to improve efficiency and insight in negotiations, its integration must be guided by rigorous evaluation, human oversight, and inclusive design to avoid unintended biases or power imbalances [432-438].


Keypoints


Major discussion points


AI as an augmentation tool for complex diplomatic negotiations – The panel stresses that modern negotiations involve massive information flows, multiple actors, and time pressure, and that AI can help manage documents, generate strategic options, and support real-time translation and analysis while keeping humans in the loop [19-26][41-70][78-85][102-110][113-115][117-121].


Technical and ethical challenges of AI deployment – Participants highlight the opacity of large language models, accountability, misrepresentation, cultural bias, data poisoning, and the danger of over-reliance (e.g., “sleeping at the wheel,” false-negative confidence) that must be mitigated through transparency, modular design, and rigorous validation [82-86][94-100][323-330][332-340][350-366][386-390].


Concrete capabilities and implementation pathways – The project aims to build tools for position-tracking, autonomous research agents, predictive geopolitics, and AI-literacy programs; it will develop evaluation methodologies, modular transparent systems, and training sandboxes to ensure responsible use [102-110][117-121][194-203][226-242][247-250].


Collaborative, interdisciplinary approach and stakeholder engagement – MOVE 37 brings together scholars, practitioners, diplomats, and technologists (UNESCO, SCSP, Apolitical, Swiss multilingual LLM effort) and conducts one-on-one interviews with negotiators worldwide to embed diverse cultural perspectives and practical insights into the design [4-7][9-15][18-23][267-274][424-425][391-403].


Overall purpose / goal


The discussion launches the MOVE 37 initiative within the Belfer Center’s Emerging Tech Program to explore how artificial intelligence can be responsibly integrated into diplomatic and negotiation processes. The team seeks to define research agendas, prototype AI-enabled tools, establish policy guidelines, and gather global input so that AI augments, rather than replaces, human decision-makers in high-stakes international affairs.


Overall tone and its evolution


Opening (0-30 min): Formal and optimistic, emphasizing opportunity and the ambition to pioneer a new policy frontier [4-7][19-26].


Middle (30-45 min): Technical and constructive, with detailed descriptions of potential AI applications and enthusiasm for interdisciplinary collaboration [78-85][102-110][194-203][226-242].


Later (45-55 min): Cautious and critical, foregrounding risks (opacity, bias, data-poisoning, over-reliance) and stressing the need for human authority and cultural sensitivity [323-340][350-366][386-390].


Closing (55-end): Hopeful yet realistic, acknowledging the complexity of the task, inviting continued partnership, and reaffirming the long-term, iterative nature of the project [430-442].


Overall, the conversation moves from a visionary introduction to a balanced appraisal that blends excitement about AI’s potential with sober recognition of the safeguards required for its use in diplomacy.


Speakers


J. Michael McQuade – Director of the Emerging Tech Program (MOVE 37 initiative) at the Belfer Center, Harvard Kennedy School; senior fellow in international policy and technology [S9].


Gabriela Ramos – Former Assistant Director-General for Social and Human Sciences at UNESCO; has held senior roles involving the G20, G7, and OECD [S12].


Nandita Balakrishnan – Director of Intelligence at the Special Competitive Studies Project (SCSP); former academic and senior roles in public- and private-sector intelligence.


Robyn Scott – CEO and co-founder of Apolitical; collaborator with Stanford HAI on AI-for-government initiatives.


Charlie Posniak – Full-time fellow / research fellow with the Emerging Tech Program at the Belfer Center [S4].


Slavina Ancheva – MPP student and research fellow at the Belfer Center, working on the MOVE 37 initiative [S7].


Audience – Various participants (e.g., Sam Dawes, senior advisor to the Oxford University AI Governance Initiative; Devika Rao, Indian classical-dance teacher; Arman, JPL South Asia staff) [S13].


Additional speakers:


Carme Artigas – Former co-chair of the UN AI Advisory Panel; Spain-India Ambassador for AI; acknowledged in the introduction though not present at the session.


Full session report
Comprehensive analysis and detailed insights

The session opened with J. Michael McQuade outlining his role at the Belfer Centre and the Emerging Tech Programme, which “teaches, trains and does research on subjects related to the applications of science and technology for international affairs” and brings together scholars, practitioners and students to explore the intersection of technology, science and geopolitics [1-3]. He announced the launch of the MOVE 37 initiative – a component of the Emerging Tech Programme created “to look at where emerging technologies are creating new policy frontiers, new opportunities to use technology to engage in policy, and the implications that technologies are creating for governance, geopolitics, global stability and global conflict” [4-5]. A panel of experts was introduced, including Gabriela Ramos (UNESCO, arriving shortly), Nandita Balakrishnan (Special Competitive Studies Project), Robyn Scott (Apolitical), and Belfer-Centre researchers Charlie Posniak and Slavina Ancheva [6-15]. Carme Artigas (former co-chair of the UN AI Advisory Panel and Spain-India AI Ambassador) was acknowledged as an integral part of starting the MOVE 37 work but could not attend in person, possibly joining via the livestream [31-33].


The purpose of the gathering was to launch a “major new project, specifically looking at the use of artificial intelligence in diplomacy and negotiation” and to invite collaborators, partners and community input because the work “is not something that is solely the purview of a small team in Cambridge” but requires global engagement [19-26][27-33]. After a brief overview of the problem, the panel would discuss how AI could be employed responsibly in high-stakes diplomatic negotiations [34-36].


Slavina Ancheva began by illustrating the hidden complexity of contemporary negotiations. She asked the audience to imagine a ten-item agenda that, in reality, is surrounded by “a lot of other factors that are happening both inside and outside that room” and noted that a negotiator may face “seven counterparts from seven political groups, seven different countries, … and … 27 other countries that you’re representing” [45-48]. She cited the experience of Carme Artigas, one of the chief negotiators of the EU AI Act and later of the UN AI Advisory Body, to show that even a single multilateral negotiation can generate “thousands of documents, transcripts, drafts” and must contend with “strategic elements… groupthink… time pressure” [66-73][74-78].


Building on this framing, the panel argued that AI can serve as an augmentation tool to manage the massive information flows, generate strategic options and provide real-time translation and analysis while preserving the interpersonal nature of diplomacy [78-85][102-110][113-115].


Charlie Posniak described a four-stage workflow (research, analysis, strategising, execution) that forms a cyclical, re-entrant process supporting the full negotiation lifecycle [102-108]. He also noted that the team is exploring “autonomous research agents, source-validation pipelines, counterpart-biography generators, gap-analysis tools, strategy sandboxes, red-team training, and real-time transcription/translation services” [109-115].
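The cyclical research, analysis, strategising, execution loop described above can be sketched as a minimal pipeline over a shared knowledge base. This is an illustrative sketch only; every function and field name here is hypothetical and not taken from the MOVE 37 project.

```python
# Minimal sketch of the re-entrant four-stage negotiation-support workflow.
# All names are hypothetical illustrations, not MOVE 37 code.
from typing import Callable

KnowledgeBase = dict

def research(kb: KnowledgeBase) -> None:
    # Build the evidence base: collect documents, transcripts, drafts.
    kb["evidence"].append(f"source-round-{kb['round']}")

def analysis(kb: KnowledgeBase) -> None:
    # Process the gathered information into structured findings.
    kb["findings"].append(f"finding-from-{kb['evidence'][-1]}")

def strategize(kb: KnowledgeBase) -> None:
    # Map preferences to outcomes using the accumulated findings.
    kb["strategy"] = f"plan-using-{len(kb['findings'])}-findings"

def execute(kb: KnowledgeBase) -> None:
    # In-room execution; its outcome feeds back into the knowledge base.
    kb["log"].append((kb["round"], kb["strategy"]))

STAGES: list[Callable[[KnowledgeBase], None]] = [research, analysis, strategize, execute]

def run_negotiation_support(rounds: int) -> KnowledgeBase:
    kb: KnowledgeBase = {"evidence": [], "findings": [], "strategy": None, "log": []}
    for r in range(1, rounds + 1):      # re-entrant: the cycle repeats each round
        kb["round"] = r
        for stage in STAGES:
            stage(kb)
    return kb

kb = run_negotiation_support(3)
print(kb["log"])
```

Each pass through the four stages enriches the shared knowledge base, mirroring the re-entrant character Posniak describes: outputs of one round of execution feed the next round of research.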


Robyn Scott reinforced the potential, noting that “over 90% think there is huge possibility in the public sector” yet most pilots lack evaluation and many officials are unfamiliar with ethical frameworks [223-230][240-247][252-259]. She introduced the “below/above the algorithm” heuristic, urging users to stay “above the algorithm” so that AI remains a tool rather than a driver of decisions [256-259].


The panel identified three technical challenges specific to diplomatic AI: (1) representing dynamic, strategic interactions that evolve over time; (2) handling intentional mis-representation and deception; and (3) defining success criteria for multi-party outcomes [94-100]. Charlie warned that relying solely on large language models (LLMs) is insufficient because “their fluency isn’t necessarily verifiable” and their opacity hampers accountability, prompting the need to integrate LLMs with the “80-year-old toolkit” of game theory, decision analysis and machine learning [80-88][89-92].
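To make the “80-year-old toolkit” concrete, the sketch below uses best-response analysis to find the pure-strategy Nash equilibria of a toy two-party negotiation game (concede versus hold firm). The payoff numbers are invented purely for illustration; real diplomatic modelling is far richer and multi-party.

```python
# Toy best-response analysis of a 2x2 negotiation game.
# Payoff values are invented for illustration only.
import itertools

ACTIONS = ["concede", "hold_firm"]

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("concede",   "concede"):   (3, 3),  # mutual compromise: a deal
    ("concede",   "hold_firm"): (1, 4),  # one side captures the surplus
    ("hold_firm", "concede"):   (4, 1),
    ("hold_firm", "hold_firm"): (0, 0),  # deadlock: talks collapse
}

def is_nash(a_row: str, a_col: str) -> bool:
    """Neither player can gain by unilaterally deviating."""
    row_u, col_u = payoffs[(a_row, a_col)]
    row_ok = all(payoffs[(d, a_col)][0] <= row_u for d in ACTIONS)
    col_ok = all(payoffs[(a_row, d)][1] <= col_u for d in ACTIONS)
    return row_ok and col_ok

equilibria = [p for p in itertools.product(ACTIONS, ACTIONS) if is_nash(*p)]
print(equilibria)  # → [('concede', 'hold_firm'), ('hold_firm', 'concede')]
```

With these payoffs the only stable outcomes are asymmetric ones where exactly one side holds firm, which is precisely the kind of structural insight that game-theoretic methods add on top of fluent but unverifiable LLM output.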


Human-in-the-loop governance was repeatedly stressed. The panel committed to keeping “human authority… central” and to designing tools that are “modular and transparent so that you can see what’s happening at each stage of the process” [117-121]. The three commitments were:


1. Human authority remains central [117-119]


2. Tools are modular and transparent [119-121]


3. AI provides scoped augmentation rather than replacement [121]


Gabriela Ramos offered a concrete case study from the UNESCO AI Ethics recommendation. She described how the negotiation involved “193 countries… during COVID” and generated “55,000 comments” that were integrated with AI [139-144]. She highlighted the lack of a “repository of what is the traditional position of certain countries” and argued that better AI-driven position-tracking would have helped map “where countries were positioning themselves” and understand individual motivations [145-147][391-403]. She also warned that AI tools must avoid “mis-representation… over-representation of certain cultures” and that cultural nuance is expressed through language, citing the Ubuntu philosophy as an example of a worldview that could be lost if models are trained only on individualistic data [401-403].
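The position-tracking repository Ramos wished for can be illustrated with a minimal sketch: record each country’s stance per issue over time, then query for alignments (such as the Russia–UK convergence she describes). The class, its methods, and the data model are all hypothetical illustrations.

```python
# Hypothetical sketch of a country position-tracking repository.
from collections import defaultdict

class PositionTracker:
    def __init__(self):
        # issue -> country -> list of (round_no, stance) entries
        self._positions = defaultdict(dict)

    def record(self, issue: str, country: str, round_no: int, stance: str) -> None:
        """Append a stance observation for a country on an issue."""
        self._positions[issue].setdefault(country, []).append((round_no, stance))

    def current(self, issue: str, country: str):
        """Latest recorded stance, or None if the country has none."""
        history = self._positions[issue].get(country, [])
        return history[-1][1] if history else None

    def aligned(self, issue: str, stance: str) -> list:
        """Countries whose latest stance on an issue matches `stance`."""
        return sorted(c for c in self._positions[issue]
                      if self.current(issue, c) == stance)

tracker = PositionTracker()
tracker.record("unesco_role", "RU", 1, "oppose")
tracker.record("unesco_role", "UK", 1, "support")
tracker.record("unesco_role", "UK", 2, "oppose")   # position shifted mid-negotiation
print(tracker.aligned("unesco_role", "oppose"))     # → ['RU', 'UK']
```

Even this toy version surfaces the “usual suspects aligning with supportive countries” pattern Ramos had to discover by hand, which is the kind of legwork she suggests AI could augment.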


Nandita Balakrishnan contextualised the broader public-sector landscape, noting that “the public sector has been more in the passenger seat, if not the backseat” and that analysts still perform assessments “very, very manually” [170-173][178-180]. She argued that AI has “fundamentally changed the threat landscape” and must be embedded across agencies – from intelligence to the State Department – through AI-literacy programmes and workflow integration [194-200]. Her team is already piloting AI for “predicting geopolitical events” to demonstrate practical value and to persuade policymakers of AI’s relevance [201-203].


During the Q&A, Sam Dawes asked about both “data-poisoning and prompt-injection” risks associated with diplomatic AI systems [380-384]. Devika Rao then requested AI-supported cultural-education frameworks; Robyn responded by suggesting she contact the team behind the Swiss-led multilingual LLM, which was trained on over 100 languages and designed to respect linguistic diversity in diplomatic contexts [398-404][410-416].


The panel expressed broad consensus that AI should augment, not replace, negotiators; that tools must be transparent, modular and keep humans “above the algorithm”; and that cultural and linguistic inclusion is essential. Points of contention emerged: Charlie argued that LLMs’ opacity makes them unsuitable as stand-alone advisors, whereas Robyn stressed that even developers lack full legibility over model internals, creating a “ceiling” on user expectations [82-88][323-328]. On evaluation, Robyn highlighted the current “pilotitis” gap, while Michael clarified that the MOVE 37 project will develop tools and “evaluation methodologies, etc.” rather than having a finished plan already in place [241-242][127-128][31-33]. Regarding multilingual adequacy, Gabriela insisted that merely adding languages is insufficient without capturing deeper cultural philosophies, whereas Robyn pointed to the Swiss multilingual LLM as a concrete solution, indicating a divergence over whether existing initiatives are enough [391-403][424-425]. Finally, the audience raised concerns about asymmetric AI access reshaping power balances; Michael replied that the project will consider “competitive leverage” and aim for tools that are “dispersed actively and offensively not defensively”, though concrete safeguards remain open [208-210][430-432][431-432].


Key take-aways


– Diplomatic negotiations are highly complex, involving numerous actors, massive documentation and dynamic strategic considerations [45-78].


– AI can augment the full negotiation workflow (research, analysis, strategising, execution) provided it remains a supportive tool [102-121].


– Large language models alone are insufficient due to verifiability and accountability concerns [80-92].


– Human-in-the-loop, transparency and modularity are non-negotiable design principles [117-121].


– Building AI literacy across the public sector is essential to avoid a skills gap [170-200].


– Cultural and linguistic diversity must be embedded in training data to prevent bias [391-403][424-425].


– Risks such as data poisoning, prompt injection and over-reliance (“sleeping at the wheel”) require explicit safeguards [380-384].


– The MOVE 37 initiative will develop modular tools, position-tracking repositories, predictive geopolitics models, evaluation frameworks and training sandboxes while continuously engaging diplomats worldwide [102-110][117-121][194-202][223-230][391-403][424-425][430-438].


The meeting concluded with Michael inviting interested parties to join the effort, noting that “we are at the beginning of a long process” and that the Belfer Centre’s structure allows projects to evolve from “beginnings, middles and ends” into something “really important” [430-438]. He thanked the panel, the audience and the sponsors, underscoring the need for ongoing partnership, rigorous evaluation and inclusive design to ensure that AI augments diplomatic practice without compromising human agency or equity [439-442].


Session transcript
Complete transcript of the session
J. Michael McQuade

I’ve been a major figure in international policy for the United States and in education at the Belfer Center, where our objective is to teach, train, and do research on subjects related to the applications of science and technology for international affairs. We have scholars, practitioners, students, all working to address the gaps and the opportunities for technology, science, and geopolitics. I’m very delighted to have everybody here today. The Emerging Tech Program, which I have the honor of running, was launched about a year ago, specifically to look at where emerging technologies are creating new policy frontiers, new opportunities to use technology to engage in policy, and the implications that technologies are creating for governance, geopolitics, global stability, and global conflict.

And the MOVE 37 initiative that we’re here to talk to you about today is a part of that program. As you can imagine, in a program that’s relating technologies to modern issues around geopolitics, artificial intelligence is one of the major aspects of our work. We have a terrific panel here today. It’s my pleasure to introduce them by name, and we’ll talk a little bit more about each one in just a moment. The missing chair, which we expect shortly, is Gabriela Ramos, who’s the former Assistant Director General for Social and Human Sciences at UNESCO. Nandita Balakrishnan is the Director of Intelligence at the Special Competitive Studies Project in Washington. And Robyn Scott is the CEO and co-founder of Apolitical.

And then at the far end, two of my colleagues, researchers on our program at the Belfer Center. Charlie Posniak is a full-time fellow with the program, research fellow with the program. And Slavina Ancheva is a current student within our program, an MPP student at the Belfer Center. I also want to acknowledge a colleague of ours who is not here, was not able to get to India in time for the conference, Carme Artigas, who is the former co-chair of the UN AI Advisory Panel, who has been an integral part of starting this work and the ongoing progress we are making. Carme is also the Spain-India Ambassador for AI, High Commissioner in Spain-India for AI, and is here with us in spirit and maybe even on the live stream that we’re doing here today.

So a big shout-out to Carme for all of her help. So why are we here? We are embarking on a major new project, specifically looking at the use of artificial intelligence in diplomacy and negotiation. We are here at a conference about the use of artificial intelligence and the implications of artificial intelligence in so many aspects of society. And our work is looking at how one will engage non-human intelligences in the process of diplomacy and negotiation. So we’re here because this is a broad-based project. It is not something that is solely the purview of a small team in Cambridge where we are located, but something for which we are looking for collaborators, partners, and input from the community here and the community around the world as we build what will surely be a place where AI is used, and surely will be a place where we need to be cognizant and careful about how AI will be used in that exercise.

So the work we do plays an increasingly big role in shaping the relationships between states and within states. We want to have this conversation about the role for AI specifically because of the global nature and the integrated way that AI will play, and more specifically, how will we use artificial intelligence tools to augment humans in what is at its core a fundamentally human process of negotiation and diplomacy. Diplomatic negotiations are very high stakes. They are very different, as you will hear from our team, very different than classic one-on-one win-lose negotiations or win-win negotiations. They are very much more complex than that, and that means they require both a unique human touch and a unique application of how artificial intelligence might be used in that process.

But it’s also an area where an enormous amount of potential exists to tackle resource constraints, to find better outcomes, and to use artificial intelligence to enable a more stable and prosperous world, and one for which the follow-up to negotiations can be a subject for the tools and applications of modern technology. How we go about that is crucial, and how we talk about it from the beginning is crucial. There are already a number of tools emerging in this space, but it’s our belief that a more rigorous approach is needed, and you’ll hear some of that in just a moment. So what are we going to do? Our team is going to do a brief overview of the way we are thinking about this problem.

And after that, we’re going to have this amazing panel that we will engage in a conversation about views from others who are involved in negotiation and or diplomacy writ large and their views on how technology can be used, be used well or be used not so well, and what the implications of that will be. So with that, let me turn it over to Charlie and Slavina to talk about the project itself. I think Slavina is up first.

Slavina Ancheva

Thank you, Michael. And thank you all for being here this morning. A big welcome. Over the next 10 minutes or so, Charlie and myself would like to present you with a little bit of a framing for the expert discussion that we’ll be getting into right after. We’ll broadly focus on three areas: how negotiation processes currently look and the complexity that comes with them; the potential for AI to augment many of these processes and challenges, thinking beyond just LLMs; and the need for responsible deployment of these tools. Before we do that, I’d like you to close your eyes and imagine. You walk into a negotiation, you look down at the agenda, and there’s 10 items on it.

But as any good negotiator, you know that it’s not just about those 10 items. It’s a lot of other factors that are happening both inside and outside that room that are affecting how that negotiation process is happening. So for one example, you’re sitting across seven counterparts from seven political groups, seven different countries, and behind you, you have your own team, but also 27 other countries that you’re representing, that you’ve promised a certain outcome or a certain deal. Of course, this is not my story. It’s the story of Carme Artigas that Michael mentioned, who was one of the chief negotiators of the EU AI Act, and later the UN AI Advisory Body and many other negotiations. But it’s not just her story.

It’s the story of many of you. It’s the negotiations that you’ve engaged in at the UN, at COP, bilaterally, the interagency negotiations within your own organizations. So you know very well that negotiations are complex and they evolve over time. So what might look like just two states negotiating with each other bilaterally is actually a whole set of issues that are on the table. It could be natural resources, it could be AI, it could be climate, and a whole lot of external and internal stakeholders that are also trying to influence that process. We start to dive into some of this complexity. And more than that, there’s a lot of teams that are sitting behind these principal negotiators, the different departments and agencies that are supporting them with evidence, with documents.

And we’d really like to stress that this is a fundamentally interpersonal process. We’re not looking to replace diplomats or negotiators here, but just to give them the tools to manage these complexities much better. And finally, rarely in this world do we have just two states negotiating nowadays. There’s often a third state at the table. In the case of the EU, maybe 27 member states and, of course, hundreds of others that could be out there. So with that being said, what are some of the impacts of this complexity? Well, for one, there’s a whole lot of information that needs to be managed. A simple negotiation can generate thousands of documents, transcripts, drafts. On top of that, there’s a certain amount of finite resources that any team has as they grapple with many other challenges throughout the day.

There’s a lot of strategic elements. Sometimes in groups, you might have a group think or herding that leads you in one direction as opposed to exploring your full set of options. And finally, there’s the time pressure. So most negotiations do have some sort of time element and handover element to future teams. So with that being said, how can AI help? And I’d like to turn over to Charlie.

Charlie Posniak

Thanks, Slavina. So AI systems can now beat some of the best human players at Go, at chess, at video games, at board games. Language models, as we’ve heard, have become increasingly competent at delivering a range of sophisticated legal, academic, technical, software contributions; the pace of change has been staggering. And so what our interdisciplinary team has been looking at is we’re trying to envision a better future for diplomacy where computational methods can transform the practice of diplomatic negotiations and statecraft that Slavina just outlined. So supporting better communications, better resolutions, and processes between states can augment their functions. So we’re trying to chart existing technical tools, develop new ones, and provide a range of policy guidelines to ensure that this happens responsibly, safely, effectively.

So the classic question that we get in response is, why can’t you just ask an LLM? Lots of people are interested in trying to see if language models can simulate diplomacy or if chatbots can guide people through a negotiation all in one step. But ultimately, language models are remarkable, and they need to be carefully scoped, for three key reasons. Firstly, their fluency isn’t necessarily verifiable in international and world politics. Secondly, the opacity, where you can’t tell what’s going on inside a model, is not always viable, because high-stakes negotiations require accountability, both democratically and internally, to understand why recommendations shape treaties in certain ways. And additionally, there’s a toolkit that’s 80 years old here.

We have game theory, decision analysis, machine learning, a great range of theoretical developments that exist precisely to model strategic interactions under uncertainty. And so we see LLMs as playing this role at the heart of a really broad set of learning paradigms that ties together supervised, unsupervised, and self-supervised learning. But LLMs provide a really strong way to interact with all of these different learning paradigms and technical architectures that the best advances in AI have been built from. So whether that’s the systems that play chess or Go or board games, these are all pulling together lots of different methods. And if we just rely on chatbots at the heart of things, we miss out on all of the technical developments that the last 80 years have experienced.

But there are three key challenges with trying to expand these techniques into the world of diplomacy and world policy. Firstly, representation. As Slavina was touching on, the game that’s being played here isn’t a board game. These interactions are fundamentally changeable over time. The institutions that constrain the actions of states can be made and unmade over the course of a negotiation.

Secondly, inference. These are environments where there’s real strategic misrepresentation, where people are lying or deceiving or trying to shape outcomes for their own advantages, in ways that the current methods aren’t quite well suited to handle. And finally, as we’re touching on, there’s this sense of specifying success. How can you bring together all of the different counterparts and come up with a relatively set, coherent set of preferences and priorities over the course of a really massive negotiation? So these are three challenges that we’re trying to embark on. And one of the ways that we’re approaching this is by breaking down the tasks of diplomacy and the tasks of negotiation for AI applications.

So just broadly, one of the ways we’ve looked at this is saying that there are some foundational tasks of research, analysis, strategizing, and execution, where research builds the evidence base and analysis processes the information that you’ve managed to gather. Strategizing relies on using the analysis and the research to come up with a map from your preferences to your outcomes. And then finally, in the room, executing a negotiation, you’ve got to be able to dynamically adapt and adjust over time. And this isn’t a linear process, but a re-entrant, cyclical sense of you have all of these things as they change, feeding up through this knowledge base. And so you need this really strong computational infrastructure to be able to even begin to apply some of the really exciting and fascinating AI and ML methods we’re touching on.

So with this, we see a future where research can be done with autonomous research agents, and you can have source validation and immediately generated counterpart biographies, analysis of gaps and preferences and evidence bases, strategy sandboxes, red team training, and trying to simulate how the different parties and the public can interact with each other. And so you need this really strong data set to be able to do that.

And then in real time, having transcription and translation services that AI and ML methods are doing a really phenomenal job at. All of these things we think will play a role in this multi-model, multi-method world of computational support for diplomacy negotiations. And so this is just a sense of how we’ve tried to break down this problem and get a grasp on the existing and future technical developments. Finally, we want to end on these three commitments that are central to a lot of the stuff that we’re talking about. One, human authority has to remain central. We can’t have any abdication of responsibility over decisions of war and peace. We have to make sure that the tools themselves are modular and transparent, so that you can see what’s happening at each stage of the process and which parts of which computational systems are supporting analysis.

And then finally, making sure that augmentation is appropriately scoped for the team, the institution, and the setting that it’s in. So with that, I’d like to hand over to Michael, the director of our program, and the panel, for what I hope will be a wonderful discussion.

J. Michael McQuade

Great. Thanks, Charlie. Thanks, Slavina. So just as everybody’s sort of getting settled in: we have a plan for a project. We have a vision of how one has a set of signposts and goalposts in what is essentially the ability to augment, with intelligence, human intelligence and participation. So there are lots of technical elements of that. We’ll be developing tools. We’re looking at evaluation methodologies, et cetera, et cetera, et cetera. It’s the whole technical side. But one of the benefits of the approach we’re taking is that we have access to a large body of people for whom the day-to-day practice is negotiation and diplomacy, not necessarily constrained by the definition of diplomacy meaning state-to-state to get to an answer, but organization-to-organization, people-to-people, negotiation-to-negotiation.

And I am delighted then to have three people here who can talk a little bit about their views on how artificial intelligence will be used in the process of their work. That allows us to then learn from that experience and how we map that into the MOVE 37 project. So Gabriela, let me start with you. Welcome. Thank you for negotiating all the traffic to get here. So you’ve been at the center of international policy design and negotiations on issues such as climate change, international taxation, gender equality, artificial intelligence, a whole list of things in a brilliant career. You’ve done this through key roles at UNESCO, but also at the G20 and G7 and at the OECD.

We’re delighted to have you here. And let me ask you just to sort of start the discussion, if you would, to just talk a little bit about what it’s like to sit in the driver’s seat as a mediator trying to bridge sides, and how you would think about AI capability augmenting you in that process.

Gabriela Ramos

Well, thank you. Thank you so much for inviting me. Is it working? For inviting me to this early morning. And I find this topic fascinating, because when you are a diplomat, and when you have negotiated many standards or agreements, you don’t think about this taxonomy. You never think about the taxonomy. You just think that you need to get it done someday, and that you need to find consensus, and that you need to find where the problems will be. And therefore it’s very interesting that you asked me something to structure better how we do things. And I’m going to refer to the negotiation of the recommendation on the ethics of artificial intelligence, because that was a very difficult one: 193 countries negotiating during COVID. And actually it was very helpful to have a Zoom where I could see where all the countries were positioning themselves, which actually helped a lot. But the interesting thing is that it was about artificial intelligence we were negotiating, and we had to map out where countries were. And it was very interesting to see that some of the usual suspects that are always blocking the effectiveness of international instruments were aligning with countries that are very supportive of those, but that didn’t want to see UNESCO playing a role in this field. So I have Russia and I have the UK in the same position. That helped me, because I called the UK and said, are you happy to be in the same position? And then they just hold one second. But the interesting thing is, it’s a very heavy document, very, very, because there are so many cultures, we had to almost define the step by step. And the interesting thing is that, when thinking how AI can help us organize better, at the moment it did not provide so many inputs; it was 2021. But UNESCO has this idea of being super inclusive. We developed the recommendation with all the regions in the world represented, and all the disciplines. But then we put it out to the world, and we received 55,000 comments. Therefore we used AI to integrate them.

That was, no? But then, when you think about how you map the positioning of countries, I think that would have been super useful to have more AI. I used to have full teams providing me with briefings on the people we were going to talk to, because you need to be conducting a lot of this: one thing is the negotiation in the room, another thing is all the legwork that you need to be doing, talking to the different actors, knowing where they stand. And that would have been amazing, just to have a repository of what is the traditional position of certain countries or certain negotiations, which had to do with the substance, but probably has to do also with the positioning of that country in the international context, and how much they abide by the rules, and how much they support these things.

And then what I find fascinating, but this is always, as my colleagues here said, how you keep the woman, the woman in the loop, I love it. Yes, woman in the loop, not human in the loop, human in the loop. It was a lapsus, it's not lost on me or my panelists here. It was a lapsus, but I like my lapsus. The whole point is, when you are in front of a person and you're trying to convince that person that he's alone, that nobody's supporting his position, and that therefore he should not continue blocking the negotiation, how would it be if you could have more information about that person? What moves them? How can you offer something that will be important for them? Because this is the kind of thing that we do negotiating. What would you want to have out of it? I know you have your bosses on your shoulder and you need to bring them something to the table, but tell them you're alone, that you're blocking it. And imagine you can have the information about that person. But that's also risky, because it deals with privacy and all of those things. But I feel it would be fantastic, because this is strategic thinking, and using the right words to get the countries to agree will bring you to some places. And that, I think, is a very important thing: that's a capacity that can be augmented by AI.

Thank you very much.

J. Michael McQuade

Yeah. Is it on? Yeah. Thank you very much. Actually, it's a terrific transition, because Charlie was talking about the complexity of these negotiations, about how they're not dynamic. I can think of nothing more dynamic than a UNESCO negotiation. Just trying to understand where people's positions are is by itself a complexity: trying to integrate 20 or 30 of those positions, or 190 of those positions, and then trying to find what are the right levers that I might be able to pull. We do this now all the time with people, with you. And the question is, how can modern tools help in that process without removing or absolving responsibility for people? So thank you. So, Nandita, you have had an amazing career in academia, the public sector, and the private sector.

You've been at Stanford. You've worked in intelligence and advisory. And you've seen all the different sides of these negotiations, both from the government side and the private sector side, inside and outside. You are currently the Director of Intelligence at the Special Competitive Studies Project. For those of you who don't know, SCSP is a major effort funded by Eric Schmidt, sponsored by Eric Schmidt, after the conclusion of the National Security Commission on Artificial Intelligence in the U.S., looking at the way that technology will be used in competition for economics, national security, et cetera. So it's a big, broad role. Every day in your life you are negotiating. So can I just ask you to talk a little bit about your view, both from an SCSP point of view but also from your career, about how you would see this evolving?

Nandita Balakrishnan

Absolutely. And thank you so much for having me. And good morning, everyone. So my career, as you mentioned, has sort of spanned three distinct sectors, and they kind of came at different times: academia, public sector, and private sector. Now, there was a time, I would say, when the particular ways these three groups leveraged technology obviously had their variations, but their access to it and adoption of it were much more similar. This is just fundamentally not true of AI. The public sector has been more in the passenger seat, if not the back seat, especially over the last decade. And so what was really interesting to me is, I started in academia, then went to the public sector, then came out to the private sector, and I saw that dip in my access to AI.

Now, I have been in intelligence, and one important thing about intelligence, and maybe a misconception of it, is that it is primarily used for military purposes, feeding information for military applications. It's not true. We are just as vital to diplomatic efforts, because for every opportunity you're looking for that something bad could happen, you're just as much looking at what the opportunities are for something positive to happen. How can you open the negotiating space? So we're looking at everything from both sides. So I wanted to give that perspective. As an analyst, I can say personally, it was very valuable to have the rigorous training I had, to do things very, very manually. So, learning how to write an assessment without access to AI.

But now that I'm on the back end of it, I can tell you, every day I ask myself, if I had access to these tools as an analyst, how could I have worked much faster and much smarter? Because at the end of the day, and this is something that Gabriela was mentioning, there's a lot of data out there, but a human analyst is never going to be able to manually process most of that by themselves. The story I always like to tell is that the very first time I wrote this intelligence piece, I was so proud of it. I thought the argumentation was great. The data I had used was great. I showed it to a mentor, and they said, this is awesome, but you didn't consider this one piece of data from 10 years ago that completely negates your argument.

And here's the thing. It's not that I didn't do good work. It's just that there was no way I was going to know that that piece of information existed. Now imagine a tool that can help you not only identify that that data exists, but learn how to synthesize it. Obviously, as Charlie and Slavina mentioned, human in the loop is always going to be important, because you want these assessments to have a human at the end of it. But there is a way to move better and smarter. And this is something that SCSP is really advocating for in sort of three distinct ways. So first, at a very meta level, we make the argument that AI has fundamentally changed the threat landscape and the scope for global competition.

It is now kind of the foundational way we need to think about geopolitics, especially as this technology is rapidly evolving. So you really cannot divorce AI and AI adoption from trying to understand geopolitics and foreign policy. Number two, in order to have the ability to make assessments about this emerging technology, to understand geopolitics, you have to have a public sector that is actually leveraging these tools to the best of its ability. Now, there are a lot of ways that AI is being adopted in the public sector. You know, you're obviously thinking about, again, the military application of drones, but you need to have your day-to-day workflows integrating this technology.

And this is something that we are really focusing on, especially how to build up AI literacy within the public sector, not just at the military level, but within the intelligence community, within the State Department, and even within, like, Commerce, OPM, all of them. Any federal sector employee at some point needs to be moving smarter and faster with AI. And third, we're looking at specific use cases. So one of the projects that we were working on last year is looking at how AI can be used for predicting geopolitical events, both for military applications but also for State Department applications. And the reason we do that is because in order to convince the public sector that they should be using AI, you almost need to show them what it could look like 10 years from now as we're moving to that future.

So by kind of demystifying its use and showing them targeted ways that you can use it, it actually solves your meta-problem of understanding why AI is so important to geopolitics.

J. Michael McQuade

So I’m hearing a couple of things. I’m hearing this general statement that much of the world is going to be about AI and much of the world is going to be AI creating that world. And that’s the metaphor that comes directly to the project we’re looking at, which is we’re negotiating with AI tools. We have to have a baseline of capability. And yet the landscape in which negotiation and diplomacy are happening is being fundamentally changed by AI itself. And so that whole issue around preparedness and around setting the ground rules. I also heard not just the thought process around artificial intelligence as a trusted agent to accumulate information. Both of you mentioned that. But also as an agent to help understand new pathways for success, new pathways for leverage, whether those are national security or whether those are economic vitality.

The scope of the negotiations doesn't really change that. So, in the area of sort of preparation, let me come to you, Robyn. Robyn is the co-founder and CEO of Apolitical. Apolitical is a global platform for policymakers that specializes in government innovation. She'll talk about that in just a second. In particular, you have courses helping governments prepare their workforces for the modern world in which they live. Your AI courses have reached hundreds of thousands of people around the world. And much of what you are trying to do is to prepare the world for the kind of things that we are talking about in Move 37, obviously much broader than just that topic. So let me ask you to talk a little bit about that, if you wouldn't mind.

What sort of lessons have you learned from the field, and how do you think about policymakers' willingness, or how do you change policymakers' willingness, to embark on journeys with new tools and new capabilities?

Robyn Scott

Stanford HAI is one of our collaborators. So we are more context experts than content experts, and we bring the content experts into the middle. So where are we at? Let me give you some data, and this is from a 5,000-person survey that we've recently run. Overall, public servants are incredibly optimistic about AI. North of 90% think there is huge possibility in the public sector. And there are lots of paradoxes here. They're also wary of it, right? There is a huge value creation opportunity. One figure from BCG estimates that there's 1.75 trillion of public sector value to be unlocked if we harness AI in the right way, because AI loves bureaucracy, all these repeatable processes. And about a third of most public officials' daily work is research and writing related. AI is great at that.

So the prize is very, very big, and that's just the painkiller prize. When you get to the vitamin prize, when you get to what AI could do in terms of predictive policymaking and responsive policy and adaptive policy, et cetera, then you get into a space that's only really bounded by the imagination. So there's lots of AI talk. There's less AI action. Increasingly, we're in a pilotitis zone where almost everyone's got pilots. 70% of leaders say they've either got AI pilots or plan to launch them this year, but only 45% of them say they have any plan to evaluate their pilots.

So that's a pretty big gap to close, and we see gaps like this all the time. One of the biggest gaps is leaders not using the technology themselves, which is a real problem, because you can't understand this technology in the abstract. You cannot look over your grandson's shoulder and see them using it. You've got to use it. You've got to feel the speed of change. Of the public servants who are implementing AI in the public sector globally, and these are people who self-identify as having AI in their jobs, only 26% of them say they understand their own country's ethical frameworks. So approximately three-quarters of all the people rolling out this technology are freestyling. That's terrifying.

So that's a skills and knowledge gap not even closed within an institution. It's not even getting to how we actually understand the basics of this technology. Just to close, and there's a whole lot more fascinating data, but one of the things that is increasingly worrying me, talking to leaders around the world working on this, is that we are now getting quite drunk on the idea of AI agency, but we're not talking about human agency in the process and maintaining it. So I think we risk getting into a zero-sum dynamic where, and I think this is relevant to diplomacy, the agency drains away to AI, and that all comes at a cost to humans.

So we need to be building up humans at the same time. And the framing and heuristic I found most helpful for this overall is this idea, which has recently emerged, of being below or above the algorithm. If you're below the algorithm, you might be an Uber driver being dispatched, or an Amazon packing worker being allocated to put stuff into boxes. If you're above the algorithm, you are using tools to further your goals. When we think about closing that capability gap, and I think in diplomacy, we need to keep moving people up above the algorithm.

J. Michael McQuade

Great, fantastic comments from all of you, thank you. I'm going to ask Charlie and Slavina for just a couple of quick comments on the project. Because then I'm going to come back to the three of you, and I'll prompt you with the question now, which is: what would you want to know, to be comfortable, about the tools and the capabilities that you will be asked to use or be offered to use? So, Slavina, can you just follow on Robyn's comments? One of the benefits we have of doing this program at the Belfer Center and the Kennedy School is a large set of people who have done this for a living. This is what diplomats and negotiators have done.

Can you talk a little bit about how we engage that group and what we’re trying to get from that?

Slavina Ancheva

Definitely. So I think, as Michael put it, obviously at the Belfer Center we have quite a variety of current and former diplomats and practitioners, not just from the U.S. but from all over the world. And a large part of the work that we've been doing is sitting down for one-on-one interviews with all of them and really getting a sense of how they think, not just about the content of the major negotiations that they've been leading, but more about the process. So, very similar to the panel discussion we're having here today: what are some of the uses that you see one day you could be using AI for? So, a lot of what we've heard so far, I mean, the position tracking that I think Gabriela referenced.

We've heard a lot about historical precedent, the generating of options and strategic options, and really uncovering the deepest interests. And I think where this ties really well to what Robyn was saying is that a lot of them are also expressing their hesitancy. So they're being very forthcoming in that, and I think that allows us to take a really sober look at what the risks of integrating these tools are. One of the main questions we ask them is: if you're using these tools, what would you like to know? So, exactly what Michael's saying now. A lot of these interviews have really been integrated across the different work streams of our project, and we really put diplomats and practitioners at the heart of the rest of the work that we're doing.

All right. Thank you.

J. Michael McQuade

And Charlie, you talked a little bit before about, you know, the obvious role that LLMs have in helping people accumulate and synthesize a very large amount of information. But there are many more aspects of a negotiation. Can you talk just a little bit about some of the other ways the tools are going to be used in the project that we're doing?

Charlie Posniak

Yeah, absolutely. Thanks so much, and thanks so much for the comments as well so far. The panelists have touched on a couple of really interesting applications, whether it's, especially as Robyn was talking about, predictive or adaptive policies, and Nandita as well, with predicting geopolitical events. We have a really fascinating array of algorithms that are incredibly competent at these sorts of predictive tasks. And what we now have, with the current computational ability and also the ability of language models, is that we can process vast amounts of unstructured data in ways that make these algorithms even more accessible to a wider range of people. So I think that that's one area that I'm particularly excited about.

There's also a bunch of stuff, as Gabriela was touching on: how do you take these vast unstructured transcripts and come up with natural language processing additions on top, and can we represent positions as they track and change over time? I think that is one of the big parts of the cognitive load that diplomats have spoken about a lot.

J. Michael McQuade

Great, thank you. We're going to take questions from the audience in just a moment, but Nandita, I'm going to start with you. Forgive me for doing this, but I'm going to characterize your career as that of an analyst. Every job that I read and see about has been an analyst of some kind. You're constantly in a place where people are suggesting new tools and new capabilities. So think about AI in the world you've inhabited. What's going to make you most comfortable when somebody shows up and says, here's the thing, it's going to make your life better?

Nandita Balakrishnan

That they can explain what the outputs are. So one of the things, like when we were looking at, for example, could we be using AI for predicting geopolitical events, a lot of people in industry, in academia, and in the public sector who are working on these types of projects all say the same thing, which is that this should be seen as a data point or a shaper of the way you should view the world, not as finished intelligence. Finished intelligence should always ultimately be done by a human who is accountable to their policymakers. Now, if you are working in policy, you understand how this actually works. You'll have your head of state or your head of government come to you and ask you to explain exactly how you got to your assessment.

Right now, we have a human who ultimately has to do that. But what happens when we're starting to rely more and more on AI tools is that you never want to lose that ability to explain the outputs, and particularly to demonstrate that you've looked at all the counterpoints that you possibly could. Now, this is where I think AI is super helpful, because oftentimes, especially in academia, I was trying to figure out: what are all the things I could have done wrong? How could I have measured this differently? You're always prepared to think about all the decisions that you made and how to justify and validate them. But as the scenarios get broader, as they get more complicated, your ability to figure out what the counterarguments are is going to just dwindle over time.

Oftentimes, the argument we make is that humans are biased at the end of the day. We make our decisions based on how we got to where we are today, the experience that you have. So AI can be really, really helpful in helping you sort out the counterarguments, but you still need to understand how those counterarguments work and why ultimately you've come to the assessment that you have. So where I would feel super comfortable is: this is how I relied on AI, this is how it came to the output that it did.

This is fundamentally why I made the assessment that I did.

J. Michael McQuade

Great. Thank you. And Robyn, I think this is the world you live in every day, helping governments and government officials and civil service workers. So project that onto an AI-for-diplomacy landscape. What do you think is going to be important to get people to say, I'm going to trust this, I'm going to work with it, or at least I'm going to try?

Robyn Scott

Well, at the risk of stating the obvious, I think we should just acknowledge that the people developing these models don't even have full legibility over how they're working. So that's where we're starting from. So that's the kind of ceiling on where we can get to. You can break down the thinking process, as it were, but you still have that black box. I don't think it's insurmountable. I think some of the things I'm worried about relate to the more psychological aspect of this, and in particular sleeping at the wheel, this phenomenon where we have this strange relationship with AI where we get false negatives too quickly. So it does a bunch of clever things, except it didn't do this one thing, and therefore we can't use it for anything.

And if you check back in, in like a month's time, often it can do the thing. So you have that, the false negative, and then the phenomenon of sleeping at the wheel, which is where it starts to get very, very good, creeping upwards of like 85%, 90% accuracy, and then you assume it's 100% accurate. And it's really quite hard to edit your assumptions and say, no, it's not. And you've probably all found this, if you're power users of AI, and I'm one of them; some people are not. Sometimes it comes across as so smart and so brilliant and comes up with a whole lot of counter-arguments. I use it for sort of kicking the tires from different perspectives all the time on stuff I'm doing.

It's almost overwhelmingly smart, and you're like, it must have covered everything. That's a default. So I think giving us the human tools and the psychological sort of counter-arguments and weaponry to deal with this is really, really important. I already have a heuristic that whenever I open my phone and I'm dealing with anything with an algorithm, I am in opposition to that algorithm, because its interests don't generally coincide with mine. So I try to get all algorithmic stuff off my phone as a starting point. The dynamic with AI is a bit different, but I still think you have to have that sort of battle mentality with the technology. So that would be my…

There are many other things to consider, but that’s top of mind.

J. Michael McQuade

I think that's terrific. And I think this idea of calibrating on the… Like, what I want from completeness when I ask for analysis may be very different from what I want from "can you just give me some different ideas that I haven't thought about before." And different stages in a negotiation are going to require different levels of calibration. So, Gabriela, what do you think?

Gabriela Ramos

Well, it's very difficult to follow these two girls. But the fact is that when you know a little bit more about how these things work, and I'm not a technologist, but I have been looking at all of what can go wrong: misrepresentation, over-representation of certain cultures, certain languages, assumptions. Therefore, if I am negotiating and you're going to offer me a tool to improve my negotiating skills, I need to be sure that the assumptions that you used to build that tool are not just to beat the person in front of me, or not just to maximize efficiency, or not just to do the kind of things that we are teaching the AI to do. And therefore, it's much more complex.

Because what you want to do is to open a space of human understanding. How do you do that? And therefore, I will be questioning, as Robyn said, always questioning, but it's not what we do. And the other point is that the AI, what is amazing, is that it's just reproducing cognitive abilities that humans have. So when you go into using whatever chatbot you use to get information, you take for granted what comes out. Which you would never do with somebody you hired, in their first week, even if you have done all the checkpoints for that person to have the capacities that you are looking for in the market. So I feel that there is this question of, first, really bringing to the table the AI tools that are going to be reliable and trustworthy, and I know that these words are almost a cliché, but the reality is that sometimes they're not.

And the other point is that you can become very lazy. And how do you avoid just grabbing the thing and saying, that's perfect? How do you keep that space for ourselves to take the decisions and be not only in the driver's seat, but actually to think of AI as a supporting cast? And if we get the Oscar, it's us and not the AI.

J. Michael McQuade

That's fantastic. I have this mental picture in my mind, for those of you who've done negotiations of any kind. The first thing you do is you grab a bunch of your team in a room and you say, let's talk about strategy, what are we going to do? And Bob in the corner says, here's an option, and you realize Bob had a bad night last night, so maybe you discount what Bob says. So what happens when the AI says, here's a thing? I don't just trust it a priori. I have to apply a human judgment to what I'm hearing. So, terrific points. Okay, we have time for a question or two, and I see one.

Just say who you are, where you’re from, and a quick question.

Audience

Thanks so much, Michael.

J. Michael McQuade

We have a microphone. Thank you.

Audience

Thanks so much. I'm Sam Daws. I'm a senior advisor to the Oxford University AI Governance Initiative and director of Multilateral AI. But my background is in diplomacy, working for Kofi Annan when he was Secretary-General, and then for the Foreign Office and Cabinet Office. I wish we had had AI tools back then. So I was really inspired; it's such a timely, rich panel, so thank you all for that. Something that Gabriela said around culture I think is so important, and I'm thinking about the positives and the risks of applying AI in this space. How can we ensure that the diverse cultural inputs of the world's most diverse countries, of different societies, are embedded in the data sets and the models which inform negotiations?

So is that something that UNESCO is working on in the long term, and does it connect to the tools we use? And the second question is around the flip side. If AI is to be a useful neutral mediator in disputes, or an assistant to a human mediator, then what do we do about data poisoning and prompt injection and those kinds of risks? Thank you.

Gabriela Ramos

Very fast, on the question of culture. Culture is expressed by language, and therefore the more we can try to represent those languages in the models we use, the better we will be prepared to understand it. And I'm fascinated by that. I'm not a linguist, but if I could choose another life, I would do that. Because when you hear, for example, there was this Namibian representative during the negotiations of the ethics of AI, and she was saying, I find your draft very individualistic. It's always about the human. It's always about the outcomes for people, improving their welfare. And at the end, what I'm thinking about is the Ubuntu philosophy, which is: I am because you are, and we are because of nature, and we are interlinked.

And therefore, how do you capture this when the models that we are developing are maximizing individual welfare? And so the only answer I have is: try to be representative. And I think this is nothing new. We have seen how much these tools can discriminate if they are just built in one language or with the representation of certain characteristics of people or countries. But really be sure that you are capturing the richness that comes through language, and open up the sources. And that's the other point, the sources. This is one thing that I would always ask: the answer you're giving me is based on what sources? And that might help. But these are checkpoints that we always need to be testing on the ground.

J. Michael McQuade

I think you also, if I can just raise one other thing, I think you raise one other really important point, which is, you know, there's a whole spectrum of things here. There is negotiation where we have a set of interested parties trying to get to a common good understanding, and we also have very adversarial negotiations. So adversarial negotiations open up this whole possibility of data poisoning, of training set differentiation, et cetera. So it's a very complex world, and I really appreciate you bringing it up. We have time for one more question, I think. Let's go right here. Can somebody tell me, are we counting down to zero or are we counting down to five? Are we okay to keep going to zero here?

Okay, good. We’re going to go to zero no matter what.

Audience

Good morning. Namaste. My name is Devika Rao. I meet 300 to 600 people per day, and I work across different languages. So basically I'm an Indian classical dance teacher. So I have data. I have a human connection. And what we want to do is, how can this cultural education be supported by AI? So what is the step I can take further? Presently I'm actually working on a cultural framework, which is India and UK POCC 2025, 2030. I'm also interested in NEP and national health policy, because people are connected to their health and education. And education is the center point. So where can I go, and what kind of co-creation, co-collaboration can happen in this?

J. Michael McQuade

Robyn, is this something you want to jump in on? Maybe give her your email address.

Robyn Scott

I wish I had an immediate response to that. I don't think there is any default place to go, but I do think this is where the conversation is evolving, and there's more and more recognition of the cultural oversight and its importance. So I would just encourage you to please keep making those points. And I will just make one comment on the first question. The Swiss have built a sort of quasi-Swiss-government, quasi-multilateral initiative to build an LLM that is trained from the outset on more than 100 languages, and it is actually run by a friend of mine who's a former Swiss diplomat, so she's coming at it very much with a diplomatic context. I'm very happy to make that connection.

Gabriela Ramos

Education. Super, super complex. Don't look at the technology, because we always focus on the technology. The countries that have introduced so much technology in their educational systems didn't get better student outcomes, because of content. We go to the internet and we go to the systems and we try to bring tools to help kids, and we never check if they are contextually relevant, culturally linked. And therefore, if you don't produce the content, the tools will not make it.

Nandita Balakrishnan

I'll just add one last thing. I think the way to think about AI is also: is it actually solving a problem, or are you just trying to introduce it and create a new problem? I think this is where you have to think about the point of AI augmentation. There are a lot of ways we can think about how AI can augment the problem sets that we have, but sometimes you don't actually have the problem that AI is going to solve, and you don't need to force AI to fix it.

J. Michael McQuade

Thank you very much. Okay, we're going to negotiate. If you have a really quick question, you can ask it. No, behind you. Thank you. It's got to be quick, though.

Audience

My name is Arman. I'm working for JPL South Asia. Just a quick question on how you think this would impact the balance of power, given that every country has different access to the kinds of data sets that they have, and, as we saw, there can be third states also in the play. How would it look if state A knows everything about the rest of the players and the others don't?

J. Michael McQuade

So, we think a lot about this, and I'll answer it if you guys are okay. We think a lot about this in the project: what's the evolution of a set of AI tools? It's like everything else here at this conference, which is: where will tools provide competitive leverage? Where, in the world we live in, are the kinds of tools that should be dispersed actively and offensively, not defensively, in a world where some of the negotiation is about getting everybody to a positive outcome and some of the negotiation is adversarial? So I think it is a huge element of how it will change power structures, not just because it's a thing we think about from a negotiation and diplomacy standpoint, but because of the general AI tools.

Okay, with that we are just about out of time. I want to thank this amazing panel: Gabriela, Nandita and Robyn. I want to thank my colleagues Slavina and Charlie, and I want to thank all of you. We are at the beginning of a long process. When you work at a place like I do, at the Belfer Center, you think about projects that have beginnings, middles and ends, and you think about projects that can grow into something really, really important. So any of you who have an interest in what we’re doing, please let us know if you feel you have questions we ought to be asking, or answers to questions we have asked.

We would love to hear from you as we begin to build what we think is a really important discipline. So thank you and thank you to the sponsors and hosts. I appreciate everybody joining us. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (38)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“Panel of experts included Gabriela Ramos (the chair‑to‑be, UNESCO) and Nandita Balakrishnan (Special Competitive Studies Project).”

The knowledge base lists Gabriela Ramos as former Assistant Director General for UNESCO and Nandita Balakrishnan as a director at the Special Competitive Studies Project, confirming their expertise and affiliation [S2].

Confirmed (medium confidence)

“Charlie Posniak was a Belfer‑Centre researcher on the panel.”

Charlie Posniak is quoted in the knowledge base discussing AI’s role in negotiations, confirming his participation as a researcher in the project [S8] and [S4].

Additional Context (high confidence)

“AI can serve as an augment‑tool to manage massive information flows, generate strategic options and provide real‑time translation while preserving interpersonal diplomacy.”

The knowledge base highlights that large language models help negotiators accumulate and synthesize large amounts of information, offering additional nuance on how AI supports information management and analysis in diplomatic settings [S8] and [S4].

Additional Context (medium confidence)

“Carme Artigas was chief negotiator of the UNESCO AI Ethics recommendation.”

Carme Artigas is referenced in the knowledge base as speaking about the UNESCO AI Ethics recommendation, confirming her involvement with the effort, though the source does not specify her as the chief negotiator [S109].

External Sources (111)
S1
How AI Is Transforming Diplomacy and Conflict Management — I’ve been a major figure in international policy for the United States and in education at the Belfer Center, where our …
S2
How AI Is Transforming Diplomacy and Conflict Management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S3
How AI Is Transforming Diplomacy and Conflict Management — Great. Thanks, Charlie. Thanks, Slavina. So just as everybody’s sort of getting settled in, so we have a plan for a proj…
S4
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-diplomacy-and-conflict-management — And then at the far end, two of my colleagues, researchers on our program at the Belfer Center. Charlie Posniak is a ful…
S5
How AI Is Transforming Diplomacy and Conflict Management — And then at the far end, two of my colleagues, researchers on our program at the Belfer Center. Charlie Posniak is a ful…
S6
How AI Is Transforming Diplomacy and Conflict Management — Great. Thanks, Charlie. Thanks, Slavina. So just as everybody’s sort of getting settled in, so we have a plan for a proj…
S7
How AI Is Transforming Diplomacy and Conflict Management — -Slavina Ancheva- Research fellow and MPP student at the Belfer Center, working on the MOVE 37 initiative
S8
https://app.faicon.ai/ai-impact-summit-2026/how-ai-is-transforming-diplomacy-and-conflict-management — And then at the far end, two of my colleagues, researchers on our program at the Belfer Center. Charlie Posniak is a ful…
S9
How AI Is Transforming Diplomacy and Conflict Management — – Robyn Scott- J. Michael McQuade- Charlie Posniak
S10
How AI Is Transforming Diplomacy and Conflict Management — Speakers:Robyn Scott, J. Michael McQuade, Charlie Posniak
S11
How AI Is Transforming Diplomacy and Conflict Management — I’ve been a major figure in international policy for the United States and in education at the Belfer Center, where our …
S12
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — Gabriela Ramos, Assistant Director General for Social and Human Sciences at UNESCO, has highlighted the unique mandate o…
S13
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S14
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S15
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S16
How AI Is Transforming Diplomacy and Conflict Management — I’ve been a major figure in international policy for the United States and in education at the Belfer Center, where our …
S17
How AI Is Transforming Diplomacy and Conflict Management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S18
Global challenges for the governance of the digital world — The provision of information is another key issue—the speaker cautions against the perils of information overload and sc…
S19
Why will AI enhance, not replace, human diplomacy? — AI tools are already here to assist certain aspects of negotiations, from language translation to data analysis. However…
S20
Oman: Nexus between traditional and tech diplomacy — There is much discussion about AI and digital tools changing diplomacy. Yet, it is clear that technology will not replac…
S21
Negotiations — Artificial Intelligence (AI)has various applications in diplomacy. It can be used for data analysis to predict the outco…
S22
Diplomatic policy analysis — Negotiation support:Evidence-based analysis strengthens a country’s position in bilateral and multilateral discussions, …
S23
Al and Global Challenges: Ethical Development and Responsible Deployment — Alfredo Ronchi:Most interesting presentation from the standpoint of China. Thanks a lot for this date. And now we will t…
S24
Artificial Intelligence & Emerging Tech — On the other hand, challenges in AI are also identified. One such challenge is the manifestation of unintended behaviors…
S25
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Amal El Fallah Seghrouchni: Thank you very much for the question. Yes, Morocco is Arabic-African. We have we are close t…
S26
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Furthermore, transparency issues were identified regarding web content and LLMs. The analysis noted that creative common…
S27
Advancing Scientific AI with Safety Ethics and Responsibility — -Global South Perspectives and Adaptation: A significant focus was placed on how emerging scientific powers can shape AI…
S28
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — It’s designed to equip them with the knowledge to effectively use the tools they already have to turn policy frameworks …
S29
Safe and responsible AI — – Study of the proposal of gradual transformation of education with respect to AI impacts, including quantification of …
S30
Shaping the Future AI Strategies for Jobs and Economic Development — The path forward requires continued focus on infrastructure development, particularly renewable energy integration, tale…
S31
2021: The emergence of digital foreign policy — A multidisciplinary approach reflects the cross-cutting nature of digital issues, particularly among the technical, econom…
S32
A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23 — The United Nations (UN) has made progress in recognising the importance of a multi-stakeholder process. The UN acknowled…
S33
High Level Leaders Session 3 | IGF 2023 — By engaging stakeholders from various technical and non-technical backgrounds, a comprehensive and holistic approach can…
S34
Opening of the session — France highlighted their participation in developing this mechanism, emphasising their inclusive method over the years. …
S35
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S36
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S37
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S38
Negotiations — Artificial Intelligence (AI)has various applications in diplomacy. It can be used for data analysis to predict the outco…
S39
How AI Is Transforming Diplomacy and Conflict Management — The researchers emphasized that this complexity creates several specific challenges: information overload that exceeds h…
S40
Cybermediation: What role for blockchain and artificial intelligence? — After explaining in further detail some aspects of NLP, she suggested that these tools can be used to support the work o…
S41
How AI Is Transforming Diplomacy and Conflict Management — And Charlie, you talked a little bit before about, you know, there’s an obvious role that LLS has. I think that’s a real…
S42
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S43
The mismatch between public fear of AI and its measured impact — Serious engagement with AI begins where exaggerated certainty ends. It requires patience, evidence, and a willingness to…
S44
Agents of Change AI for Government Services & Climate Resilience — Tiedrich advocates for cautious and strategic AI adoption in government, emphasizing the need to select use cases based …
S45
Main Topic 3 –  Identification of AI generated content — In their closing remarks, the speakers reiterated the importance of maintaining trust in traditional news media as a mar…
S46
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou:Okay, thank you. Thank you Michel and this is definitely a tension and maybe a balance at some point between t…
S47
Leveraging the UN system to advance global AI Governance efforts — The current difficulties in achieving consensus in multilateral systems underscore the necessity for inclusive negotiati…
S48
Multistakeholder digital governance beyond 2025 — Linguistic, cultural, and regional diversity must be meaningfully included in global frameworks
S49
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — “We have to make sure that we can protect the agents from that happening.”[68]. “And then the second thing we have to do…
S50
Microsoft expands software security lifecycle for AI-driven platforms — AI is widening the cyber risk landscape and forcing security teams to rethink established safeguards. Microsoft has upda…
S51
Emerging Shadows: Unmasking Cyber Threats of Generative AI — Data poisoning and technology evolution have emerged as significant concerns in the field of cybersecurity. Data poisoni…
S52
Building the Workforce_ AI for Viksit Bharat 2047 — Evidence:Provided specific statistics: ‘72% say they have a pilot or will have one this year, but only 45% of them say t…
S53
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S54
Overview of AI policy in 15 jurisdictions — Summary China remains a global leader in AI, driven by significant state investment, a vast tech ecosystem and abundant …
S55
Building Population-Scale Digital Public Infrastructure for AI — Summary:All speakers agree that moving from fragmented pilot projects to systematic, coordinated approaches is essential…
S56
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Policy frameworks and public vs private sector dynamics
S57
Closing remarks – Charting the path forward — Al Mesmar highlights that as AI systems become more powerful, governing access to computational infrastructure and large…
S58
Workshops: report-backs and closing session — Joseph Nkalwo Ngoula: Thank you. It is always difficult to convey the words of high-level experts without running the risk…
S59
AI That Empowers Safety Growth and Social Inclusion in Action — A recurring theme was the critical importance of moving beyond English-centric AI development toward truly inclusive app…
S60
AI@UN: Navigating the tightrope between innovation and impartiality — 5.Multilingual: Represent diverse linguistic and cultural traditions, focusing on capturing and preserving wisdom from o…
S61
Setting the Rules_ Global AI Standards for Growth and Governance — Evidence:OpenAI conducts MMLU evaluations for various languages and specific tests for Indian dialects, ML Commons’ role…
S62
AI diplomacy — For centuries, power was defined by territory, armies, and economic might. Today, a new element is paramount: data and t…
S63
Artificial intelligence (AI) and cyber diplomacy — A key point raised was the need for clarity in defining and discussing AI governance. This encompasses various elements,…
S64
The role of diplomacy in AI geopolitics | AGDA — He also advised diplomatic services to start AI transformation through small projects such as the automation of administ…
S65
Negotiations — Artificial Intelligence (AI)has various applications in diplomacy. It can be used for data analysis to predict the outco…
S66
AI diplomacy — However, we must remain masters of our tools. The final analysis, the subtle art of negotiation, the building of trust; …
S67
Why will AI enhance, not replace, human diplomacy? — AI tools are already here to assist certain aspects of negotiations, from language translation to data analysis. However…
S68
[Event summary] The impact of AI on diplomacy and international relations — Panel 2: AI as a cognitive tool for diplomatic practice:Andrew Tony Camilleri, Technical Attaché, Permanent Representati…
S69
Al and Global Challenges: Ethical Development and Responsible Deployment — Alfredo Ronchi:Most interesting presentation from the standpoint of China. Thanks a lot for this date. And now we will t…
S70
Artificial Intelligence & Emerging Tech — On the other hand, challenges in AI are also identified. One such challenge is the manifestation of unintended behaviors…
S71
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Amal El Fallah Seghrouchni: Thank you very much for the question. Yes, Morocco is Arabic-African. We have we are close t…
S72
Advancing Scientific AI with Safety Ethics and Responsibility — Large language models are typically trained on Western data, creating gaps when deployed in different cultural contexts….
S73
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In conclusion, the analysis emphasized the significance of developing and using technology responsibly to prevent harm a…
S74
Embedding Human Rights in AI Standards: From Principles to Practice — 5. **Practical Implementation**: Continuing to develop concrete tools and methodologies that can bridge the gap between …
S75
Safe and responsible AI — – Study of the proposal of gradual transformation of education with respect to AI impacts, including quantification of …
S76
AI and international peace and security: Key issues and relevance for Geneva — Capacity-Building Initiatives: Capacity-building initiatives are vital for equipping states with the knowledge and skill…
S77
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The sandboxes enable a learning process where regulators can understand practical implementation challenges whilst devel…
S78
Advancing Scientific AI with Safety Ethics and Responsibility — “So for example, we are going to launch a global south network for trustworthy AI and we are going to launch a global so…
S79
WS #226 Strengthening Multistakeholder Participation — Guilherme Canela De Souza from UNESCO positioned multilingualism within UNESCO’s Internet Universality concept, explaini…
S80
Stronger together: multistakeholder voices in cyberdiplomacy | IGF 2023 WS #107 — Marie:Thank you. I think I will just go a bit further than just the UN and the first committee. But looking at, first, a…
S81
High Level Leaders Session 3 | IGF 2023 — Garza advocates for reinforcing the multi-stakeholder model in internet policy and regulation. However, she notes that t…
S82
WS #157 Driving MS Engagement: Lessons from Lebanon and Canada — David Bedard: Thanks, Shafiq. That’s a great question. So I’ll just first start off by saying that within sort of the…
S83
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S84
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S85
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S86
WAIGF Opening Ceremony & Keynote — The overall tone was formal yet optimistic. Speakers expressed enthusiasm about the potential of digital technologies wh…
S87
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S88
Next-Gen Education: Harnessing Generative AI | IGF 2023 WS #495 — We are seeing a surge in AI-enabled applications across sectors. In education, one particular branch of AI, generative A…
S89
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — Robin Zuercher:Thank you very much, sir. All right, we have exactly two minutes left. So I’d like to just address the sa…
S90
Toward Collective Action_ Roundtable on Safe & Trusted AI — 627 words | 103 words per minute | Duration: 362 seconds. The first share of the research team, I believe, is here with…
S91
WS #232 Innovative Approaches to Teaching AI Fairness & Governance — Ayaz Karimov: if whoever wants to go. Yes, very, very, very good question. Actually, it has a name in the academia as …
S92
Main Topic 3 –  Identification of AI generated content — Aldan Creo:Great. Hello. How are you, everyone? Well, it’s a pleasure to be able to have this session. I hope we’ll make…
S93
Internet Governance Forum 2024 — During theWS #82 A Global South perspective on AI governance, Jenny Domino raised concerns about the reliance on AI-powe…
S94
From summer disillusionment to autumn clarity: Ten lessons for AI — In contrast, the focus on existing harms – education, discrimination, job loss, etc. – frames the problem in terms of ac…
S95
Cognitive Vulnerabilities: Why Humans Fall for Cyber Attacks — Attackers are becoming increasingly sophisticated and can use personal data shared on social media platforms to imperson…
S96
How AI Is Transforming India's Workforce for Global Competitiveness — There are risks of over-automation without adequate human oversight and potential bias issues
S97
HIGH LEVEL LEADERS SESSION I — Maintaining ethical and cultural sensitivity Therefore, it is important to consider ethical and cultural sensitivity wh…
S98
Closing Session  — Wrottesley emphasized that the momentum generated at the summit must continue beyond the event itself, requiring long-te…
S99
Keynote-Nikesh Arora — Overall Tone:The tone begins optimistically, celebrating AI’s rapid progress and potential, then shifts to a more cautio…
S100
Leaders TalkX: Towards a safer connected world: collaborative strategies to strengthen digital trust and cyber resilience — The tone was consistently professional, collaborative, and solution-oriented throughout. Speakers maintained an optimist…
S101
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S102
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The answer will be different in different countries, different maturity levels, but I think it’s just simply too importa…
S103
Commission on Science and Technology for Development — (United Nations – A. Shaping the enabling environment 39. There are policy implications that arise from the char…
S104
Creating Eco-friendly Policy System for Emerging Technology — Furthermore, there is an emphasis on inculcating global consciousness, forging new partnerships, and pushing for innovat…
S105
Opening of the session — Singapore: Thank you Mr. Chair on behalf of my delegation I’d like to express our thanks to you and your team for the p…
S106
Digital policy issues emphasised at the G20 Leaders’ Summit — It is acknowledged that continuous progress is made in areas such as the Internet of Things, big data, cloud computing, …
S107
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Additionally, IFAP focuses on building capacities to address the ethical concerns arising from the use of frontier techn…
S108
DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023 — Gabriela Ramos: Thank you very much, Benga. Words to live by there, and I’m sure we’ll go back to each of those three poi…
S109
A Digital Future for All (afternoon sessions) — Carme Artigas: Absolutely. So let’s talk also about opportunities, let’s have scientific panel inform us, not only o…
S110
From Principles to Practice: Operationalizing Multistakeholder Governance — A significant theme emerged around recognising that multi-stakeholder governance extends far beyond formal negotiations….
S111
HUMANITARIAN NEGOTIATION — A great deal of the conflict in any negotiation is often played out in a struggle over process. It may sound ridiculous …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Slavina Ancheva
8 arguments | 193 words per minute | 880 words | 273 seconds
Argument 1
Managing information overload and stakeholder complexity (Slavina Ancheva)
EXPLANATION
Negotiations involve massive amounts of data, documents, and participants, creating a heavy cognitive load for negotiators. AI can help manage this overload by organizing information, tracking stakeholder positions, and alleviating resource constraints.
EVIDENCE
Slavina described how a single negotiation can generate thousands of documents, transcripts, and drafts, and highlighted limited team resources, strategic groupthink, and strict time pressures that negotiators must handle [66-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The challenge of information overload and the need to balance abundant versus scarce information for stakeholders is highlighted in [S18], while the discussion of massive documentation and many participants in negotiations aligns with the complexity described in [S2].
MAJOR DISCUSSION POINT
Managing information overload and stakeholder complexity
AGREED WITH
Charlie Posniak, Nandita Balakrishnan, Robyn Scott
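The position-tracking side of this argument can be made concrete with a minimal sketch. The schema below (a `Statement` record with party, issue, position, and source document) is an illustrative assumption, not any tool the panel described; it simply shows how indexing statements by party and issue turns thousands of scattered documents into an at-a-glance view of who said what, and where.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Statement:
    """One stakeholder statement extracted from a negotiation document."""
    party: str
    issue: str
    position: str
    source_doc: str

def build_position_index(statements):
    """Group statements by (party, issue) so a negotiator can see every
    position a party has taken on an issue, with its source document."""
    index = defaultdict(list)
    for s in statements:
        index[(s.party, s.issue)].append((s.position, s.source_doc))
    return index

# Toy corpus standing in for the thousands of documents, transcripts,
# and drafts a real negotiation generates.
stmts = [
    Statement("State A", "water rights", "supports joint commission", "draft_03.txt"),
    Statement("State B", "water rights", "opposes joint commission", "transcript_12.txt"),
    Statement("State A", "water rights", "open to phased rollout", "draft_07.txt"),
]
idx = build_position_index(stmts)
print(len(idx[("State A", "water rights")]))  # → 2
```

At real scale the extraction step would itself be AI-assisted, but the index structure is what lets a small team keep the cognitive load manageable.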
Argument 2
AI should augment, not replace, diplomats, preserving the fundamentally interpersonal nature of negotiations.
EXPLANATION
The panel stresses that AI tools are intended to give negotiators better management of complexity while keeping the human touch central to diplomatic processes.
EVIDENCE
Slavina explicitly said, “We’re not looking to replace diplomats or negotiators here, but just to give them the tools to manage these complexities much better” [60-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both [S19] and [S20] argue that AI tools are meant to enhance diplomatic work without substituting the human element, emphasizing augmentation over replacement.
MAJOR DISCUSSION POINT
AI as augmentation rather than replacement for diplomats
AGREED WITH
Charlie Posniak, J. Michael McQuade, Gabriela Ramos, Robyn Scott
Argument 3
Modern negotiations increasingly involve multiple parties beyond bilateral settings, requiring AI systems that can handle multi‑state dynamics.
EXPLANATION
Negotiations now often include dozens of stakeholders, such as the EU’s 27 member states plus additional actors, which adds layers of complexity that AI must be able to model and track.
EVIDENCE
She noted that “rarely in this world do we have just two states negotiating nowadays” and cited the EU example with 27 member states and “hundreds of others” [63-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The multilateral nature of modern negotiations and the involvement of dozens of actors are described in [S2] and reinforced by the MOVE 37 overview in [S1].
MAJOR DISCUSSION POINT
Need for AI tools that support multi‑party negotiation contexts
Argument 4
AI can generate strategic options and uncover deep interests to support negotiators.
EXPLANATION
Beyond managing information overload, AI can assist diplomats by proposing alternative negotiation pathways and revealing underlying stakeholder motivations, thereby enriching the decision‑making toolbox.
EVIDENCE
Slavina remarks that interviewees talk about “position tracking… generating options and strategic options, and really uncovering the deepest interests” [271-273].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Scenario-modeling and the generation of alternative negotiation pathways are discussed as AI applications in [S21], which aligns with the strategic-option claim.
MAJOR DISCUSSION POINT
AI‑enabled strategic option generation for negotiations
Argument 5
Groupthink and herding can bias negotiation outcomes; AI tools can provide alternative perspectives and counter‑arguments to mitigate these dynamics.
EXPLANATION
When negotiators fall into a consensus or herd mentality, they may overlook alternative strategies or interests. AI can surface divergent options and highlight blind spots, helping teams avoid collective bias and explore a fuller set of possibilities.
EVIDENCE
Slavina observed that “sometimes in groups, you might have a group think or herding that leads you in one direction as opposed to exploring your full set of options” [70].
MAJOR DISCUSSION POINT
Mitigating groupthink in negotiations through AI‑generated alternative viewpoints
Argument 6
Negotiations evolve over time and require adaptable AI tools.
EXPLANATION
Negotiation processes are not static; they change as new information and dynamics emerge, so AI systems must be flexible enough to adjust to shifting contexts and stakeholder positions.
EVIDENCE
Slavina notes that negotiations are complex and evolve over time, highlighting that they can involve changing issues such as natural resources, AI, climate, and various internal and external stakeholders influencing the process [55-57].
MAJOR DISCUSSION POINT
Dynamic nature of negotiations demands adaptable AI
Argument 7
AI can support continuity and handover between negotiation teams.
EXPLANATION
Because negotiations often span multiple sessions and involve handover to future teams, AI tools that capture and preserve knowledge can ensure seamless transitions and maintain strategic coherence.
EVIDENCE
She points out that most negotiations have a time element and a handover element to future teams, indicating the need for tools that manage this continuity [72-73].
MAJOR DISCUSSION POINT
Ensuring knowledge continuity across negotiation phases
Argument 8
AI can support the supporting teams behind principal negotiators by automating evidence gathering and document preparation.
EXPLANATION
Negotiations involve extensive back‑office work such as compiling briefs and evidence; AI can streamline these processes, reducing workload for the teams that assist diplomats.
EVIDENCE
Slavina described that many teams sit behind principal negotiators providing evidence and documents, emphasizing the need for tools to aid this support function [59-60].
MAJOR DISCUSSION POINT
AI assistance for supporting negotiation teams
Charlie Posniak
7 arguments | 211 words per minute | 1310 words | 371 seconds
Argument 1
Task decomposition: research, analysis, strategy, execution supported by AI (Charlie Posniak)
EXPLANATION
The negotiation process can be broken into distinct stages—research, analysis, strategizing, and execution—each of which can be augmented by AI tools. This decomposition allows computational infrastructure to support diplomats throughout the entire workflow.
EVIDENCE
Charlie outlined a four-step framework where AI assists in building an evidence base (research), processing information (analysis), mapping preferences to outcomes (strategy), and dynamically adapting during the negotiation (execution) [102-107].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four-step framework that breaks diplomatic work into research, analysis, strategy and execution is outlined in [S2].
MAJOR DISCUSSION POINT
Task decomposition: research, analysis, strategy, execution supported by AI
AGREED WITH
Slavina Ancheva, Nandita Balakrishnan, Robyn Scott
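The four-step decomposition can be sketched as a simple pipeline. Everything below is illustrative: the function names and the toy "mention count" heuristic are assumptions standing in for much richer AI components at each stage, not anything the panel specified.

```python
def research(sources):
    """Research: build an evidence base by collecting raw statements."""
    return [line for doc in sources for line in doc]

def analysis(evidence):
    """Analysis: process the information, here by counting how often
    each party appears in the evidence base."""
    counts = {}
    for item in evidence:
        party = item.split(":")[0]
        counts[party] = counts.get(party, 0) + 1
    return counts

def strategy(counts):
    """Strategy: map preferences to outcomes, here by prioritizing
    engagement with the most active party."""
    return max(counts, key=counts.get)

def execution(priority, new_statement):
    """Execution: adapt dynamically as new information arrives."""
    return f"engage {priority}; log {new_statement!r} for the next cycle"

docs = [["State A: supports tariff cut", "State B: requests delay"],
        ["State A: offers phased schedule"]]
evidence = research(docs)
priority = strategy(analysis(evidence))
print(execution(priority, "State B: accepts phased schedule"))
```

The point of the decomposition is architectural: each stage has a defined input and output, so any one of them can be augmented (or audited) independently of the others.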
Argument 2
LLMs lack verifiability and transparency; need integration with game‑theoretic and decision‑analysis methods (Charlie Posniak)
EXPLANATION
Large language models are fluent but their outputs cannot be easily verified, and their internal workings are opaque, which is problematic for high‑stakes diplomatic decisions. Therefore, AI systems should be combined with established game‑theoretic and decision‑analysis tools to ensure accountability and rigor.
EVIDENCE
Charlie noted that LLM fluency is not necessarily verifiable, their opacity hampers accountability, and that existing 80-year-old toolkits such as game theory and decision analysis are needed to model strategic interactions under uncertainty [82-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Limitations of large language models-lack of verifiable fluency and opacity-are highlighted in [S2], which calls for integration with established game-theoretic tools.
MAJOR DISCUSSION POINT
LLMs lack verifiability and transparency; need integration with game‑theoretic and decision‑analysis methods
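The "80-year-old toolkit" contrast can be illustrated with plain expected-utility decision analysis, which is auditable term by term in a way LLM output is not. The options, payoffs, and probability below are invented for illustration only.

```python
def expected_utility(payoffs, probabilities):
    """Expected utility of one negotiating option across scenarios.
    Every term in the sum can be inspected and challenged."""
    assert abs(sum(probabilities) - 1.0) < 1e-9
    return sum(p * u for p, u in zip(probabilities, payoffs))

options = {
    # payoff if the counterpart (cooperates, defects) — invented numbers
    "concede early": [3, 1],
    "hold firm":     [5, -1],
}
p_cooperate = 0.6
scenario_probs = [p_cooperate, 1 - p_cooperate]

best = max(options, key=lambda o: expected_utility(options[o], scenario_probs))
print(best)  # → hold firm
```

One could imagine an LLM proposing the options and estimating `p_cooperate` from the evidence base, while this transparent layer does the actual ranking, so that a human can trace exactly why a recommendation was made.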
Argument 3
High‑quality, curated data sets are essential for AI‑driven transcription, translation, and position‑tracking in diplomacy.
EXPLANATION
Effective AI support for negotiations depends on reliable data sources that can feed autonomous research agents, generate accurate transcripts, and map evolving positions of parties.
EVIDENCE
Charlie described the need for “strong data set” to enable autonomous research agents, source validation, real-time transcription and translation services, and position tracking across negotiations [108-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of strong, curated data sets for autonomous research agents, real-time transcription, translation and position tracking is emphasized in [S2].
MAJOR DISCUSSION POINT
Critical role of robust data infrastructure for diplomatic AI tools
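A minimal curation gate makes the data-quality argument concrete: downstream agents (research, transcription, position tracking) should only see records whose provenance is complete. The field names below are an assumed schema for illustration, not a real standard.

```python
# Hypothetical provenance fields a curated diplomatic record must carry.
REQUIRED = ("text", "source_url", "date", "language")

def validate(record):
    """A record passes only if every provenance field is present and non-empty."""
    return all(record.get(field) for field in REQUIRED)

raw = [
    {"text": "Delegation X tabled a draft.",
     "source_url": "https://example.org/a",
     "date": "2026-02-19", "language": "en"},
    {"text": "Unverified rumour.",  # missing source: rejected by the gate
     "source_url": "", "date": "2026-02-19", "language": "en"},
]
curated = [r for r in raw if validate(r)]
print(len(curated))  # → 1
```

Real source validation would go far beyond presence checks (cross-referencing, deduplication, reliability scoring), but the principle is the same: the evidence base is only as trustworthy as its weakest admitted record.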
Argument 4
AI can provide scenario simulation, red‑team training, and public‑interaction sandboxes for diplomatic negotiations.
EXPLANATION
By creating simulated environments where negotiators can test strategies, conduct adversarial red‑team exercises, and model public reactions, AI helps prepare diplomats for complex, high‑stakes talks.
EVIDENCE
Charlie describes “strategy sandboxes, red team training, and trying to simulate how both the public and the public can interact with each other” as part of the envisioned AI support suite [108-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled scenario simulation and red-team exercises for diplomatic preparation are described in [S21].
MAJOR DISCUSSION POINT
AI‑enabled simulation and red‑team exercises for diplomacy
Argument 5
AI can automatically generate counterpart biographies and highlight evidence gaps for negotiators.
EXPLANATION
By processing large volumes of unstructured data, AI can produce concise profiles of negotiation participants and flag missing information, helping diplomats prepare more effectively.
EVIDENCE
Charlie describes autonomous research agents that can provide source validation, immediately generated counterpart biographies, and analysis of gaps and preferences in the evidence base [108-110].
MAJOR DISCUSSION POINT
Automated generation of counterpart profiles and gap analysis
Argument 6
Real‑time transcription and translation services can reduce language barriers during diplomatic talks.
EXPLANATION
AI‑driven speech‑to‑text and multilingual translation tools enable participants speaking different languages to follow discussions instantly, fostering inclusivity and more efficient negotiations.
EVIDENCE
He mentions that in the negotiation room, AI can provide real-time transcription and translation services, which are already achieving phenomenal performance [113-114].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential of AI to deliver real-time speech-to-text and multilingual translation in negotiation rooms is noted in [S19].
MAJOR DISCUSSION POINT
AI‑enabled multilingual support for live negotiations
Argument 7
AI tools should be designed as modular, transparent components to keep human authority central in diplomatic decision‑making.
EXPLANATION
By building AI systems in modular pieces whose operation can be inspected, negotiators retain control and can understand how each part contributes to recommendations, preserving accountability.
EVIDENCE
Charlie emphasized that “human authority has to remain central” and that tools need to be “modular and transparent so that you can see what’s happening at each stage” [117-120].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Modular, transparent AI design that preserves human authority is a core principle of the MOVE 37 project described in [S1].
MAJOR DISCUSSION POINT
Modular and transparent AI design to preserve human authority
AGREED WITH
Robyn Scott, Gabriela Ramos, J. Michael McQuade
Gabriela Ramos
7 arguments · 164 words per minute · 1455 words · 531 seconds
Argument 1
Need for position‑tracking and historical precedent repositories (Gabriela Ramos)
EXPLANATION
Negotiators would benefit from AI systems that can map where each country stands on issues and retrieve historical negotiation positions. Such repositories would streamline briefings and help identify leverage points.
EVIDENCE
Gabriela recounted her experience with the UNESCO AI ethics recommendation, explaining how visualising country positions on a Zoom call helped her, and she expressed a wish for a repository of traditional positions and negotiation histories [141-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for systematic position-tracking across many countries and a repository of past negotiation stances is discussed in [S2].
MAJOR DISCUSSION POINT
Need for position‑tracking and historical precedent repositories
Argument 2
Risks of mis‑representation and over‑representation; humans must stay in the driver’s seat (Gabriela Ramos)
EXPLANATION
AI tools may misrepresent cultures or over‑represent certain languages, leading to biased outcomes. Gabriela stresses that negotiators must retain control and critically assess AI assumptions rather than letting the technology dictate strategy.
EVIDENCE
She warned that AI assumptions could over-represent certain cultures or languages, emphasizing the need for human drivers to question tools and avoid letting AI dominate the negotiation process [352-357].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The warning that AI must not dominate decision-making and that human negotiators must remain in control is echoed in [S19].
MAJOR DISCUSSION POINT
Risks of mis‑representation and over‑representation; humans must stay in the driver’s seat
Argument 3
Language diversity and cultural nuance must be reflected in models (Gabriela Ramos)
EXPLANATION
For AI to be effective in global negotiations, it must capture linguistic diversity and cultural philosophies such as Ubuntu. Without multilingual representation, models risk bias and exclusion of non‑dominant perspectives.
EVIDENCE
Gabriela highlighted that culture is expressed through language, cited the need to represent many languages, referenced a Namibian representative’s critique of individualistic language, and discussed Ubuntu as an example of cultural nuance that models must capture [391-403].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of embedding cultural nuance and linguistic diversity into AI systems for diplomatic work is highlighted in [S20].
MAJOR DISCUSSION POINT
Language diversity and cultural nuance must be reflected in models
Argument 4
Transparency about the data sources underlying AI outputs is essential for trust and accountability.
EXPLANATION
Negotiators need to know which datasets and documents an AI system draws on in order to assess bias, verify reliability, and make informed decisions, especially when AI informs high‑stakes diplomatic positions.
EVIDENCE
Gabriela says she would always ask “the answer you’re giving me is based on what sources” when evaluating AI-generated recommendations [402-403].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for source transparency to build trust in AI-generated recommendations are made in [S19].
MAJOR DISCUSSION POINT
Source transparency for AI‑generated insights
AGREED WITH
Charlie Posniak, Nandita Balakrishnan, Robyn Scott
Argument 5
AI can help identify aligned countries and facilitate coordinated diplomatic outreach.
EXPLANATION
By mapping where each country stands on specific issues, AI can reveal natural allies, allowing negotiators to quickly reach out and build coalitions that strengthen their position.
EVIDENCE
Gabriela recounts using a Zoom view to see country positions, noting that she called the UK to align with Russia’s stance, which helped her leverage that shared position during the UNESCO AI ethics recommendation negotiations [141-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mapping country positions to find natural allies and facilitate coalition-building is described in [S2].
MAJOR DISCUSSION POINT
AI‑driven ally identification and coalition building
Argument 6
A centralized repository of historical negotiation positions would streamline briefing and strategy formulation.
EXPLANATION
Having quick access to each country’s past stances enables negotiators to prepare more targeted arguments and anticipate objections, reducing preparation time and improving strategic depth.
EVIDENCE
She expresses a wish for a repository that captures traditional positions and negotiation histories, which would aid in understanding both substantive and contextual factors of each country’s behavior [144-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The desire for a central archive of historical negotiation positions to aid briefing is reflected in the discussion of position-tracking in [S2].
MAJOR DISCUSSION POINT
Historical precedent repository for diplomatic preparation
Argument 7
AI can help structure complex negotiation processes by providing step‑by‑step guidance and procedural checklists.
EXPLANATION
Complex negotiations involve many procedural steps; AI can break down the process into manageable stages, ensuring consistency and reducing the cognitive burden on negotiators.
EVIDENCE
Gabriela explained that the UNESCO AI ethics recommendation required a step-by-step approach due to its complexity, and she expressed a desire for tools that could map traditional positions and procedural steps [141-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for AI-driven procedural structuring of multilateral negotiations is mentioned in [S2].
MAJOR DISCUSSION POINT
AI‑driven procedural structuring for complex negotiations
Robyn Scott
21 arguments · 0 words per minute · 0 words · 1 second
Argument 1
Keep humans “above the algorithm” to avoid over‑reliance and preserve agency (Robyn Scott)
EXPLANATION
Users should view AI as a tool that supports their goals rather than a driver that dictates actions. Maintaining an "above the algorithm" stance preserves human agency and prevents complacency.
EVIDENCE
Robyn described a heuristic where being “below the algorithm” is like being a driverless worker, whereas being “above the algorithm” means using tools to further personal goals, urging a shift of capability above the algorithm [256-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation to keep human users “above the algorithm” and preserve agency is a central theme in [S19].
MAJOR DISCUSSION POINT
Keep humans “above the algorithm” to avoid over‑reliance and preserve agency
Argument 2
High optimism but low evaluation; leaders need hands‑on experience with AI (Robyn Scott)
EXPLANATION
Public servants are enthusiastic about AI’s potential, yet many pilot projects lack proper evaluation and leaders often do not use the technology themselves, creating a skills gap. Direct, hands‑on experience is essential to bridge this gap.
EVIDENCE
Robyn cited a 5,000-person survey showing >90 % optimism, noted that 70 % of leaders have pilots but only 45 % evaluate them, and highlighted that leaders not using AI themselves is a major obstacle [228-242].
MAJOR DISCUSSION POINT
High optimism but low evaluation; leaders need hands‑on experience with AI
Argument 3
Multilingual LLM initiatives (e.g., Swiss project) illustrate practical steps (Robyn Scott)
EXPLANATION
Concrete projects, such as the Swiss‑led multilingual LLM trained on over 100 languages, demonstrate how the AI community can embed linguistic diversity from the outset, providing a model for diplomatic applications.
EVIDENCE
Robyn mentioned that the Swiss have built a quasi-governmental initiative to train an LLM on more than 100 languages, run by a former Swiss diplomat, as a practical example of multilingual AI development [424-425].
MAJOR DISCUSSION POINT
Multilingual LLM initiatives (e.g., Swiss project) illustrate practical steps
Argument 4
Pilotitis: proliferation of AI pilots without systematic evaluation creates a critical gap.
EXPLANATION
Many public sector leaders have launched AI pilot projects, but a minority have put in place mechanisms to assess their effectiveness, leading to unverified outcomes and wasted resources.
EVIDENCE
Robyn reported that about 70 % of leaders say they have AI pilots or plan to launch them, yet only 45 % have any plan to evaluate those pilots, highlighting a substantial evaluation gap [241-242].
MAJOR DISCUSSION POINT
Pilotitis and the need for robust evaluation frameworks
AGREED WITH
J. Michael McQuade
Argument 5
Widespread skills and ethical knowledge gaps among public servants hinder responsible AI adoption.
EXPLANATION
A large proportion of civil servants lack familiarity with their own country’s ethical AI frameworks, resulting in ad‑hoc implementations that risk misuse or bias.
EVIDENCE
Robyn noted that only 26 % of public servants implementing AI say they understand their national ethical frameworks, while roughly three-quarters are “freestyling” their deployments, which she described as a terrifying skills gap [247-250].
MAJOR DISCUSSION POINT
Skills and ethical knowledge gaps in the public sector
Argument 6
AI holds the potential to unlock roughly $1.75 trillion of public‑sector value, but this requires addressing bureaucratic inertia.
EXPLANATION
AI can dramatically improve efficiency in repetitive bureaucratic processes, yet realizing this economic upside depends on overcoming entrenched administrative obstacles.
EVIDENCE
Robyn cited a BCG estimate that $1.75 trillion could be unlocked in the public sector if AI is harnessed correctly, emphasizing that AI “loves bureaucracy” and repeatable processes [233-236].
MAJOR DISCUSSION POINT
Economic upside of AI for the public sector and the need to tackle bureaucracy
Argument 7
Interdisciplinary collaboration, such as the partnership with Stanford HAI, is essential for effective AI deployment in government.
EXPLANATION
Bringing together context experts and content experts enables the design of AI solutions that are both technically sound and policy‑relevant.
EVIDENCE
Robyn mentioned that Stanford HAI is one of their collaborators and that they act as “more context experts than content experts, bringing the content experts into the middle” of their work [223-224].
MAJOR DISCUSSION POINT
Importance of interdisciplinary partnerships for AI in the public sector
Argument 8
Over‑trust in AI due to perceived high accuracy can lead to dangerous complacency, so users must maintain critical oversight.
EXPLANATION
Robyn warns that users may assume AI systems are near‑perfect after observing high accuracy rates, which can cause them to stop questioning outputs. This “sleeping at the wheel” effect risks decisions being made on flawed AI recommendations.
EVIDENCE
She describes a phenomenon where AI reaches 85-90 % accuracy and users begin to treat it as 100 % accurate, leading to over-reliance and missed errors, exemplified by incidents where the model fails at a task that then succeeds when retried a short time later [330-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The risk of users treating high-accuracy AI as flawless and becoming complacent is warned about in [S19].
MAJOR DISCUSSION POINT
Risk of over‑trust and complacency with AI outputs
Argument 9
Adopting a 'battle mentality' toward AI, viewing oneself as opposed to the algorithm, helps maintain human agency and provides psychological safeguards against over‑reliance.
EXPLANATION
Robyn suggests that users should treat AI as an adversary whose interests may not align with theirs, fostering a habit of critical questioning and the development of counter‑argument tools to preserve decision‑making autonomy.
EVIDENCE
She explains her personal heuristic of positioning herself “in opposition to that algorithm” because its interests differ, and stresses the need for human tools and psychological counter-arguments to avoid becoming overly dependent on AI [335-344].
MAJOR DISCUSSION POINT
Psychological safeguards and critical stance toward AI to preserve human agency
Argument 10
The black‑box nature of AI models limits trust; developers must acknowledge limited legibility and communicate constraints transparently.
EXPLANATION
Robyn points out that model creators often cannot fully explain how their systems work, which creates a ceiling on what users can expect and necessitates clear communication about these limitations.
EVIDENCE
She notes that “the people developing these models don’t even have full legibility over how they’re working,” highlighting the opacity and the resulting ceiling on user expectations [323-328].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The limited legibility of AI models and the need for transparent communication of constraints are discussed in [S19].
MAJOR DISCUSSION POINT
Need for transparency about AI model limitations
AGREED WITH
Gabriela Ramos, Charlie Posniak, Nandita Balakrishnan
Argument 11
The optimism‑wary paradox among public servants indicates that policy must balance enthusiasm for AI with careful risk management.
EXPLANATION
While a large majority of civil servants are excited about AI’s potential, they simultaneously express concerns, suggesting that strategies should address both the promise and the pitfalls of AI adoption.
EVIDENCE
Robyn cites a 5,000-person survey showing over 90 % of public servants see huge AI possibilities, yet she also notes “there are lots of paradoxes” and that officials are wary of AI despite their optimism [227-232].
MAJOR DISCUSSION POINT
Balancing optimism and caution in AI policy for the public sector
Argument 12
Moving from ‘painkiller’ AI applications (automation of bureaucracy) to ‘vitamin’ uses (predictive and adaptive policy) is essential for unlocking higher societal value.
EXPLANATION
Robyn differentiates low‑hanging, efficiency‑focused AI tasks from higher‑impact, forward‑looking applications that can transform policy making, urging a shift toward the latter to realize AI’s full potential.
EVIDENCE
She describes AI’s current role as a “painkiller” that automates repeatable bureaucratic processes, and contrasts it with the “vitamin” prize of predictive, responsive, and adaptive policy capabilities that are only bounded by imagination [237-239].
MAJOR DISCUSSION POINT
Transitioning AI focus from automation to strategic, predictive policy tools
Argument 13
AI could create a zero‑sum dynamic that drains human agency in diplomatic negotiations.
EXPLANATION
Robyn warned that if AI is treated as the primary driver, it may absorb the agency that belongs to human negotiators, turning the process into a zero‑sum game where humans lose decision‑making power. Preserving human agency requires keeping AI as a supportive tool rather than a dominant actor.
EVIDENCE
She described a scenario where the growing reliance on AI could lead to a situation where “the agency drains away to AI and that all comes at a cost to humans,” emphasizing the need to maintain human agency in diplomacy [252-255].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concern that AI could absorb decision-making agency, creating a zero-sum situation for humans, is raised in [S19].
MAJOR DISCUSSION POINT
Risk of AI displacing human agency and creating zero‑sum dynamics
Argument 14
AI‑driven training platforms can rapidly scale capacity development for public officials, delivering personalized learning at massive scale.
EXPLANATION
By leveraging AI to curate and adapt educational content, governments can upskill large numbers of civil servants efficiently. This approach addresses the skills gap and ensures that leaders acquire hands‑on experience with AI tools.
EVIDENCE
Robyn notes that Apolitical’s AI courses have already reached hundreds of thousands of people worldwide, demonstrating the potential for AI-enabled learning to be disseminated at scale [219-220].
MAJOR DISCUSSION POINT
Scaling AI‑based training to close public‑sector skills gaps
Argument 15
Incorporating cultural oversight and local context into AI development is essential for diplomatic applications to avoid bias and ensure relevance.
EXPLANATION
AI tools used in negotiations must reflect the cultural nuances and values of diverse stakeholders; otherwise they risk producing ineffective or biased outcomes. Embedding cultural expertise from the outset helps create models that respect different worldviews.
EVIDENCE
Robyn acknowledges a growing recognition of cultural oversight importance, stating that “there’s more and more recognition of the cultural oversight and importance” and urging continued focus on it [420-424].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of cultural oversight and embedding local context into AI tools for diplomacy is highlighted in [S20].
MAJOR DISCUSSION POINT
Need for cultural and contextual sensitivity in AI for diplomacy
Argument 16
AI can provide real‑time multilingual translation and language support during diplomatic negotiations, reducing language barriers and enhancing inclusivity.
EXPLANATION
Deploying AI‑powered translation services enables participants speaking different languages to engage directly, fostering more equitable negotiations. Such capabilities are crucial given the multilingual nature of international diplomacy.
EVIDENCE
Robyn references the Swiss-led multilingual LLM project trained on over 100 languages, illustrating a concrete effort to embed linguistic diversity into AI systems for diplomatic contexts [424-425].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Real-time multilingual translation to lower language barriers in negotiations is described in [S19] and reinforced by the multilingual focus in [S20].
MAJOR DISCUSSION POINT
Real‑time multilingual AI support to facilitate inclusive negotiations
Argument 17
The gap between AI enthusiasm and concrete implementation must be closed; policymakers need to move from discussion to actionable projects.
EXPLANATION
Although many public servants express optimism about AI, there is a noticeable shortfall in actual deployment and measurable outcomes. Bridging this gap requires translating AI talk into concrete actions and systematic evaluation.
EVIDENCE
Robyn notes that “there’s lots of AI talk” followed immediately by “there’s less AI action,” indicating a disparity between rhetoric and implementation [239-240].
MAJOR DISCUSSION POINT
Closing the gap between AI discourse and implementation
Argument 18
AI deployments must be explicitly aligned with national ethical frameworks to ensure responsible and trustworthy use.
EXPLANATION
A large proportion of civil servants lack familiarity with their own country’s AI ethical guidelines, which risks ad‑hoc and potentially harmful implementations. Embedding ethical framework awareness into AI projects is essential for responsible adoption.
EVIDENCE
Robyn reports that only 26 % of public servants implementing AI say they understand their own country’s ethical frameworks, while roughly three-quarters are “freestyling,” highlighting a critical skills and governance gap [247-250].
MAJOR DISCUSSION POINT
Aligning AI tools with national ethical guidelines
Argument 19
AI can automate routine research and writing tasks for public servants, freeing capacity for higher‑order policy work.
EXPLANATION
AI tools can handle large‑scale information gathering and drafting, allowing civil servants to focus on analysis and decision‑making rather than repetitive documentation.
EVIDENCE
Robyn noted that a third of public officials spend much of their day on research and writing, and that AI excels at these tasks, describing AI as a “painkiller” for bureaucracy and highlighting its ability to automate such work [234-236].
MAJOR DISCUSSION POINT
Automation of research and writing to free civil servant capacity
Argument 20
Effective AI deployment in government requires strong context expertise, with the team acting as contextual guides and integrating content experts to ensure relevance and policy alignment.
EXPLANATION
Robyn emphasizes that her organization positions itself as a context expert rather than a pure content expert, bringing subject‑matter specialists into the process to tailor AI solutions to governmental needs. This approach ensures that AI tools are grounded in real‑world policy contexts.
EVIDENCE
She states that Stanford HAI is a collaborator and that “we are more context experts than content experts, and we bring the content experts into the middle,” highlighting the team’s role in framing AI projects within the appropriate policy environment [223-224].
MAJOR DISCUSSION POINT
Context‑driven AI design for government
Argument 21
AI adoption must be paired with parallel human capacity development to prevent skill erosion and maintain agency, ensuring people are kept “above the algorithm” while AI tools are introduced.
EXPLANATION
Robyn argues that as AI tools become more capable, it is essential to simultaneously build human skills and keep users in a supervisory role, preventing dependence on the technology and preserving decision‑making authority.
EVIDENCE
She notes the need to "keep moving people up above the algorithm" and stresses that we need to "build up humans at the same time," linking AI capability growth with human capacity enhancement [254-259].
MAJOR DISCUSSION POINT
Coupling AI rollout with human capacity building
Nandita Balakrishnan
8 arguments · 214 words per minute · 1355 words · 379 seconds
Argument 1
Explainability of AI outputs is essential for accountability (Nandita Balakrishnan)
EXPLANATION
Decision‑makers must be able to trace how AI arrived at a recommendation to justify it to policymakers and the public. Without clear explanations, AI‑generated assessments cannot be trusted in diplomatic contexts.
EVIDENCE
Nandita emphasized that AI outputs need to be explainable, stating that analysts must be able to show how conclusions were reached and that AI should be treated as a data point, not a finished intelligence product [297-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for explainable AI to ensure accountability and traceability of recommendations is emphasized in [S19].
MAJOR DISCUSSION POINT
Explainability of AI outputs is essential for accountability
Argument 2
Public‑sector AI literacy and workflow integration are critical for effective use (Nandita Balakrishnan)
EXPLANATION
For AI to improve diplomatic analysis, government employees need training and integrated workflows that embed AI tools into daily tasks. Building AI literacy across agencies ensures consistent, responsible adoption.
EVIDENCE
She argued that AI reshapes the threat landscape, that the public sector must adopt AI to stay competitive, and that building AI literacy within intelligence, State Department, commerce, and other federal agencies is essential [194-200].
MAJOR DISCUSSION POINT
Public‑sector AI literacy and workflow integration are critical for effective use
Argument 3
AI reshapes threat landscape; must avoid creating new problems through uneven adoption (Nandita Balakrishnan)
EXPLANATION
AI fundamentally changes geopolitical competition, making it a core factor in security analyses. Unequal access to AI could generate new risks, so policymakers must consider these dynamics when designing strategies.
EVIDENCE
Nandita stated that AI has fundamentally changed the threat landscape and is now a foundational element for understanding geopolitics, warning that the world cannot be divorced from AI considerations [194-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The transformative impact of AI on geopolitics and the risk of uneven adoption creating new security challenges are discussed in [S20].
MAJOR DISCUSSION POINT
AI reshapes threat landscape; must avoid creating new problems through uneven adoption
Argument 4
AI can surface overlooked historical data, improving the completeness and accuracy of intelligence assessments.
EXPLANATION
By automatically retrieving and synthesizing older or obscure information, AI helps analysts avoid blind spots that could undermine policy recommendations.
EVIDENCE
She recounted an experience where a mentor pointed out a ten-year-old data point that contradicted her analysis, noting that without AI she would never have discovered it, and suggested a tool could have identified and synthesized that data [185-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s ability to uncover hidden or forgotten historical evidence for more complete assessments is noted in [S21].
MAJOR DISCUSSION POINT
AI as a tool for uncovering hidden historical evidence
Argument 5
AI can be employed for predictive geopolitical forecasting to anticipate events and inform policy decisions.
EXPLANATION
Using AI to forecast geopolitical developments enables policymakers to prepare proactively, demonstrate concrete future benefits of AI adoption, and shape strategic planning with data‑driven foresight.
EVIDENCE
Nandita mentions a project “looking at how AI can be used for predicting geopolitical events” for both military and State Department applications [201-202].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Predictive AI for geopolitical forecasting and scenario anticipation is described in [S21].
MAJOR DISCUSSION POINT
Predictive AI for geopolitics
Argument 6
AI can dramatically increase the speed and efficiency of intelligence analysis, enabling analysts to produce assessments faster and with broader evidence bases.
EXPLANATION
By automating data retrieval, synthesis, and pattern detection, AI reduces the manual workload of analysts, allowing them to incorporate more information and generate insights more quickly while still retaining human oversight.
EVIDENCE
Nandita noted that “if I had access to these tools as an analyst, I could have worked much faster and much smarter” and recounted a past experience where a mentor identified a ten-year-old data point that contradicted her analysis, which she said would have been discovered with AI assistance [179-180][185-188].
MAJOR DISCUSSION POINT
AI‑enhanced productivity for intelligence assessments
Argument 7
Cross‑sector collaboration is essential for effective AI integration in diplomatic analysis.
EXPLANATION
Bringing together expertise from academia, the public sector, and the private sector ensures that AI tools are both technically robust and aligned with policy needs, fostering more comprehensive and trustworthy solutions.
EVIDENCE
Nandita describes her career spanning three distinct sectors (academia, the public sector, and the private sector) and emphasizes the importance of integrating insights from all three to advance AI adoption in intelligence and diplomacy [167-169].
MAJOR DISCUSSION POINT
Importance of interdisciplinary, cross‑sector collaboration for AI in diplomacy
Argument 8
Embedding AI into routine workflows across government agencies is essential for consistent and effective use.
EXPLANATION
For AI to improve diplomatic analysis, it must be integrated into daily processes of intelligence, State Department, commerce, and other agencies, ensuring that staff regularly use AI tools rather than treating them as occasional aids.
EVIDENCE
Nandita highlighted the need to build AI literacy and integrate AI into day-to-day workflows across the public sector, noting that many agencies, from intelligence to commerce, need to move smarter and faster with AI [198-200].
MAJOR DISCUSSION POINT
Integration of AI into daily governmental workflows
J. Michael McQuade
7 arguments · 182 words per minute · 2899 words · 954 seconds
Argument 1
Tools must be modular and transparent to keep human judgment central (J. Michael McQuade)
EXPLANATION
AI systems used in diplomacy should be built in modular components that are openly visible, allowing negotiators to understand and control each part of the decision‑making pipeline.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The MOVE 37 initiative stresses modular, transparent AI tools that keep human judgment central, as outlined in [S1].
MAJOR DISCUSSION POINT
Tools must be modular and transparent to keep human judgment central
Argument 2
Engaging diplomats directly to capture needs and hesitancies (J. Michael McQuade)
EXPLANATION
The project conducts one‑on‑one interviews with current and former diplomats to gather insights on how AI could support their work, ensuring that tool development reflects real‑world needs and concerns.
EVIDENCE
Michael explained that the team conducts individual interviews with diplomats worldwide to understand their processes, needs, and hesitations, integrating these insights across work streams [267-278].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The project’s methodology of conducting one-on-one interviews with current and former diplomats to inform tool design is described in [S1].
MAJOR DISCUSSION POINT
Engaging diplomats directly to capture needs and hesitancies
Argument 3
Project considers competitive leverage and the impact of AI on power structures (J. Michael McQuade)
EXPLANATION
The MOVE 37 initiative examines how AI tools can provide strategic advantage in negotiations and how uneven AI access might reshape global power dynamics, advocating for dispersed, non‑defensive deployment.
EVIDENCE
Michael highlighted that AI tools can offer competitive leverage and affect power structures, noting the need for dispersed, offensive AI capabilities rather than defensive ones [208-210][431-432].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of how AI tools can shift competitive leverage and affect global power dynamics appears in [S2].
MAJOR DISCUSSION POINT
Project considers competitive leverage and the impact of AI on power structures
Argument 4
The MOVE 37 project aims to establish baseline AI capability and clear signposts for negotiators to guide responsible augmentation.
EXPLANATION
The initiative seeks to define foundational capabilities, goalposts, and evaluation metrics that enable negotiators to integrate AI tools while preserving human judgment.
EVIDENCE
Michael described a vision of “a set of signposts and goalposts” for augmenting human intelligence and participation, and outlined plans to develop tools and evaluation methodologies as part of the project’s technical side [124-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The core goal of MOVE 37 to define baseline AI capabilities and provide signposts for responsible augmentation is presented in [S1].
MAJOR DISCUSSION POINT
Defining baseline AI capability and guidance for negotiators
Argument 5
Systematic evaluation frameworks are essential for AI tools used in diplomatic negotiations.
EXPLANATION
To ensure AI applications achieve their intended outcomes and avoid unintended consequences, the MOVE 37 project must develop rigorous evaluation methodologies that can assess tool effectiveness throughout the negotiation process.
EVIDENCE
Michael notes that the project will be “developing tools” and “looking at evaluation methodologies, et cetera, et cetera” [127-128] and earlier stresses that “a more rigorous approach is needed” for AI in diplomacy [31].
MAJOR DISCUSSION POINT
Need for systematic evaluation of AI tools in diplomatic contexts
AGREED WITH
Robyn Scott
Argument 6
Global, open collaboration is crucial for the MOVE 37 initiative to avoid a narrow, Cambridge‑centric perspective.
EXPLANATION
The project seeks input and partnership from a worldwide community of diplomats, scholars, and practitioners to ensure that AI tools reflect diverse needs and contexts, rather than being limited to a single institution’s viewpoint.
EVIDENCE
Michael stresses that the work is not solely the purview of a small team in Cambridge; they are actively looking for collaborators, partners, and input from the global community to build a robust AI-augmented negotiation platform [23-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on worldwide collaboration and avoiding a single-institution perspective is highlighted in [S1].
MAJOR DISCUSSION POINT
Need for broad, international collaboration in AI‑for‑diplomacy projects
Argument 7
AI augmentation must preserve human judgment, especially in high‑stakes diplomatic decisions.
EXPLANATION
While AI can provide data‑driven insights, ultimate decision‑making authority should remain with human negotiators to maintain accountability, legitimacy, and ethical oversight.
EVIDENCE
Michael repeatedly emphasizes the importance of being careful about how AI is used, noting that human judgment must stay central to avoid relinquishing responsibility in complex, high-stakes negotiations [23-26][372-374].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The imperative that human judgment remain central in AI-augmented diplomatic decision-making is reiterated in [S19] and [S1].
MAJOR DISCUSSION POINT
Ensuring human judgment remains central in AI‑augmented diplomacy
Audience
2 arguments · 158 words per minute · 399 words · 151 seconds
Argument 1
Question on embedding diverse cultural inputs and guarding against data poisoning (Audience – Sam Dawes)
EXPLANATION
The audience member asked how UNESCO and related tools can ensure that negotiation models incorporate diverse cultural perspectives and protect against malicious data manipulation such as poisoning or prompt injection.
EVIDENCE
Sam Dawes asked whether UNESCO is working on embedding diverse cultural inputs into datasets and models, and raised concerns about data poisoning and prompt-injection risks for AI mediators [386-390].
MAJOR DISCUSSION POINT
Question on embedding diverse cultural inputs and guarding against data poisoning
Argument 2
Asymmetric data access could shift balance of power; need for dispersed, not defensive, AI tools (Audience – Arman)
EXPLANATION
The audience member highlighted that unequal data availability could give some states an overwhelming advantage in negotiations, prompting a discussion on ensuring AI tools are widely accessible and not used solely for offensive advantage.
EVIDENCE
Arman asked about the impact of unequal data sets on balance of power, and Michael responded that the project is thinking about competitive leverage, advocating for dispersed AI tools to avoid power imbalances [430][431-432].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns about unequal data sets giving some states an overwhelming advantage and the call for dispersed AI tools are discussed in [S2].
MAJOR DISCUSSION POINT
Asymmetric data access could shift balance of power; need for dispersed, not defensive, AI tools
Agreements
Agreement Points
AI should augment, not replace, diplomats; human authority must remain central
Speakers: Slavina Ancheva, Charlie Posniak, J. Michael McQuade, Gabriela Ramos, Robyn Scott
AI should augment, not replace, diplomats, preserving the fundamentally interpersonal nature of negotiations.
AI tools should be designed as modular, transparent components to keep human authority central in diplomatic decision‑making.
Tools must be modular and transparent to keep human judgment central
Risks of mis‑representation and over‑representation; humans must stay in the driver’s seat
Keep humans “above the algorithm” to avoid over‑reliance and preserve agency
All speakers emphasized that AI is intended to support negotiators while keeping humans in control of decisions, stressing modularity, transparency, and the need for human judgment throughout the negotiation process [60-62][117-120][23-26][352-357][256-259].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with calls for human-centered AI in diplomatic services, emphasizing oversight and avoiding overreliance, as highlighted in discussions on cautious AI adoption and the need for human oversight in diplomatic contexts [S42][S44][S64].
Robust, transparent data and source provenance are essential for trustworthy AI in diplomacy
Speakers: Gabriela Ramos, Charlie Posniak, Nandita Balakrishnan, Robyn Scott
Transparency about the data sources underlying AI outputs is essential for trust and accountability.
Critical role of robust data infrastructure for diplomatic AI tools
Explainability of AI outputs is essential for accountability
The black‑box nature of AI models limits trust; developers must acknowledge limited legibility and communicate constraints transparently.
Speakers agreed that AI systems must be built on strong, well-documented data sets, with clear source attribution and explainability, to ensure accountability and trust in high-stakes diplomatic contexts [401-403][108-115][297-304][323-328].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on data provenance reflects concerns about data integrity and security, echoing warnings about data-poisoning attacks and the need for secure AI development lifecycles [S51][S50].
Multilingual and cultural inclusion must be built into AI models for global negotiations
Speakers: Gabriela Ramos, Robyn Scott
Language diversity and cultural nuance must be reflected in models
Multilingual LLM initiatives (e.g., Swiss project) illustrate practical steps
Both highlighted the necessity of representing many languages and cultural philosophies (e.g., Ubuntu) in AI tools, citing UNESCO’s experience and the Swiss multilingual LLM project as examples [391-403][424-425].
POLICY CONTEXT (KNOWLEDGE BASE)
This requirement is consistent with international calls for linguistic and cultural diversity in AI governance, such as UN-led multilingual AI frameworks and cross-cultural AI initiatives [S46][S48][S59][S60].
AI can alleviate information overload and support the full negotiation workflow
Speakers: Slavina Ancheva, Charlie Posniak, Nandita Balakrishnan, Robyn Scott
Managing information overload and stakeholder complexity (Slavina Ancheva)
Task decomposition: research, analysis, strategy, execution supported by AI (Charlie Posniak)
AI‑enhanced productivity for intelligence assessments
AI can automate routine research and writing tasks for public servants
All agreed that AI tools can manage massive documents, assist in research, analysis, strategy, and execution, and speed up assessments, thereby reducing cognitive load on negotiators and analysts [66-73][102-107][179-180][185-188][234-236].
POLICY CONTEXT (KNOWLEDGE BASE)
Addressing information overload mirrors identified challenges in negotiations and the role of LLMs to synthesize large data volumes, as discussed in AI-for-diplomacy research [S39][S41].
Systematic evaluation frameworks are needed for AI pilots in the public sector
Speakers: Robyn Scott, J. Michael McQuade
Pilotitis: proliferation of AI pilots without systematic evaluation creates a critical gap.
Systematic evaluation frameworks are essential for AI tools used in diplomatic negotiations.
Both stressed that many AI pilots lack proper assessment, calling for rigorous evaluation methods to ensure effectiveness and avoid wasted resources [241-242][31][127-128].
POLICY CONTEXT (KNOWLEDGE BASE)
The push for systematic evaluation is supported by evidence that many pilots lack robust evaluation plans and by recommendations for coordinated evaluation roadmaps and sandbox approaches in the public sector [S52][S55][S44][S53].
Caution is required to avoid over‑trust and complacency with AI outputs
Speakers: Robyn Scott, Charlie Posniak, Gabriela Ramos, J. Michael McQuade
Over‑trust in AI due to perceived high accuracy can lead to dangerous complacency.
AI tools should be designed as modular, transparent components to keep human authority central in diplomatic decision‑making.
Risks of mis‑representation and over‑representation; humans must stay in the driver’s seat
We have to be careful about how AI is used, keeping human judgment central
Speakers warned that high perceived accuracy can cause users to ignore errors, emphasizing the need for modular, transparent designs and continuous human oversight to prevent misuse [330-334][117-120][352-357][23-26].
POLICY CONTEXT (KNOWLEDGE BASE)
This caution aligns with warnings about overreliance on algorithms and the necessity of critical human oversight in diplomatic decision-making [S42][S43][S44].
Similar Viewpoints
Both highlighted the need to break down the negotiation process into distinct tasks and use AI to handle data‑intensive aspects, thereby reducing cognitive burden [66-73][102-107].
Speakers: Slavina Ancheva, Charlie Posniak
Managing information overload and stakeholder complexity (Slavina Ancheva)
Task decomposition: research, analysis, strategy, execution supported by AI (Charlie Posniak)
Both stressed that negotiators must be able to trace AI‑generated recommendations back to their data sources to maintain accountability and trust [297-304][401-403].
Speakers: Nandita Balakrishnan, Gabriela Ramos
Explainability of AI outputs is essential for accountability
Transparency about the data sources underlying AI outputs is essential for trust and accountability.
Both called for rigorous evaluation of AI initiatives to move beyond pilot projects and ensure effective deployment [241-242][31][127-128].
Speakers: Robyn Scott, J. Michael McQuade
Pilotitis: proliferation of AI pilots without systematic evaluation creates a critical gap.
Systematic evaluation frameworks are essential for AI tools used in diplomatic negotiations.
Both identified a skills and literacy gap in the public sector, emphasizing that hands‑on experience and integration into daily workflows are needed to realize AI’s potential [228-242][194-200].
Speakers: Robyn Scott, Nandita Balakrishnan
High optimism but low evaluation; leaders need hands‑on experience with AI
Public‑sector AI literacy and workflow integration are critical for effective use
Unexpected Consensus
Cultural and linguistic inclusion in AI for diplomacy
Speakers: Audience (Sam Dawes), Gabriela Ramos, Robyn Scott
Question on embedding diverse cultural inputs and guarding against data poisoning (Audience – Sam Dawes)
Language diversity and cultural nuance must be reflected in models
Multilingual LLM initiatives (e.g., Swiss project) illustrate practical steps
An audience member raised concerns about cultural representation and data poisoning, and both panelists independently emphasized the need for multilingual, culturally aware AI models, showing an unanticipated alignment between audience concerns and panel expertise [386-390][391-403][424-425].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis mirrors broader policy discussions on ensuring linguistic and cultural diversity in AI systems, as reflected in UN and multistakeholder initiatives [S46][S48][S59][S60].
Overall Assessment

There is strong consensus among the panelists that AI should serve as an augmentative, transparent, and accountable tool that supports negotiators without supplanting human judgment. Common ground exists on the necessity of robust data governance, multilingual/cultural inclusivity, managing information overload, and establishing systematic evaluation frameworks. The shared emphasis on human‑centered design and capacity building signals a coherent direction for the MOVE 37 initiative.

High consensus – the speakers largely agree on principles, risks, and implementation priorities, which bodes well for coordinated development of AI‑enabled diplomatic tools.

Differences
Different Viewpoints
Trustworthiness and role of large language models (LLMs) in diplomatic AI tools
Speakers: Charlie Posniak, Robyn Scott
LLMs lack verifiability and transparency; need integration with game‑theoretic and decision‑analysis methods
The black‑box nature of AI models limits trust; developers must acknowledge limited legibility and communicate constraints transparently
Charlie argues that LLMs are fluent but their outputs cannot be verified and their opacity hampers accountability, so they must be combined with established game-theoretic and decision-analysis methods [82-88]. Robyn points out that even the developers of AI models do not have full legibility over how they work, creating a ceiling on user expectations and requiring transparent communication of these limits [323-328]. Both urge caution, but differ in emphasis: Charlie on supplementing LLMs with verifiable analytical methods, Robyn on openly communicating their inherent opacity.
POLICY CONTEXT (KNOWLEDGE BASE)
Highlights the dispute over LLM reliability, with experts questioning their trustworthiness and calling for stricter validation, as seen in critiques of overconfidence in AI outputs [S42][S45][S51].
Current state of evaluation for AI pilots versus planned systematic evaluation
Speakers: Robyn Scott, J. Michael McQuade
Pilotitis: proliferation of AI pilots without systematic evaluation creates a critical gap
Systematic evaluation frameworks are essential for AI tools used in diplomatic negotiations
Robyn highlights that 70 % of leaders have AI pilots but only 45 % have any plan to evaluate them, indicating a large gap in systematic assessment [241-242]. Michael states that the MOVE 37 project will develop tools and look at evaluation methodologies as part of its technical work, implying that rigorous evaluation is already being built into the initiative [127-128][31]. The disagreement lies in the perception of how mature the evaluation processes are: Robyn sees a present deficiency, while Michael emphasizes forthcoming frameworks.
POLICY CONTEXT (KNOWLEDGE BASE)
Points to the gap between pilot implementation and evaluation planning, documented in pilot statistics and reinforced by calls for systematic evaluation frameworks [S52][S55][S44].
Adequacy of multilingual AI models for capturing cultural nuance
Speakers: Gabriela Ramos, Robyn Scott
Language diversity and cultural nuance must be reflected in models
Multilingual LLM initiatives (e.g., Swiss project) illustrate practical steps
Gabriela stresses that culture is expressed through language and that models must represent many languages and philosophies such as Ubuntu to avoid mis-representation [391-403]. Robyn cites the Swiss-led multilingual LLM trained on over 100 languages as a concrete step toward linguistic inclusion, suggesting that such projects address the cultural challenge [424-425]. The disagreement is whether existing multilingual initiatives are sufficient (Robyn) or whether deeper cultural embedding is still lacking (Gabriela).
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects ongoing debate about whether current multilingual LLMs capture cultural subtleties, with evidence of English-centric bias and efforts to broaden language coverage [S59][S61][S46].
Impact of AI on power dynamics and need for equitable access
Speakers: J. Michael McQuade, Audience (Arman)
Project considers competitive leverage and the impact of AI on power structures
Asymmetric data access could shift balance of power; need for dispersed, not defensive, AI tools
Michael explains that the project is thinking about how AI tools can provide competitive leverage and affect power structures, advocating for tools that are dispersed and not purely defensive [208-210][431-432]. Arman raises a concern that if some states have superior data sets, they could dominate negotiations, asking how the project will address such asymmetries [430][431-432]. The disagreement centers on the extent to which the project’s current approach addresses the risk of power imbalances.
POLICY CONTEXT (KNOWLEDGE BASE)
Ties to discussions of AI as a new source of geopolitical power and the importance of inclusive governance to avoid widening digital divides [S62][S47][S57].
Unexpected Differences
Audience raises data‑poisoning and prompt‑injection risks that speakers do not directly address
Speakers: Audience (Sam Dawes), Panel speakers (Gabriela Ramos, Robyn Scott, J. Michael McQuade)
Question on embedding diverse cultural inputs and guarding against data poisoning
Risks of mis‑representation and over‑representation; humans must stay in the driver’s seat
The black‑box nature of AI models limits trust; developers must acknowledge limited legibility and communicate constraints transparently
Project considers competitive leverage and the impact of AI on power structures
Sam Dawes asks how UNESCO can ensure diverse cultural inputs are embedded and how to mitigate data-poisoning and prompt-injection risks [386-390]. None of the panelists explicitly respond to data-poisoning; Gabriela focuses on cultural representation, Robyn on model opacity, and Michael on power dynamics. The emergence of a technical security concern not covered by the speakers constitutes an unexpected disagreement.
POLICY CONTEXT (KNOWLEDGE BASE)
Directly relates to identified cybersecurity threats of data poisoning and prompt-injection attacks in generative AI systems [S51][S50].
Different views on whether multilingual LLM projects sufficiently address cultural nuance
Speakers: Gabriela Ramos, Robyn Scott
Language diversity and cultural nuance must be reflected in models
Multilingual LLM initiatives (e.g., Swiss project) illustrate practical steps
Gabriela argues that merely adding languages is insufficient without capturing deeper cultural philosophies such as Ubuntu [391-403]. Robyn points to the Swiss multilingual LLM as a concrete solution, implying that language coverage addresses the cultural gap [424-425]. The contrast between a cultural-depth requirement and a language-coverage solution was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Mirrors divergent opinions on multilingual model adequacy, with some projects praised for language coverage while others note persistent cultural gaps [S59][S61][S46].
Overall Assessment

The panel largely agrees on the principle that AI should augment rather than replace human negotiators and that human authority must remain central. However, clear disagreements emerge around the trustworthiness of LLMs, the adequacy of current evaluation practices, the sufficiency of multilingual models for cultural representation, and how to prevent AI‑driven power imbalances. Audience questions introduce additional concerns about data poisoning that were not addressed by the speakers.

Moderate – while there is consensus on high‑level goals, substantive divergences on technical trust, evaluation, cultural inclusion, and geopolitical equity could hinder coordinated progress unless reconciled. These disagreements suggest the need for clearer standards on model transparency, systematic evaluation frameworks, and inclusive multilingual‑cultural design to ensure the MOVE 37 initiative can achieve its objectives without exacerbating existing inequities.

Partial Agreements
All three agree that human authority must remain central and that AI systems need to be transparent or positioned as tools beneath human control. Charlie proposes modular transparency, Gabriela emphasizes questioning assumptions and retaining driver control, while Robyn advocates an “above the algorithm” stance to maintain agency. Their shared goal is preserving human judgment, but they differ on the mechanism (modular design vs. critical questioning vs. psychological stance) [117-120][352-357][256-259][335-344].
Speakers: Charlie Posniak, Gabriela Ramos, Robyn Scott
AI tools should be designed as modular, transparent components to keep human authority central in diplomatic decision‑making
Risks of mis‑representation and over‑representation; humans must stay in the driver’s seat
Keep humans “above the algorithm” to avoid over‑reliance and preserve agency
The speakers concur that AI is meant to augment diplomatic work without supplanting the human element. Slavina explicitly states they are not looking to replace diplomats but to give them better tools [60-62]. Michael repeatedly stresses that human judgment must stay central in AI‑augmented negotiations [23-26][372-374]. Charlie adds that modular, transparent design keeps human authority central [117-120]. They differ on implementation: Slavina focuses on tool provision, Michael on signposts and guidelines, Charlie on modular architecture.
Speakers: Slavina Ancheva, J. Michael McQuade, Charlie Posniak
AI should augment, not replace, diplomats, preserving the fundamentally interpersonal nature of negotiations
AI augmentation must preserve human judgment, especially in high‑stakes diplomatic decisions
AI tools should be designed as modular, transparent components to keep human authority central in diplomatic decision‑making
Takeaways
Key takeaways
Diplomatic negotiations are highly complex, involving many stakeholders, massive information flows, and dynamic strategic considerations.
AI can augment negotiation processes (research, analysis, strategy, execution) but must remain a tool that supports, not replaces, human judgment.
Large language models alone are insufficient due to lack of verifiability, transparency, and accountability; they need to be integrated with established game‑theoretic and decision‑analysis methods.
Human‑in‑the‑loop (or “above the algorithm”) is essential to preserve agency, avoid over‑reliance, and ensure ethical governance.
Transparency, modularity, and explainability of AI outputs are critical for accountability in high‑stakes diplomatic contexts.
Building AI literacy and hands‑on experience within the public sector is a prerequisite for effective adoption; many officials are optimistic but lack evaluation frameworks.
Cultural and linguistic diversity must be reflected in training data and model design to avoid bias and mis‑representation.
Data integrity risks such as poisoning, prompt injection, and asymmetric data access could reshape geopolitical power balances; safeguards are needed.
The MOVE 37 project will develop tools, evaluation methodologies, and a repository of position‑tracking and historical precedent, while continuously engaging diplomats and practitioners for feedback.
Resolutions and action items
Develop a suite of modular, transparent AI tools for each stage of negotiation (research agents, position‑tracking, strategy sandbox, real‑time transcription/translation).
Create and maintain a centralized repository of historical negotiation positions and cultural/linguistic metadata to support future AI assistance.
Implement an ongoing interview program with current and former diplomats to capture needs, hesitations, and use‑case requirements.
Integrate AI literacy training and hands‑on pilot programs for public‑sector officials, with explicit evaluation plans.
Adopt a governance framework that keeps human authority central, ensures explainability of AI outputs, and defines the scope of AI augmentation for each institutional context.
Explore collaboration with existing multilingual LLM initiatives (e.g., the Swiss diplomatic‑focused model) to address language diversity.
Invite interested stakeholders to join the MOVE 37 effort and provide feedback on tool design, data sources, and ethical safeguards.
Unresolved issues
Concrete methods for guaranteeing that AI models adequately capture the full spectrum of cultural and linguistic nuances across all negotiating parties.
Specific technical and policy safeguards against data poisoning, prompt injection, and other adversarial attacks in diplomatic AI tools.
Clear metrics and evaluation criteria for AI pilots in the public sector; how success will be measured and reported.
How to manage asymmetric access to data and AI capabilities that could shift the balance of power among states.
Detailed processes for integrating AI recommendations into existing decision‑making workflows without undermining accountability.
The extent to which AI should be used in adversarial versus cooperative negotiation settings.
Suggested compromises
Maintain humans “above the algorithm” – AI provides options and analysis, but final decisions remain with human negotiators.
Scope AI augmentation to fit the specific team, institution, and negotiation context rather than applying a one‑size‑fits‑all solution.
Use AI as a transparent, modular aid (e.g., for data synthesis, position tracking) while keeping the core strategic judgment human‑driven.
Balance optimism about AI’s potential with rigorous evaluation and pilot testing before large‑scale deployment.
Thought Provoking Comments
Close your eyes and imagine… you walk into a negotiation with 10 agenda items, but it’s not just those items—there are dozens of internal and external factors, multiple counterparties, and hundreds of stakeholders influencing the process.
She vividly illustrated the hidden complexity of modern diplomatic negotiations, moving the conversation beyond a simplistic two‑party view and highlighting the massive informational and strategic load negotiators face.
Set the stage for the panel to discuss AI as a tool for managing complexity; prompted later speakers to reference the need for data handling, position‑tracking, and multi‑layered analysis.
Speaker: Slavina Ancheva
Why can’t you just ask an LLM? … Language models are remarkable, but their fluency isn’t verifiable, they’re opaque, and high‑stakes negotiations need accountability. We also have an 80‑year‑old toolkit—game theory, decision analysis, ML—that we must integrate rather than replace with chatbots.
He challenged the naïve assumption that large language models alone can solve diplomatic problems, emphasizing verification, transparency, and the value of established analytical frameworks.
Shifted the discussion from enthusiasm about LLMs to a more cautious, nuanced view; led to subsequent emphasis on explainability, human‑in‑the‑loop, and the three challenges (representation, strategic misrepresentation, specifying success) he outlined.
Speaker: Charlie Posniak
We received 55,000 public comments on the UNESCO AI ethics recommendation and used AI to integrate them. It would have been amazing to have an AI repository of each country’s historical positions and negotiation style.
Provided a concrete, real‑world example of AI already aiding a massive, multilingual policy process, while also identifying a clear gap—systematic, AI‑driven tracking of country positions.
Validated the potential of AI in diplomatic workflows; inspired other panelists to discuss data repositories, position‑tracking tools, and the need for cultural sensitivity.
Speaker: Gabriela Ramos
AI has fundamentally changed the threat landscape and the scope for global competition. We must build AI literacy across the entire public sector—not just the military—so analysts can use these tools effectively.
She framed AI as a strategic geopolitical factor, not merely a technical tool, and highlighted the systemic challenge of upskilling a diverse bureaucracy.
Expanded the conversation from negotiation‑specific tools to broader institutional readiness; prompted discussion on training, adoption barriers, and the importance of embedding AI across agencies.
Speaker: Nandita Balakrishnan
70 % of leaders have AI pilots, but only 45 % evaluate them; only 26 % understand their country’s ethical frameworks. We risk a ‘zero‑sum’ dynamic where agency drains away to AI.
She presented striking survey data that exposed a gap between AI enthusiasm and responsible implementation, and introduced the “below/above the algorithm” heuristic.
Moved the dialogue toward governance and ethical oversight; led participants to stress evaluation, accountability, and keeping humans ‘above’ the algorithm.
Speaker: Robyn Scott
We have a phenomenon called ‘sleeping at the wheel’—over‑trusting AI after it performs well, ignoring false negatives, and assuming 100 % accuracy.
She highlighted a psychological pitfall of AI adoption that goes beyond technical issues, emphasizing the need for continual critical scrutiny.
Prompted panelists to discuss safeguards, continuous validation, and the importance of maintaining a skeptical stance toward AI outputs.
Speaker: Robyn Scott
What would make me comfortable is being able to explain how the AI arrived at its output, show the data points and counter‑arguments it considered, and retain ultimate human accountability.
She articulated a clear requirement for explainability and traceability, linking technical design to policy accountability.
Reinforced the earlier call for transparency; influenced later remarks about modular, auditable tools and the need for human‑level justification.
Speaker: Nandita Balakrishnan
Culture is expressed by language. To capture philosophies like Ubuntu, we must ensure models are trained on diverse languages and sources, otherwise they will reflect individualistic bias.
She connected linguistic diversity to cultural representation in AI models, raising the issue of bias in normative assumptions embedded in AI tools.
Steered the conversation toward data provenance and multilingual training; sparked audience questions about cultural inclusion and data poisoning.
Speaker: Gabriela Ramos
How do we ensure diverse cultural inputs are embedded in datasets and models, and how do we guard against data poisoning and prompt injection in AI‑mediated negotiations?
This question synthesized earlier concerns about bias, representation, and security, pushing the panel to address concrete mitigation strategies.
Triggered a focused exchange on safeguards, source verification, and the need for robust, multilingual, and tamper‑resistant AI systems.
Speaker: Audience member (Sam Dawes)
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a broad, optimistic framing of AI in diplomacy to a nuanced, critical examination of practical, ethical, and institutional challenges. Slavina’s illustration of negotiation complexity opened the floor for AI‑driven solutions, but Charlie’s caution about LLM opacity and the need for established analytical tools redirected the conversation toward accountability and explainability. Gabriela’s real‑world example and Nandita’s emphasis on sector‑wide AI literacy highlighted both opportunities and systemic gaps, while Robyn’s data on pilotitis and the ‘sleeping at the wheel’ phenomenon underscored the gap between enthusiasm and responsible deployment. Audience questions about cultural representation and security risks crystallized these themes, prompting concrete calls for multilingual, transparent, and auditable AI systems. Collectively, these comments steered the panel toward a balanced view that champions AI augmentation while insisting on human oversight, rigorous evaluation, and inclusive design.

Follow-up Questions
How can we ensure that the diverse cultural inputs of the world’s most diverse countries are embedded in the data sets and models which inform negotiations?
Ensuring cultural representation is crucial to avoid bias and to make AI tools relevant and legitimate for all parties in diplomatic negotiations.
Speaker: Sam Dawes (Audience)
If AI is to be a useful neutral mediator in disputes or an assistant to a human mediator, what do we do about data poisoning, prompt injection, and related security risks?
Addressing adversarial attacks is essential for maintaining trust, integrity, and safety of AI‑supported diplomatic processes.
Speaker: Sam Dawes (Audience)
What concrete steps can be taken to advance cultural education frameworks using AI, and what co‑creation or co‑collaboration opportunities exist?
Identifies a need for practical pathways to integrate AI into cultural and educational initiatives, linking diplomatic AI work to broader societal impact.
Speaker: Devika Rao (Audience)
How will AI impact the balance of power given that different countries have unequal access to data sets and AI capabilities?
Explores geopolitical implications of asymmetric AI resources, which could reshape negotiation dynamics and global power structures.
Speaker: Arman (Audience)
Area for further research: Building a repository of historical positions and traditional stances of countries to support negotiators.
A centralized, searchable knowledge base would reduce preparation time and improve situational awareness for diplomats.
Speaker: Gabriela Ramos
Area for further research: Designing AI tools that are modular, transparent, and keep human authority central with clear accountability.
Transparency and modularity are needed to meet democratic and internal accountability requirements in high‑stakes negotiations.
Speaker: Charlie Posniak
Area for further research: Developing robust evaluation methodologies for AI pilots in the public sector, including ethical and performance metrics.
Many pilots lack systematic evaluation; establishing standards will help assess impact and guide scaling.
Speaker: Robyn Scott
Area for further research: Building AI literacy across public‑sector agencies (intelligence, State Department, commerce, etc.) to embed AI into daily workflows.
Widespread AI competence is required for effective, responsible adoption and to avoid a skills gap.
Speaker: Nandita Balakrishnan
Area for further research: Investigating psychological risks such as over‑trust, false negatives, and “sleeping at the wheel” when using AI tools.
Human factors can lead to misuse or overreliance on AI; understanding these dynamics is key for safe deployment.
Speaker: Robyn Scott
Area for further research: Developing multilingual LLMs trained on a wide range of languages for diplomatic contexts.
Multilingual models can better capture cultural nuance and ensure inclusive representation in diplomatic AI tools.
Speaker: Robyn Scott (referencing Swiss initiative)
Area for further research: Methods for AI to generate strategic options and uncover deepest interests in complex negotiations.
Enhancing AI’s ability to surface hidden preferences could improve negotiation outcomes and creativity.
Speaker: Slavina Ancheva
Area for further research: Defining success criteria and metrics for AI‑augmented negotiations.
Clear benchmarks are needed to assess whether AI contributions lead to better, faster, or more equitable agreements.
Speaker: Charlie Posniak
Area for further research: Creating ethical frameworks that ensure AI tools respect cultural sensitivities and avoid over‑representation of dominant cultures.
Ethical safeguards are necessary to prevent AI from reinforcing biases and to maintain legitimacy across diverse stakeholders.
Speaker: Gabriela Ramos
Area for further research: Integrating real‑time transcription and translation services powered by AI into negotiation settings.
Accurate, multilingual communication can reduce misunderstandings and broaden participation in diplomatic talks.
Speaker: Charlie Posniak

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel at the AI Impact Summit 2026 examined how rapid AI advances are reshaping India’s IT services, SaaS models, and broader economic productivity [6-11]. Arundhati Bhattacharya cautioned that market headlines about a 40% drop in Salesforce’s valuation and claims that AI agents will replace the SaaS model are overstated, emphasizing that successful SaaS requires workflow understanding, governance, observability and adoption, not just low-code development [12-23]. She added that current ways of working will evolve and firms must stay agile to add value, noting that many players will emerge and the ultimate test is whether AI improves living standards [30-38].


K. Krithivasan explained that AI will shift many engineers from writing code to orchestrating AI systems, but system integrators will remain essential for testing, validation, requirements and cybersecurity, especially as cloud adoption is still only 30-40% and enterprises must rationalize data estates and train multiple models [46-70]. Salil Parekh highlighted a $300 billion AI services opportunity for Indian firms, citing AI engineering, legacy modernisation and the company’s Topaz Fabric IP layer that enables clients to work across foundation models and custom agents [79-102]. C. Vijayakumar described HCL Tech’s unique position with a software product line, custom silicon and high revenue per employee, and said the company will focus on building solutions that bridge the gap between foundation models and enterprise needs rather than becoming a hyperscaler [108-127].


On the talent side, Krithivasan reported a recent workshop where 1,500 schoolchildren built 1,500 apps in three hours, illustrating AI’s potential to upskill non-technical youth and the industry’s collaboration with the Ministry of IT on curricula [135-147]. Arundhati expanded this view, arguing that democratizing AI for MSMEs and blue-collar workers requires addressing skilling, access to jobs, timely payments and marketplace platforms that certify and match workers, thereby raising overall quality of life [157-177]. Salil noted that India is already leveraging its digital public infrastructure to roll out AI pilots in agriculture, health and education, with support from chip, data-center and architectural layers to make AI services widely affordable [183-192].


Vijayakumar warned that capturing the projected $350-400 billion AI services market will demand substantially higher R&D spending, citing a trillion-dollar “physical AI” opportunity and the need to build solution labs ahead of demand [200-215]. Both Krithivasan and Vijayakumar agreed that AI is likely to create more jobs than it destroys, though the new roles will emphasize programming fundamentals, critical thinking and the ability to orchestrate multiple AI agents [299-304][283-296]. Salil also stressed the importance of responsible-AI frameworks to ensure ethical model training and deployment [306-311]. Concluding, Amitabh Kant summarized the panel’s optimism that AI will drive a “Viksit Bharat” by 2047, generating diverse employment and helping India reach a $30 trillion economy [312-317].


Keypoints


Major discussion points


AI’s impact on the traditional SaaS model and Indian enterprises – Arundhati Bhattacharya cautioned that market hype (e.g., Salesforce’s 40% valuation drop) does not automatically invalidate the SaaS model; she emphasized that SaaS success still depends on workflow understanding, governance, observability and delivering concrete customer value, and that the “jury is still out” on whether AI agents will replace it outright [12-23][30-38].


The evolving nature of IT services work – K. Krithivasan argued that AI will not eliminate system-integrator roles but will shift emphasis toward requirements-engineering, context-engineering, validation, security and cloud-adoption; the volume of work will grow rather than shrink, creating “more interesting work” [46-53][58-70].


Infosys’s AI services opportunity and IP strategy – Salil Parekh described a $300 billion AI-services opportunity across six focus areas (e.g., AI engineering, legacy modernization) and highlighted Infosys’s proprietary “Topaz Fabric” IP layer that abstracts foundation models and agents, signalling a move from pure “builder-for-hire” to owning AI stack IP [79-88][98-102].


HCL Tech’s positioning in the AI stack – C. Vijayakumar explained that HCL leverages its product business and custom silicon capabilities to build enterprise-grade solutions that bridge the gap between foundation models and practical use cases, while deliberately avoiding a hyperscaler role and focusing on solution-centric IP [108-118][124-127].


Skilling, democratization of AI and national digital public infrastructure – The panel stressed that AI must be made accessible to blue-collar workers and MSMEs, requiring new curricula, community-level training, and a DPI-style AI infrastructure (agriculture, health, education) to uplift productivity across the country [135-147][157-176][183-192].


Overall purpose / goal of the discussion


The session was convened as the closing panel of the AI Impact Summit 2026 to assess how generative AI will reshape India’s massive IT services ecosystem, to debate the sustainability of existing business models (SaaS, services), to outline strategic responses (new service lines, IP creation, partnerships), and to chart a national skilling and infrastructure roadmap that ensures AI-driven productivity and job creation for both white- and blue-collar segments.


Overall tone and its evolution


– The conversation began with a formal, probing tone, as the moderator posed a challenging market-valuation question to Arundhati [11-13].


– It then shifted to a balanced, analytical tone, with panelists dissecting technical and workforce implications (Krithivasan’s and Salil’s detailed explanations) [46-70][79-88].


– When discussing corporate strategy (Infosys, HCL) the tone became pragmatic and forward-looking, highlighting concrete IP initiatives and partnership models [98-102][124-127].


– The later segment on skilling and public AI infrastructure adopted an optimistic, inclusive tone, emphasizing democratization and national-scale impact [135-147][157-176][183-192].


– The moderator closed with a hopeful, rallying tone, projecting AI as a catalyst for massive job creation and a “Viksit Bharat” future [312-317].


Overall, the discussion moved from cautious skepticism about market hype to confident optimism about India’s capacity to harness AI through strategic innovation, skill development, and public-sector support.


Speakers

Moderator – Session moderator for the AI Impact Summit 2026. Role: Moderator of the panel discussion. [S13]


Amitabh Kant – Host and moderator of the panel. Role: Moderator (referred to as “Mr. Amitabh Kant”). Expertise: Indian IT industry, AI policy. [S6]


Arundhati Bhattacharya – Former SBI CEO and current technology leader. Role: Former Chairman & MD of State Bank of India; now a tech leader focusing on SaaS and AI. Expertise: Banking, technology, SaaS, AI. [S16]


Salil Parekh – CEO of Infosys. Role: Chief Executive Officer, Infosys Ltd. Expertise: IT services, AI services, digital transformation. [S9]


K. Krithivasan – CEO of Tata Consultancy Services (TCS). Role: Chief Executive Officer, TCS. Expertise: IT services, AI-driven workforce transformation. [S11]


C. Vijayakumar – Senior executive of HCLTech. Role: Senior leader (often referred as “C. Vijayakumar”) at HCL Technologies. Expertise: IT services, hardware/AI chips, enterprise AI solutions. [S18]


Navneet Kaul – Audience member who asked a question about AI-created jobs. Role: Audience participant. [S5]


Audience – Various members of the live audience who asked questions (e.g., Mania Sharma, Devika Rao, Kishla, Harswar, etc.). Role: Audience participants. [S1]


Additional speakers:


Christy Varshan – Referred to as “CEO of TCS” early in the discussion (likely a mis-naming of the TCS CEO).


Christy Wilson – Mentioned as “the biggest employer in India,” presumably a senior executive of a large Indian IT firm.


Mania Sharma – CEO of Mono AI, a young entrepreneur seeking mentorship.


Devika Rao – Representative from the University of Leeds, interested in AI-creative education collaborations. [S5]


Kishla – Audience member asking about skill development for current IT employees.


Harswar – Audience member concerned about AI misuse.


Mamanama Venkatana Rasimahati – Software architect and founder of “Startup Sanatana,” advocating culturally-aligned AI.


Rupa Arvindakshan – Leader of Salesforce’s startup community, mentioned as a point of contact for startups.


Full session report: Comprehensive analysis and detailed insights

The AI Impact Summit 2026 closed with a panel of senior leaders from India’s IT services sector, moderated by Amitabh Kant. Kant introduced the four panelists – Salil Parekh (CEO, Infosys), K. Krithivasan (CEO, TCS), C. Vijayakumar (CEO, HCL Tech) and Arundhati Bhattacharya (Senior VP, Salesforce) – and set the tone by noting that the industry was “at a point of disruption” [1-11]. He also highlighted that the Indian IT services industry “represents over 300 billion USD in market value and employs millions of professionals” [1-3].


Arundhati Bhattacharya responded to a market-driven narrative that AI agents could wipe out the traditional SaaS model. Amitabh Kant’s opening question referenced the recent roughly 40% fall in Salesforce’s market value over the past 12 months [11-13]. Bhattacharya warned that such headlines often overstate the impact because SaaS success depends on more than low-code generation; it requires deep workflow understanding, governance, observability, auditability and genuine customer-value delivery [14-23]. She noted that some of the capital flowing into AI-driven SaaS is “circular money” and that investors must read the fine print [24-28]. While acknowledging that current working models will evolve, she argued that the “jury is still out and may remain so for some time” on whether AI will fundamentally overturn SaaS, and that the ultimate test will be whether AI improves living standards [30-38]. When asked about startup support, she directed interested founders to contact Rupa Arvindakshan, whose details are publicly listed on Salesforce’s website [280-282].


K. Krithivasan, CEO of TCS, shifted the focus to the future of IT-services work. He argued that AI will not eliminate system-integrator roles; instead, engineers will move from writing code to orchestrating AI systems, emphasizing requirements-engineering, context-engineering, validation, cybersecurity and testing of AI-generated outputs [46-53]. He highlighted that cloud adoption in Indian enterprises remains only 30-40% after a decade, meaning a long-tail of migration, data-estate rationalisation and model training will generate a larger volume of more interesting work rather than a headcount shrinkage [58-70].


Salil Parekh, CEO of Infosys, outlined the company’s view of the AI-services opportunity. He cited a $300 billion market across six focus areas – AI engineering, legacy modernisation, AI factories, AI agents, physical AI and AI-driven analytics – and presented internal data showing aggressive hiring: 20,000 graduates recruited this year [91] and 13,000 added in the first three quarters [92]. Parekh also described Infosys’s proprietary “Topaz Fabric” IP layer, which abstracts foundation models and custom agents, allowing clients to work with any model while retaining Infosys-built capabilities [98-102]. This signals a strategic move from pure “builder-for-hire” to owning a reusable AI stack.


C. Vijayakumar, CEO of HCL Tech, explained his firm’s positioning. He noted that HCL’s software-product business contributes about 10% of revenue and that the company has built custom silicon – a two-nanometre chip – for a major technology client, giving it a high revenue-per-employee metric [108-113]. HCL’s AI strategy is to bridge the gap between foundation models and enterprise use-cases, developing IP that makes large models scalable for businesses rather than attempting to become a hyperscaler or to build its own foundation models [118-127]. This pragmatic focus aligns with HCL’s decision to stay “solution-centric” while partnering with major solution providers [124-125].


Krithivasan then turned to talent development, describing a workshop in the NCR where 1,500 schoolchildren, most with no technical background, built 1,500 apps in three hours using AI-assisted native-language coding [130-138]. He framed skilling as a “major national challenge” and said the workshop was part of a broader collaboration with the Ministry of IT to design AI curricula for universities [130-138].


Bhattacharya expanded the democratisation theme, arguing that AI must be made accessible to blue-collar workers and MSMEs. She listed the challenges faced by carpenters, plumbers, hospitality staff and Anganwadi workers – skill gaps, lack of job visibility, payment delays and weak community safety nets [158-166]. She suggested AI-driven platforms could certify skills, match workers to opportunities and improve quality of life for both workers and their customers [170-176].


Parekh linked corporate strategy to national policy by describing an emerging digital public infrastructure for AI that mirrors India’s earlier DPI achievements (Aadhaar, UPI). He cited pilot projects in agriculture, health and education that are already being rolled out with support from chip, data-centre and architectural layers, and noted ongoing work with ministries to make AI services affordable and widely available [188-192].


Vijayakumar warned that capturing the projected $350-$400 billion AI-services market will require a substantial increase in R&D intensity. He pointed to a trillion-dollar “physical AI” opportunity, estimating at least $200 billion of services revenue, and argued that Indian firms must invest now in labs, proofs-of-concept and solution-building before demand materialises [200-209][210-215]. He added that outcome-based contracts will eventually fund higher R&D spend, but a proactive investment is needed to stay ahead of the curve [211-215].


All panelists agreed that AI will be a net job creator. Krithivasan asserted that AI will generate many new jobs in India, albeit in different occupational categories [299-304]; Vijayakumar reinforced that while programming fundamentals remain essential, the critical future skill will be orchestrating AI agents, i.e., managing multiple AI agents, applying critical thinking and delivering outcomes at several-fold the speed of manual coding [283-297]; and the moderator closed with optimism that AI will help India become a “Viksit Bharat” by 2047, driving a $30 trillion economy and massive employment growth [312-317].


During the audience segment, Kant repeatedly asked participants to keep questions brief, limit themselves to one per person, and be direct, emphasizing gender-balanced participation [250-260]. Questions from young entrepreneurs Mania Sharma and Devika Rao prompted Bhattacharya to direct them to the Salesforce startup community contact (Rupa Arvindakshan) [274-280]. Queries about future job types and required skills were answered by Krithivasan and Vijayakumar, who both stressed the rise of orchestration, analytical thinking and AI-tool proficiency [283-297][299-304]. Concerns about AI misuse, such as disinformation, led Salil Parekh to reiterate the need for responsible-AI frameworks, governance, cultural alignment and ethical model training [306-311][262-267].


Session transcript: Complete transcript of the session
Moderator

With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026: Mr. Salil Parekh, Mr. K. Krithivasan, Mr. C. Vijayakumar and Ms. Arundhati Bhattacharya, with the moderator, Mr. Amitabh Kant. A big round of applause, ladies and gentlemen, to welcome them all to the stage. Well, it’s over to you, Mr. Amitabh Kant.

Amitabh Kant

So let me welcome these very distinguished leaders of the Indian IT and tech services industry, and we have with us Arundhati Bhattacharya, who is both a banker and a great tech leader now. The three of them amongst them, they are leaders of an industry that represents over 300 billion in market value, over 25 lakh crore, and they employ millions of Indians. We are actually meeting at a point of disruption. I’m not going to take much time in introducing the panel, nor in giving my own introduction. I will straight away move to asking them questions and then open it up to all of you so that you can ask the questions. I’ll try and start with the lady in the panel, and I’ll also try and end with the lady in the panel. Arundhati Bhattacharya was probably the most distinguished of us all. So let me, Arundhati, let me try and be as direct as possible. Salesforce has lost roughly about 40% of its market value in just 12 months; a single AI product launch wiped almost 285 billion of SaaS stocks in a day. The market is saying that AI agents will replace per-seat software subscriptions. Is the market wrong, or is the traditional SaaS model genuinely under threat? And what does that mean for thousands of Indian enterprises that have built their operations on the Salesforce platform?

Arundhati Bhattacharya

First and foremost, thank you very much for asking that question. I’ve been answering that question so many times in the last few days that it’s almost like rehearsed, you know, as to how I should go about it. But having said that, you know, markets will say a lot of things. Not all of it comes true. And when you talk about the SaaS model, it’s not only about vibe coding. It’s not only about creating an application. It’s also about, you know, understanding what the workflows are like. It’s about realizing what the customer’s pain points are and ensuring that you are addressing those particular pain points. It’s about observability about what your agents are doing. It’s about governance.

It’s about auditability. It’s about adoption. There are so many pieces to making something really work in an organization that to just say that because I can vibe code, that means, you know, everything else goes out of the window. I think that’s being a little too, you know, little too hasty about totally, you know, rejecting a way of doing business. Also, I must say that, you know, which I’m not very sure is correct, but people have to sometimes pump up values given the kind of money that is going in over there. And by the way, some of that money is circular money that’s going in over there. So I’m not too sure that the market is actually giving the right message that it should.

And like for everything that the market gives people, investors especially are requested to read the fine print. And obviously, you know, exercise their discretion in the matter. Having said that, is it true that the models that we have today will remain exactly the same? They won’t. I’m very clear about one thing, which is that all of the models that we have today, and I mean not the LLM models, but I mean the models of working, the ways of working, whether it be in respect of the SaaS companies or it be in respect of any of the other companies, even intra -companies or any of the other companies, things will change. And we have to be very agile about the way we look at these things and realize where are the changes going to come and how can we ourselves change in tune so as to remain relevant, so as to be able to actually add value to your customer.

End of the day, people who add value are the ones who are going to stay. who are going to survive, who are going to be sustainable. And therefore, adding value is what we need to do. And for adding value, whatever it takes for us to do, we need to do those things. So I think, you know, the jury is out and will remain out for a while because the race is on and we don’t know who’s going to win the race. But this much I do know that, you know, it will not be one individual unit or one individual kind of unit. There will be very many players in this whole thing. But at the end of it all, as long as it improves our standards of living, as long as it gives us results and answers which we never had before, it is for the good of humanity.

And I hope that will definitely happen.

Amitabh Kant

Thank you. Thank you for that very detailed answer to that question. I now turn to Mr. Christy Varshan, the CEO of TCS. Yes, Mr. Krithivasan, the industry consensus is that AI will shift work from writing code to orchestrating AI systems. What does TCS look like to you in 2030? What will be the headcount and what will be the revenue per employee? And how is TCS communicating that transition to its workforce and to the country?

K. Krithivasan

Thank you and good afternoon to everyone. See, this is the topic that everyone has been discussing in the last few days and few weeks. And like Arundhati has explained, the market also has been contemplating. But there will be a few things that will change. Many things may not change, or the other way. Like, if you look at the role of what most of us do as a system integrator, the role of system integrators comes into play because there are complex, complex systems, and many of them have a lot of legacy. It’s not that one day you can have an LLM understand everything and auto-generate code and the software engineers will go away. But, not to say, like Arundhati said, there will be more and more productivity that will be brought in, and at the same time you need system integrators who can test, validate, verify what is being generated. So that’s one part of it. The second is, as you also look at today, the role will shift towards more and more requirements engineering, context engineering. How do you know whether you are building the right system? How do you validate a system is doing the right thing? Does it have cybersecurity? Does it do some harm? All those things are to be validated. You may not know all the roles that you will have five years down the line, but we don’t envisage a situation where there will be a significant shrinkage of work. Now, the other area that we have somehow not looked at when we get excited about generating code is, for instance, cloud came into play about maybe 10 years ago.

But if you ask most organizations, they will tell you that 30-40% of cloud has been adopted. There is so much to be done. So, this is going to be a long tail. And even within that, you would see many organizations, they have to prepare for deploying or adopting. It’s not a trivial job. They have to get their data estate right. They have to get their applications rationalized, modernized. So, there is a certain amount of work to be done. They need to train their models, like Arundhati was saying or somebody was mentioning. You will have some large models, many small models in every enterprise. They have to be trained. And the last part of it, which you are not…

Again, looking at, is what is it new that you can do with all these LLMs? There will be many interesting things. Somebody has to build. Somebody has to think through that. So, if you look at another 30, I don’t envisage a time that there is a significant shrinkage of work, but there will be more volume of work that will be produced. More volume of work. More volume of work that will be produced and more interesting work that will be done.

Amitabh Kant

Thank you. Thank you. So that’s an interesting perspective. I turn to Salil, who is the CEO of Infosys. Salil, one of the very provocative statements made by one of the Bay Area leaders and investors, actually, which attracted a lot of news coverage, was that the services model is dead within five years. Your chairman, Nandan Nilekani (just now, in fact, I went through his interaction), said that this is not an opportunity gap but an execution gap, and that the real money is in cleaning up trillions of dollars of legacy tech debt. Who is right? Is Nandan right or is it this Bay Area leader? Who’s right according to you?

Salil Parekh

That’s an easy answer. Nandan’s of course right. There’s no question. So simply because he’s your chairman? Because I think, Nandan, I’m sure everyone has a view. Nandan is a visionary who has had a view on this business for years. I think the way we see it is, and we shared this a few days ago, there are several areas of opportunity that come from AI services, and there are six that we have highlighted recently, just a few days ago, and those in aggregate, we have shared some data, are about 300 billion dollars of opportunity over the next several years. And then we’ve gone into a little bit of the detail in each. I’ll give just a couple of examples.

There’s one which is like AI engineering, which is the building of agents, orchestrating, integrating some of the points what Kriti was mentioning. There’s another which you alluded to how Nandan has said it of legacy modernization, which is basically saying there were some things which were 15, 20 years old with large companies. How can we bring it to the more current? And there because of AI agents, the cost is lower, the time is less. And so there’s an easier economic rationale for companies to do it. So as we put all these together, we see these what we call AI services, which will give us the growth. And what I think the point you made, what Nandan said, if we can pivot our company to serve these big six areas for our clients, that execution path.

then the opportunity is good. And again, some data on that: like this year, which will end in March, we have recruited 20,000 college graduates. Next year, we are on track and we’ve announced we are recruiting 20,000 college graduates. This year, our headcount has increased in the first three quarters by 13,000 people. And my sense is that will continue. So what it’s opening up really is a new set of opportunities. And there is some productivity benefit that comes with it, like specifically in Infosys, but I’m sure in general, if we execute and serve our clients, there will be more opportunity.

Amitabh Kant

So tell me, do you at any stage aspire to own intellectual property in the AI stack or will you remain a builder for hire?

Salil Parekh

So then the approach, I’ll speak a little bit for Infosys, our approach is, we have a lot of opportunities. We have tremendous IP. So like in AI, we built this IP layer called Topaz Fabric. which has the ability for clients to work with any of the foundation models, plus the agents that we have built, that Infosys has built, plus any third party agents. So that’s the layer that we are, let’s say, pretty good at and that we will build and continue to build the IP on. That’s the approach on the IP that we’ve taken.

Amitabh Kant

OK. CVK, let me turn to you because HCL Tech has a software product business. It has a design custom AI chips in Bangalore and you operate across the full stack, actually. Is HCL Tech positioning as an AI builder at any stage? And if so, how far up the stack are you willing to go into models and to compute, into infrastructure? Infrastructure that at any stage will compete with the hyperscalers rather than just partner with them.

C. Vijayakumar

Thank you, and good evening, everyone. HCL Tech, as you gave some pointers, is uniquely placed. First of all, we have a software product business which delivers 10% of our revenue. We also have a very deep engineering heritage: we service the top 50 of the 100 biggest R&D spenders, doing a lot of work, including some cutting-edge work. For example, we have built a two-nanometer custom silicon for one of the technology companies. So we have these unique capabilities, and this is also reflected in our revenue per employee, the highest among the IT services companies. With this backdrop, our AI strategy is heavily indexed on building, because, of course, in our core services we will continue to modernize and evolve our services to be relevant for the future.

And even if it means it takes away some revenue streams, we are proactively doing it. But I think the biggest focus is this: there are these large language models and foundational models, and they cannot be applied most efficiently for enterprise use cases. There is still a gap between what a foundation model can deliver and the ultimate efficiency and innovation that is possible. So we are really trying to bridge that gap, building IPs that help enterprises scale AI adoption. And we are also focused on a lot of specialized services, as Salil mentioned: physical AI, AI factory, agentic AI. We are very focused on all of these new solutions.

And of course, the partnership ecosystem becomes extremely critical. So we are partnering with almost all the large solution providers. I don’t think we are building anything to become a hyperscaler. I think we missed the bus many, many years ago. And we’re not building models, but we are building solutions which will make the models much more scalable and applicable within enterprises.

Amitabh Kant

Thanks. Thanks for that. I just wanted to turn to Mr. Krithivasan, because he’s the biggest employer in India; he employs over 600,000 engineers. You know, India produces millions of engineering graduates a year, and many of them are trained for exactly the kind of work AI is now going to automate in a very big way. So what should be the skilling strategy of this country? How do we do reskilling and skilling? At an individual level, or should we do it at a national level? What is the view of India’s leading CEO on skilling and reskilling?

K. Krithivasan

This is, to my mind, a major national challenge. It’s a challenge and an opportunity. In fact, three days ago, here, we ran a workshop with about 1,500 kids from schools across the NCR region. All of them have non-technical backgrounds, and many of them were not fluent in English. We taught them how to use their native language to do coding, and within a span of about three hours, almost 1,500 apps were built. So that’s the power of AI. You can worry about how AI is going to take away jobs at the entry level, but I think AI also enables all these people to develop and imagine new areas where software can make people’s lives better.

And it creates more and more opportunity. You can be afraid and not do anything, but I think we should be forward-leaning and train as many people as possible. In fact, all three of us are working with the Ministry of IT on creating the curriculum for the students coming up in all these universities.

Amitabh Kant

Wonderful. Thanks for that very positive and constructive perspective. Arundhati, let me turn to you again and ask: if AI is to drive India’s productivity, it cannot remain just a Fortune 500 story. How do we make AI tools accessible to millions of MSMEs? How do we raise productivity? How do we build for that market, even if the unit economics look slightly different from enterprise contracts? How do we scale it up in a big way, for MSMEs to make a difference to the Indian economy?

Arundhati Bhattacharya

So thank you for that question, because I personally believe that unless we can democratize a technology, it doesn’t really serve the purpose of the country or its people. And AI is not meant for the white-collar worker alone. In fact, it is one of the things that can actually empower the blue-collar workers, just as you were saying, Mr. Krithivasan. Now, if you look at the blue-collar workers: at NITI, we did one report taking into account blue-collar workers like carpenters, plumbers, hospitality workers, Anganwadi workers. We had taken a lot of these personas. And what we realized is that they have multiple challenges.

One challenge, of course, is the way that they have been skilled. So they have a skilling challenge. But more than a skilling challenge, they have an access challenge, in the sense that they may be very well skilled, but they don’t know that a job exists within the village or in the next village. The second challenge is ensuring that they are getting paid on time; even that is not available. So they have several challenges of this nature: that of skilling, that of access, that of payments, and that of ensuring that they are part of communities which actually enable them to be supported during times of distress or times of need.

Many of these are challenges that can be solved if we use AI in the proper way, in a proper marketplace, to get the right kind of opportunities to them, the right kind of certifications, the right kind of assessment of their skills. If all of this can be done, we will actually be doing the country an enormous favor. And you will find that the quality of life of not only these people, but also of the people they serve, the people actually taking their services, is going to become much, much better. So AI is not something meant only for white-collar workers or for people in tier-one, tier-two cities.

It’s meant for the SMEs. It’s meant for the MSMEs. It is what is going to empower them to get into a league which they were not able to access earlier. It is also meant for the blue-collar workers, because it can empower all of them.

Amitabh Kant

Salil, India has done something unique in digital public infrastructure. It’s been transformational in terms of identity, payments, credit. You know, the Bank for International Settlements said that India achieved in seven years what would have taken 50 years to achieve. How do we create a DPI, a digital public infrastructure, for artificial intelligence? How do we take computing power to the common citizen? How do we scale? How do we make a difference using the power of AI?

Salil Parekh

Absolutely. I think there’s already work going on. Specifically, there are three big areas where there’s thinking going on, and actual projects on the ground: in agriculture, in healthcare, and in education, making sure that everything being done is helping citizens within the country. There are examples where we have shown some of it. And now, the way the India Stack or the digital public infrastructure was created, where essentially it was fully available without exorbitant costs, or at no cost, that is the approach being driven today. There are various components of the architecture which are being discussed, and, working closely with the ministry, with the government, those will be rolled out.

Today, of course, you have seen that there is tremendous support at the chip layer, at the data center layer, at the infra layer. And now there will be more at the architecture layer, in how it can be distributed. And at least these three big areas, agriculture, education, healthcare, are being looked at today.

Amitabh Kant

Thanks. CVK, one last question before I open it up to the floor. You know, the hyperscalers, Microsoft, Google, Amazon, are spending close to about $600 billion this year alone on AI infrastructure, almost 50 to 55% of it on CapEx. And if the AI services opportunity is really in the range of about $350 to $400 billion, can Indian IT companies, can all of you together, capture it without adequate R&D? Does it not require a greater level of R&D intensity? And what will it take for all of you to put more resources into R&D for the future?

C. Vijayakumar

Yes. First of all, this big CapEx spend also triggers a lot of services spend, like building all these data centers and AI factories. The entire IT infrastructure landscape in the world will get refreshed over the next five to eight years; that itself is a huge services opportunity. Then there is physical AI, a completely new spend. Today there is very, very little physical AI deployed in the world. One of the studies, by Zeno, says it’s a trillion-dollar opportunity; that would mean at least $200 billion of services opportunity. So I think we are looking at some very big services opportunities. But to really encash these big services opportunities, companies like us will need to invest in building solutions, because it’s not straightforward services.

You need to build solutions, which will mean we will have to put more money into R&D. Building solutions, building labs and POCs: a lot of pre-work needs to be done. And there are also many opportunities to create solutions that will make the foundational models much more scalable for the enterprise. So I personally believe we should increase R&D spend. And I do think the industry model will support us, because as more and more AI is infused, more outcome-based contracts will come, which also helps us deliver higher profitability, which means we can very comfortably invest more in R&D. But on the timing, we might need to invest a little ahead of the curve, before the real benefits come.

Amitabh Kant

Okay, wonderful. Wonderful to hear this perspective. I’m going to open it up to the house. I’ll take five questions. Please name yourself, and be very direct and blunt. Please don’t ask long-winded questions; keep it to the point, very matter-of-fact. All young people, and any ladies here, will be given preference. The lady there. Yeah, the lady here.

Audience

Hello, everyone. Thank you so much for giving me the opportunity. No introduction, just name and shoot: Mania Sharma, CEO of Mono AI. This is saving me two years of trying to reach out to you, from here till here. First question: how can I come and meet you, as a young entrepreneur, 27 years old, with no network? It also saves a lot of my marketing money. Second question: as a young Indian, 27 years old, coming from a small town but in Bangalore for seven years, what is your view on how we can work with you, with your support and mentorship, so that young India can go further, and we can show Silicon Valley that a 25-, 27-year-old standing here in a suit talking to you can do something?

Amitabh Kant

Okay. You know, I’ve allowed two questions because she’s a lady, but no two questions from anyone else. I will take the five questions and then open it up for responses. Second, there’s a lady there. Go ahead.

Audience

My name is Devika Rao. I’m from the University of Leeds in the UK. I’m trying to use this stage for a concept on AI, creative education, and art for health and well-being, and a case study that can be presented in six months’ time, and I would like to co-create and collaborate with you.

Amitabh Kant

Okay. Anyone at the back? Yeah, go ahead. No, no, please there. Yeah, yeah, the blue shirt. Get up and ask.

Audience

My name is Navneet Kaul. and I have a three -part question.

Amitabh Kant

No, just ask one question. Don’t ask three in one. No, no, don’t ask three in one. Ask one question.

Audience

One question.

Amitabh Kant

Yeah.

Navneet Kaul

How will AI create jobs? What kind of jobs and what kind of skills do we need?

Amitabh Kant

You’re asking the question which I’ve already asked.

Audience

I want the panelists to answer very specifically and directly.

Amitabh Kant

All right. Anyone at the back, that side? Yeah, that gentleman there.

Audience

Hello. Kishla here. My question is: for an employee currently in the IT sector, what skills should he plan to develop over the next five years to boost his employability?

Amitabh Kant

Okay. No, no, not the front row. I want to go right to the back. Anybody right at the back? Who’s the furthest back? Yeah, that gentleman in the black suit. Go ahead, ask. Shoot. Please, sir.

Audience

Sir, my name is Harshvardhan. My question is: what can be done to stop the misuse of AI? For example, some people use Grok to create unrest among people.

Amitabh Kant

Okay, so one last question there. Yeah, shoot, in one line. Just shoot.

Audience

Yeah, namaste. Mama nama Venkatanarasimha [my name is Venkatanarasimha]. I’m intentionally speaking in Sanskrit. One line. Yeah, I will come to it, sir. I’m a software architect and the founder of a startup called Startup Sanatana. Though we are talking so much about AI, and AI is in the limelight, unless we build AI on our culture, our rich culture, tradition, and heritage, it will probably turn out to be a different kind of thing.

Amitabh Kant

Thank you. Okay, so we’ve got the questions. Now I’ll open it up, starting with Arundhati’s response, and then we’ll move on. You can respond to one of the questions and then pass it on.

Arundhati Bhattacharya

In respect of the question that this lady asked, and also for the other startups here: Salesforce has a very vibrant startup community. The lady who leads it for us is Rupa Arvindakshan; her coordinates are available on our website. Please get in touch with her, and she will connect you with our entire community. The community is supported to develop on Salesforce and also to take your products to market. So please, you are more than welcome to communicate with us and get whatever support we can definitely give you.

C. Vijayakumar

Maybe I’ll take up the question on what skills are needed in the future. I think there is a big misconception that software, coding, and programming skills are not going to be relevant. The fundamental conceptual skills in software development and programming are very, very essential if you really want to build a long-term career in the software industry, whether in services, in product companies, or in AI; all of that requires sound programming skills. The second aspect is critical thinking and analytical skills: how do you orchestrate work? A lot of the standard work that many of you, or the younger engineers, do initially can now largely be done with the AI tools that are out there.

But how can you now think of yourself as an orchestrator and deliver maybe four or five times the output that you would deliver without these tools? Orchestrating work with multiple coding agents is really a skill. While it may take you three or four years to get to manage a small team, on day one you have the opportunity to manage several agents to deliver an outcome that is 5x of what you would normally do. That’s just an example, and that’s how you need to think of every role: you can amplify the value you create using AI skills.

Amitabh Kant

Mr. Krithivasan.

K. Krithivasan

That was the question once again: will AI take away jobs? It has been discussed for the last few days, and my view is that eventually AI will create more jobs than it destroys. But not all of them need be programming jobs; they will be jobs of different categories, different classifications. On the whole, we find it is going to create more jobs and more employment.

Amitabh Kant

Salil.

Salil Parekh

Thanks. I think there was a question focused on whether we can build AI with views of our culture, of responsibility. That’s absolutely essential. We’ve put together, and I know many others have, a framework on responsible AI, and there were a couple of questions along that line. It’s absolutely critical: the way the agents are built, the way the foundation models handle data, the way they learn, all of that needs this approach of responsible AI, and that’s the approach that we’ve recommended. Many others are working on it, and as an overall industry we should focus on it; that will give us as good an outcome as we can get, and even that outcome we will have to refine and modify.

Responsible AI is critical in that. Thank you.

Amitabh Kant

So, ladies and gentlemen, we’ve heard the captains of the industry. We are in the midst of disruption, as I said, but these leaders bring great optimism and hope. According to them, the wave of AI will end up creating many more jobs for India, but they will be jobs of a different kind, and we need to skill ourselves for the new, emerging jobs of tomorrow. With these leaders, I’m absolutely confident that India will ride this wave to greater progress and prosperity as it becomes a Viksit Bharat by 2047. It will create many, many more jobs, and these leaders will drive India to a $30-plus trillion economy with jobs in the coming years. Thank you very much, ladies and gentlemen.

Factual Notes: Claims verified against the Diplo knowledge base (3)

Confirmed (high)

“The AI Impact Summit 2026 closed with a panel of senior leaders from India’s IT services sector, moderated by Amitabh Kant, who introduced Salil Parekh (CEO, Infosys), K. Krithivasan (CEO, TCS), C. Vijayakumar (CEO, HCL Tech) and Arundhati Bhattacharya (Senior VP, Salesforce).”

The panel composition and moderator are confirmed by the transcript of the closing panel, where Amitabh Kant introduced Mr. Salil Parekh, Mr. K. Krithivasan, Mr. C. Vijayakumar and Ms. Arundhati Bhattacharya as the four panelists [S4].

Additional Context (medium)

“Arundhati Bhattacharya warned that SaaS success depends on more than low‑code generation – it requires auditability, adoption, deep workflow understanding, governance and genuine customer‑value delivery.”

The knowledge base includes remarks emphasizing auditability, adoption and the many pieces needed for an AI solution to work in an organization, echoing Bhattacharya’s warning and adding nuance about practical utility [S22] and about sustainable value creation depending on user adoption [S19].

Correction (high)

“Amitabh Kant’s opening question referenced a ≈40% fall in Salesforce’s market value over the past 12 months.”

Salesforce’s recent market performance described in the knowledge base shows the company’s shares soaring to a record high of $368.7, indicating a strong rise rather than a 40% decline, contradicting the claim of a steep fall [S110].

External Sources (112)
S1
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S2
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S3
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — My name is Devika Rao. I’m from UK University of Leeds. Trying to do this stage for AI creative education and art health…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S7
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — 1327 words | 131 words per minute | Duration: 607 secondss All right. Anyone at the back, that side? Yeah, that gentlem…
S8
Seismic Shift — 1. International Monetary Fund, ‘India’s Economy to Rebound as Pandemic Prompts Reforms’, November 11, 2021, https://www…
S10
Infosys CEO settles insider trading charges — According toIndia’s markets regulator, Infosys CEO Salil Parekhhas settledcharges related to insufficient internal contr…
S11
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — -K. Krithivasan: CEO of TCS (Tata Consultancy Services), leads company with over 600,000 engineers
S12
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Speakers:Arundhati Bhattacharya, K. Krithivasan Speakers:Arundhati Bhattacharya, K. Krithivasan, Salil Parekh Speakers…
S13
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S14
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S15
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S16
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S17
Building the Next Wave of AI_ Responsible Frameworks & Standards — This panel discussion at the Global AI Summit focused on reimagining responsible AI and balancing rapid innovation with …
S19
Multistakeholder Partnerships for Thriving AI Ecosystems — Dr. Bärbel Koffler emphasized that governments must create frameworks and governance structures to ensure AI benefits ar…
S20
Multistakeholder Partnerships for Thriving AI Ecosystems — Bhattacharya advises against being driven by market capitalizations when creating companies, emphasizing that sustainabl…
S21
Building Inclusive Societies with AI — Evidence:Example of a skilled plumber in a village who might be unaware of good opportunities in neighboring villages E…
S22
https://app.faicon.ai/ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti — But if you ask most organizations, they will tell you that 30 -40 % cloud has been adopted. There is so much to be done….
S23
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — I just have a follow -up on that, and then I’ll move to Julie. I’ll put it very, I mean, let’s say, a very, very simple,…
S24
Collaborative AI Network – Strengthening Skills Research and Innovation — about those. So obviously, it’s not just creating applications. It’s the same old story of digital transformation, right…
S25
Inclusive AI Starts with People Not Just Algorithms — Hi, my name is I’m founder of an AI company. We work with global higher education institutions. So I actually led my lif…
S26
Pre 8: IGF Youth Track: AI empowering education through dialogue to implementation – Follow-up to the AI Action Summit declaration from youth — Anja Gengo: Yes, I am. Thank you. I hope you can hear me. First of all, thank you so much for such an interesting and ri…
S27
How AI Is Transforming Indias Workforce for Global Competitivene — While acknowledging a transition period, Srikrishna believes AI will ultimately generate more employment opportunities t…
S28
From India to the Global South_ Advancing Social Impact with AI — This comment directly addresses one of the most anxiety-provoking aspects of AI adoption – job displacement. By framing …
S29
Generative AI is enhancing employment opportunities and shaping job quality, says ILO report — A new study conducted by the International Labour Organization (ILO) investigates the consequences of Generative AI on t…
S30
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Sabharwal contends that the traditional hourly pricing model ($20-40 per hour) in Indian IT services will become obsolet…
S31
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “AI will come, jobs will go, mass exodus will happen in corporates”[16]. “What do you mean by the business model will ha…
S32
How AI Is Transforming Indias Workforce for Global Competitivene — -Srikrishna Ramakarthikeyan- (Role/title not clearly specified, but appears to be from IT services sector based on discu…
S33
Driving Indias AI Future Growth Innovation and Impact — Evidence:Generative AI was built without following strict rules initially. Cloud security and data protection are still …
S34
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Madaan reinforces the concept that employment disruption happens at the task level rather than complete job elimination….
S35
IT clients taking cautious approach to costly AI technology, says Infosys executive — IT clientsare keen to adoptAI technology, but the high cost is causing them to take a cautious approach, according to Sa…
S36
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Economic | Infrastructure European Competitive Advantages and Success Stories Klein argues that Europe shouldn’t try t…
S37
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Thank you. Thank you for inviting me here. So it’s a very valid question. And I will not answer it in a very technical w…
S38
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — So future of AI I think will depend on the market. We’ll also depend on the people. We’ll also depend on the trust in us…
S39
The Foundation of AI Democratizing Compute Data Infrastructure — This connects AI democratization to broader digital infrastructure development, suggesting that individual data empowerm…
S40
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Saibal argues that India is approaching AI with the same ethos as DPI – treating it as shared public infrastructure that…
S41
Collaborative AI Network – Strengthening Skills Research and Innovation — Artificial intelligence | Information and communication technologies for development Garg frames AI itself as a possibl…
S42
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Industry Perspectives: Systems Integration Challenges Eltjo Poort: thank you Isadora yeah and thanks for giving me t…
S43
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Mark Irura:To add on to what’s been shared already, the supply and the demand side were mentioned. And on the supply sid…
S44
Secure Finance Risk-Based AI Policy for the Banking Sector — The moderator emphasizes that AI governance should not be viewed through a completely different lens but should be integ…
S45
Ministerial Roundtable — There’s a stark contrast between countries that have achieved near-universal connectivity (like Azerbaijan) and those st…
S46
All hands on deck to connect the next billions | IGF 2023 WS #198 — Expanding internet connectivity is a complex task that requires innovative approaches, responsive to the needs of local …
S47
Fixing Healthcare, Digitally — Traditional models may not fully address the complex gaps and needs in healthcare infrastructure. Moreover, it is crucia…
S48
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — In conclusion, DPI is a critical building block for the digital economy and plays a significant role in achieving the SD…
S49
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Thank you and good evening everyone. HCL Tech, as you kind of gave some pointers, we are uniquely placed because, first …
S50
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — When addressing competition with hyperscalers’ massive infrastructure investments, the Indian IT companies positioned th…
S51
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — Economic | Legal and regulatory | Human rights Five hyperscaler firms competing to reach AGI first; financial structure…
S52
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Larissa Zutter:So this is not quite as concrete as you might want, but I think I want to piggyback off of what was said …
S53
AI Governance Dialogue: Steering the future of AI — – **Civil Society**: Advocacy ensuring frameworks reflect diverse societal needs Doreen Bogdan Martin: Thank you. And w…
S54
Global Digital Governance & Multistakeholder Cooperation for WSIS+20 — The session concluded with time constraints as “the president of Estonia is about to make his remarks,” reflecting the b…
S55
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S56
How AI Is Transforming Indias Workforce for Global Competitivene — While acknowledging a transition period, Srikrishna believes AI will ultimately generate more employment opportunities t…
S57
Comprehensive Discussion Report: The Future of Artificial General Intelligence — Near-term job displacement will likely be offset by new job creation, with current impact mainly on junior-level positio…
S58
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Historical evidence shows that technological advances eliminate some jobs while creating others, with the net effect bei…
S59
Artificial intelligence — The disruptions that AI systems could bring to the labour market are another source of concern. Many studies estimate th…
S60
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S61
Ray Dalio warns of global breakdown behind market turmoil — Billionaire investorRay Daliohas warned that the recent market turbulence is part of a larger global crisis. The turmoil…
S62
AI investment shows strong momentum beyond bubble fears — AI investmentis not showingsigns of a speculative bubble, according to theAlibaba Groupchairman. Instead, he argued at t…
S63
How AI Drives Innovation and Economic Growth — Arguments:First, model evaluation. So AI companies typically do that part. How good is the model output for specific tas…
S64
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S65
AI: The Great Equaliser? — While the introduction of AI technology may result in job losses in certain sectors, it also creates new job opportuniti…
S66
Shaping the Future AI Strategies for Jobs and Economic Development — Continuous learning and upskilling will be essential for workforce adaptation to rapid technological change across all s…
S67
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Additionally, reskilling the workforce is crucial to fully embrace new technologies. AI, for instance, has the potential…
S68
AI will not replace people – but people who use AI will replace people who do not | IBM’s Report — According to IBM’s report, executives estimate that around 40% of their workforce will need to reskill due to implementin…
S69
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Rather than following historical patterns of automation that replace workers, AI development should prioritize applicati…
S70
SAP elevates customer support with proactive AI systems — AI has pushed customer support into a new era, where anticipation replaces reaction. SAP has built a proactive model that …
S71
Multistakeholder Partnerships for Thriving AI Ecosystems — An audience member raised concerns about whether AI democratisation would genuinely benefit small enterprises or primari…
S72
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Sabharwal contends that the traditional hourly pricing model ($20-40 per hour) in Indian IT services will become obsolet…
S73
Building the Next Wave of AI: Responsible Frameworks &amp; Standards — Arundhati Bhattacharya from Salesforce highlighted how her company established an office for humane and ethical use of t…
S74
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “AI will come, jobs will go, mass exodus will happen in corporates”[16]. “What do you mean by the business model will ha…
S75
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — The industry leaders unanimously rejected predictions that AI would eliminate the services model. Krithivasan noted that…
S76
Inclusive AI Starts with People Not Just Algorithms — 200 years later, we are like, okay, let’s clean it up. Even in the Internet revolution, you know, we have the problems w…
S77
AI Transformation in Practice: Insights from India’s Consulting Leaders — Krishan positioned this challenge within a broader context, noting that the key lies in focusing on value creation and p…
S78
How AI Is Transforming India's Workforce for Global Competitiveness — -Srikrishna Ramakarthikeyan- (Role/title not clearly specified, but appears to be from IT services sector based on discu…
S79
From Innovation to Impact: Bringing AI to the Public — Discussion point: Evolution of banking services while maintaining core functions Discussion point: Evolution of work rath…
S80
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — Our first major research vertical is in structured foundation. A field that a recent Forbes article estimates at a $600 …
S81
AI for equality: Bridging the innovation gap — Blair presented evidence from surveys conducted with the World Bank and Intuit of 3,000 women entrepreneurs, showing tha…
S82
IT clients taking cautious approach to costly AI technology, says Infosys executive — IT clients are keen to adopt AI technology, but the high cost is causing them to take a cautious approach, according to Sa…
S83
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Discussion point: Global talent acquisition for Indian IP development Discussion point: Strategic pivot from services to …
S84
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Thank you and good evening everyone. HCL Tech, as you kind of gave some pointers, we are uniquely placed because, first …
S85
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Thank you. Thank you for inviting me here. So it’s a very valid question. And I will not answer it in a very technical w…
S86
Empowering People with Digital Public Infrastructure — Pervinder Johar: Absolutely. So I think our focus is on what we call the physical infrastructure of the world. So whe…
S87
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — So future of AI I think will depend on the market. We’ll also depend on the people. We’ll also depend on the trust in us…
S88
Collaborative AI Network – Strengthening Skills Research and Innovation — Artificial intelligence | Information and communication technologies for development — Garg frames AI itself as a possibl…
S89
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Saibal argues that India is approaching AI with the same ethos as DPI – treating it as shared public infrastructure that…
S90
The Foundation of AI Democratizing Compute Data Infrastructure — “It needs to be interoperable and shareable.”[37]. “So I think two characteristics of digital public infrastructure, whi…
S91
Building Inclusive Societies with AI — Impact: This comment fundamentally reframed the discussion from focusing on solutions to focusing on execution mechanisms…
S92
Building Inclusive Societies with AI — These key comments fundamentally shaped the discussion by challenging assumptions, introducing new frameworks, and groun…
S93
Open Forum #66 the Ecosystem for Digital Cooperation in Development — The discussion maintained a consistently collaborative and solution-oriented tone throughout. It began with formal intro…
S94
From Technical Safety to Societal Impact Rethinking AI Governanc — The discussion began with a formal, academic tone but became increasingly critical and urgent throughout. Speakers expre…
S95
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The discussion maintained a consistently thoughtful and collaborative tone throughout. While panelists acknowledged seri…
S96
GermanAsian AI Partnerships Driving Talent Innovation the Future — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers demonstrated mutual resp…
S97
Debating Education / DAVOS 2025 — The tone was thoughtful and analytical, with panelists offering differing perspectives in a respectful manner. There was…
S98
Comprehensive Summary: The Future of Robotics and Physical AI — The tone was optimistic yet realistic throughout. The panelists demonstrated enthusiasm about recent breakthroughs and n…
S99
Driving Enterprise Impact Through Scalable AI Adoption — The tone was thoughtful and exploratory rather than alarmist, with participants acknowledging both the transformative po…
S100
Revamping Decision-Making in Digital Governance and the WSIS Framework — The discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s poi…
S101
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S102
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S103
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S104
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — The discussion maintained a professional, collaborative tone throughout, characterized by constructive problem-solving r…
S105
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S106
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S107
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S108
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S109
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — The discussion began with a technology-focused, optimistic tone about AI’s transformative potential but gradually shifte…
S110
Salesforce’s AI tools drive growth — Salesforce shares soared to a record high of $368.7 on Wednesday, climbing 11% after surpassing quarterly sales estimates …
S111
Can a layered policy approach stop Internet fragmentation? | IGF 2023 WS #273 — Audience: We will fight to see who goes first. Colin Perkins, University of Glasgow. I guess I want to follow up a little…
S112
AI Infrastructure and Future Development: A Panel Discussion — Economic | Infrastructure — Lessin raises concerns from the financial industry about whether the complex financing arrang…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Arundhati Bhattacharya
5 arguments | 159 words per minute | 1123 words | 421 seconds
Argument 1
SaaS resilience through workflow, governance, and value‑add
EXPLANATION
Arundhati argues that the SaaS model’s strength lies beyond simple code generation; it requires deep understanding of customer workflows, governance, auditability, and adoption to deliver real value.
EVIDENCE
She explains that SaaS is not only about vibe coding or creating an application, but also about understanding workflows, addressing customer pain points, ensuring observability, governance, auditability, and adoption, emphasizing that these multiple pieces are essential for success [16-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bhattacharya emphasizes that SaaS success requires deep workflow understanding, governance, auditability, and adoption, as highlighted in the panel transcript [S4] and reinforced in the discussion summary [S5].
MAJOR DISCUSSION POINT
SaaS success depends on holistic operational capabilities.
Argument 2
Market hype often overstates AI disruption; investors must read fine print
EXPLANATION
She cautions that market narratives frequently exaggerate AI’s impact, with inflated valuations and circular money, and advises investors to scrutinize details before drawing conclusions.
EVIDENCE
Arundhati notes that markets say many things, not all of them true; that valuations are pumped up by large inflows of money, some of it circular; and she urges investors to read the fine print and exercise discretion [14-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She warns that market valuations can be inflated by circular money flows and advises investors to scrutinize details, matching her comments on avoiding market-cap-driven decisions [S20] and observations of market manipulation [S5].
MAJOR DISCUSSION POINT
Skepticism toward AI market hype.
DISAGREED WITH
Implicit market narrative (as referenced by Amitabh Kant)
Argument 3
AI must be accessible to improve livelihoods of blue‑collar and MSME sectors
EXPLANATION
She stresses that AI should not be limited to white‑collar workers; it can empower blue‑collar workers and MSMEs by addressing their specific challenges.
EVIDENCE
Arundhati states that AI is not just for white-collar workers; it can also empower blue-collar workers, and she cites a report covering carpenters, plumbers, hospitality and Anganwadi workers, highlighting the need for democratization [158-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Her point about AI empowering blue-collar workers and MSMEs is echoed by the panel’s identification of challenges for carpenters, plumbers, and other workers and the role of AI marketplaces [S5], as well as a concrete plumber example [S21].
MAJOR DISCUSSION POINT
Democratizing AI for broader workforce.
AGREED WITH
Salil Parekh, Audience (Kishla)
Argument 4
Address skilling, job‑access, and payment challenges through AI‑enabled marketplaces
EXPLANATION
She outlines how AI‑driven marketplaces can solve blue‑collar workers’ problems of skill validation, job discovery, timely payments, and community support.
EVIDENCE
She describes challenges such as skilling, access to jobs, timely payments, and community support, and argues that AI-enabled marketplaces can provide certifications, skill assessments, and job matching to improve quality of life for workers and their customers [167-172].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion details how AI-driven marketplaces can provide certifications, skill assessments, and timely payments for blue-collar workers, supporting her claim [S5] and the plumber case study [S21].
MAJOR DISCUSSION POINT
AI as a solution for blue‑collar ecosystem challenges.
AGREED WITH
K. Krithivasan, Vijayakumar C., Amitabh Kant
Argument 5
Salesforce ecosystem offers mentorship and community contacts for startups
EXPLANATION
Arundhati points to Salesforce’s vibrant startup community and provides a direct contact for entrepreneurs seeking support.
EVIDENCE
She mentions that Salesforce has a vibrant startup community led by Rupa Arvindakshan, whose contact details are on the website, and encourages startups to get in touch for development and market access support [274-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel notes Salesforce’s vibrant startup community and provides contact details for mentorship through Rupa Arvindakshan [S4].
MAJOR DISCUSSION POINT
Startup mentorship via corporate ecosystem.
K. Krithivasan
5 arguments | 178 words per minute | 773 words | 259 seconds
Argument 1
System integrators remain essential; role moves to requirements and context engineering
EXPLANATION
Krithivasan asserts that despite AI code generation, system integrators will still be needed to validate, test, and ensure security, shifting their focus toward requirements and context engineering.
EVIDENCE
He explains that system integrators are needed because of complex legacy systems, to test, validate, and verify AI-generated code, and that the role will shift toward requirements engineering, context engineering, and security validation [51-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Krithivasan stresses that system integrators are still needed for legacy complexity, a view confirmed by the panel’s consensus that the services model will evolve rather than disappear [S4].
MAJOR DISCUSSION POINT
Evolving role of system integrators.
AGREED WITH
Salil Parekh, Vijayakumar C.
DISAGREED WITH
C. Vijayakumar
Argument 2
No major headcount shrink; volume and complexity of work will increase
EXPLANATION
He predicts that AI will not cause a significant reduction in workforce size; instead, the amount and sophistication of work will grow.
EVIDENCE
Krithivasan states that he does not envisage a significant shrinkage of headcount, but rather a larger volume of work and more interesting work being produced [68-70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He predicts stable headcount with increased work volume, consistent with broader observations that AI will generate more jobs than it eliminates [S27].
MAJOR DISCUSSION POINT
Workforce size remains stable, workload expands.
AGREED WITH
Salil Parekh, Vijayakumar C.
Argument 3
AI can rapidly up‑skill large numbers; partnership with Ministry of IT to create curricula
EXPLANATION
He highlights a national initiative, collaborating with the Ministry of IT, to develop curricula that can quickly up‑skill large populations for AI‑driven jobs.
EVIDENCE
Krithivasan describes a recent workshop with 1,500 non-technical schoolchildren, teaching coding in native languages, and notes that all three panelists are working with the Ministry of IT to create curricula for university students [135-147].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He describes a workshop with the Ministry of IT that taught coding to 1,500 non-technical students, illustrating rapid up-skilling efforts [S5].
MAJOR DISCUSSION POINT
National AI up‑skilling collaboration.
AGREED WITH
Arundhati Bhattacharya, Vijayakumar C., Amitabh Kant
Argument 4
Hands‑on workshops show AI’s potential to empower non‑technical youth
EXPLANATION
He provides evidence that short, practical workshops can enable thousands of non‑technical participants to build apps, demonstrating AI’s empowering potential.
EVIDENCE
He recounts that in a three-hour session, 1,500 participants built apps, showcasing AI’s ability to quickly up-skill and empower people without prior technical background [136-142].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same workshop demonstrated that participants could build apps in three hours, showcasing AI’s empowerment potential [S5].
MAJOR DISCUSSION POINT
Practical AI training for youth.
AGREED WITH
Arundhati Bhattacharya, Vijayakumar C., Amitabh Kant
Argument 5
AI will create more jobs than it destroys; new roles will differ from traditional programming
EXPLANATION
Krithivasan argues that AI will be a net job creator, though the nature of those jobs will shift away from conventional programming tasks.
EVIDENCE
He states that AI will create more jobs than it destroys, and that many of the new jobs will not be programming-centric, reflecting a change in job classifications [300-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He argues AI will be a net job creator, supported by reports that generative AI improves employment prospects and drives economic growth [S27], [S28], [S29].
MAJOR DISCUSSION POINT
AI as a net job creator.
AGREED WITH
Arundhati Bhattacharya, Vijayakumar C., Amitabh Kant
Salil Parekh
6 arguments | 149 words per minute | 789 words | 316 seconds
Argument 1
Services model is alive; AI creates $300 bn opportunity via AI engineering, legacy modernization, etc.
EXPLANATION
Salil contends that the services model remains viable, with AI opening roughly $300 billion of opportunities across six identified areas such as AI engineering and legacy modernization.
EVIDENCE
He notes that Infosys sees about $300 bn of AI services opportunity over the coming years, highlighting AI engineering and legacy modernization as examples where AI agents lower cost and time, creating an economic rationale for companies [82-89].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Parekh cites Infosys’s estimate of a $300 bn AI services opportunity across AI engineering, legacy modernization, and other areas [S4].
MAJOR DISCUSSION POINT
AI‑driven growth for services model.
AGREED WITH
K. Krithivasan, Vijayakumar C.
Argument 2
Aggressive hiring and execution on AI services will drive growth
EXPLANATION
He points to Infosys’s large recruitment drives and headcount growth as evidence of its commitment to capture AI services opportunities.
EVIDENCE
Salil mentions recruiting 20,000 college graduates this year, a similar target for next year, and a 13,000 increase in headcount in the first three quarters, indicating continued expansion [91-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He points to Infosys’s recruitment of 20,000 graduates and a 13,000 headcount increase as evidence of aggressive hiring to capture AI services demand [S4].
MAJOR DISCUSSION POINT
Talent acquisition to fuel AI services.
Argument 3
Infosys’ Topaz Fabric IP layer enables use of any foundation model and custom agents
EXPLANATION
He describes Infosys’s proprietary Topaz Fabric, which allows clients to work with any foundation model and integrate custom or third‑party agents, representing a strategic IP asset.
EVIDENCE
He explains that Topaz Fabric is an IP layer that lets clients use any foundation model, combines Infosys-built agents and third-party agents, and that Infosys will continue to build on this IP [100-102].
MAJOR DISCUSSION POINT
Proprietary AI integration platform.
Argument 4
Deploy AI‑driven projects in agriculture, health, education using DPI principles
EXPLANATION
Salil outlines ongoing AI projects in key sectors, leveraging India’s digital public infrastructure (DPI) model to make AI services widely accessible.
EVIDENCE
He notes three big areas (agriculture, healthcare, and education) where AI projects are being deployed, following the DPI approach of low-cost, widely available services, with components being rolled out in partnership with ministries [184-192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He outlines AI deployments in agriculture, health, and education following India’s Digital Public Infrastructure (DPI) model, as described in the panel discussion [S4].
MAJOR DISCUSSION POINT
Sectoral AI deployment via DPI.
Argument 5
Leverage chip, data‑center, and architectural layers to make AI power common‑citizen ready
EXPLANATION
He emphasizes the need to build AI infrastructure across hardware, data‑center, and architectural layers to democratize AI access for citizens.
EVIDENCE
Salil references support at the chip layer, data-center layer, and infrastructure layer, and mentions ongoing work on architecture to distribute AI capabilities to the common citizen [190-192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He mentions building AI capability across chip, data-center, and architectural layers to democratize AI for citizens, as noted in the discussion [S4].
MAJOR DISCUSSION POINT
Infrastructure stack for AI democratization.
Argument 6
AI development must follow responsible‑AI frameworks and reflect cultural values
EXPLANATION
He asserts that responsible AI principles and cultural considerations are essential when building agents and training models.
EVIDENCE
Salil states that responsible AI is critical, that agents and foundation models must be built with responsible AI approaches, and that the industry should adopt such frameworks to ensure good outcomes [306-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He stresses adherence to responsible-AI frameworks and cultural considerations, aligning with the responsible AI assessment tool (RAISE Index) and calls for culturally aware AI governance [S17], [S19].
MAJOR DISCUSSION POINT
Ethical and cultural responsibility in AI.
AGREED WITH
Arundhati Bhattacharya, Audience (Kishla)
C. Vijayakumar
5 arguments | 136 words per minute | 831 words | 365 seconds
Argument 1
Focus on product business and bridging foundation models to enterprise, not becoming a hyperscaler
EXPLANATION
Vijayakumar explains that HCL will concentrate on its product business and on creating solutions that make foundation models usable for enterprises, rather than competing with hyperscalers.
EVIDENCE
He says HCL is uniquely placed with a software product business, builds custom silicon, and will focus on bridging foundation models to enterprise use cases, explicitly stating they are not becoming a hyperscaler and will not build models themselves [118-126].
MAJOR DISCUSSION POINT
Strategic positioning away from hyperscaling.
AGREED WITH
K. Krithivasan, Salil Parekh, Vijayakumar C.
DISAGREED WITH
K. Krithivasan
Argument 2
Leverage custom silicon and high revenue‑per‑employee to build scalable AI solutions
EXPLANATION
He highlights HCL’s capabilities, such as custom two‑nanometer silicon and the highest revenue per employee among Indian IT services firms, as foundations for scalable AI offerings.
EVIDENCE
Vijayakumar notes that HCL derives 10% of revenue from its product business and points to its deep engineering heritage, a two-nanometer custom silicon project, and the highest revenue per employee among IT services firms [110-115].
MAJOR DISCUSSION POINT
Competitive advantage through hardware and efficiency.
Argument 3
Significant R&D investment needed to build solutions, labs, and physical‑AI offerings
EXPLANATION
He argues that capturing the AI services market will require substantial R&D spending to develop solutions, labs, and emerging physical‑AI products.
EVIDENCE
He describes how large CapEx spend will generate services demand, cites a trillion-dollar physical AI opportunity, and stresses the need for building solutions, labs, and pre-work, concluding that R&D spend must increase [200-215].
MAJOR DISCUSSION POINT
R&D as a prerequisite for AI services capture.
Argument 4
Outcome‑based contracts will fund higher R&D spend ahead of market benefits
EXPLANATION
He suggests that as AI‑infused services grow, outcome‑based contracts will generate higher profitability, enabling firms to invest more in R&D before full market returns materialize.
EVIDENCE
Vijayakumar notes that outcome-based contracts will help deliver higher profitability, which in turn will allow comfortable investment in R&D, though timing may require early spending [214-215].
MAJOR DISCUSSION POINT
Financial model supporting early R&D.
Argument 5
Core programming plus orchestration, critical thinking, and AI‑tool mastery are essential
EXPLANATION
He emphasizes that while programming remains fundamental, future success will hinge on critical thinking, orchestration of AI agents, and the ability to amplify output using AI tools.
EVIDENCE
He states that programming is essential for long-term software careers, highlights critical thinking and analytical skills, and describes orchestrating multiple coding agents to achieve 5× output as a key future skill [284-297].
MAJOR DISCUSSION POINT
Skill set evolution for AI‑augmented software work.
AGREED WITH
K. Krithivasan, Arundhati Bhattacharya, Vijayakumar C., Amitabh Kant
Amitabh Kant
2 arguments | 131 words per minute | 1327 words | 607 seconds
Argument 1
India is at a pivotal AI disruption point that demands proactive policy and industry engagement.
EXPLANATION
Kant observes that the country is currently experiencing a major disruption driven by AI, implying that coordinated action from both the public and private sectors is essential to harness the opportunity.
EVIDENCE
He explicitly states, “We are actually meeting at a point of disruption,” signalling the need for strategic response [8].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kant’s statement about a disruption point is echoed by the panel’s emphasis on coordinated policy and multi-stakeholder frameworks for AI development [S4], [S19].
MAJOR DISCUSSION POINT
AI-driven disruption as a catalyst for strategic action.
Argument 2
AI will be a primary engine of employment and economic growth, propelling India toward a $30+ trillion economy and a “Viksit Bharat” by 2047.
EXPLANATION
Kant concludes that the wave of AI will generate far more jobs than it eliminates, driving unprecedented economic expansion and positioning India as a leading global economy by mid‑century.
EVIDENCE
He summarises the panel’s view that AI will create many jobs, boost productivity, and help India achieve a $30+ trillion economy and the vision of a “Viksit Bharat” by 2047 [312-317].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
His projection aligns with analyses that AI will boost employment and drive massive economic growth, supporting a $30+ trillion outlook [S27], [S28], [S29].
MAJOR DISCUSSION POINT
AI as a catalyst for massive job creation and macro‑economic growth.
Audience
2 arguments · 162 words per minute · 340 words · 125 seconds
Argument 1
Young entrepreneurs need structured mentorship and ecosystem support to turn AI ideas into viable ventures.
EXPLANATION
An audience member highlights the difficulty of accessing networks and guidance, arguing that a formal mentorship channel would enable emerging innovators to scale their AI projects.
EVIDENCE
Mania Sharma, a 27-year-old entrepreneur, asks for direct contact and support, noting she has “no network” and seeks mentorship to engage with the panelists and the broader AI ecosystem [225-229].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The audience’s request for mentorship matches the panel’s acknowledgment of the need for structured startup support and the existence of Salesforce’s mentorship network <a href="https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti/" target="_blank" class="diplo-source-cite" title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-snippet="Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026. Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arun">[S4].</a>
MAJOR DISCUSSION POINT
Mentorship and ecosystem support for early‑stage AI entrepreneurs.
Argument 2
AI solutions should be rooted in local cultural heritage and values to ensure relevance and acceptance.
EXPLANATION
A participant argues that building AI without incorporating India’s cultural traditions risks producing solutions that are disconnected from societal context, advocating for culturally‑aware AI design.
EVIDENCE
Kishla states that unless AI is built on “our culture, rich culture, tradition, heritage,” it will be a “different kind of thing,” emphasizing the need for cultural integration in AI development [262-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for culturally rooted AI reflects the panel’s discussion on integrating cultural values into responsible AI design and broader calls for AI frameworks that respect local heritage [S19], [S17].
MAJOR DISCUSSION POINT
Cultural relevance and ethical grounding of AI systems.
Agreements
Agreement Points
AI will be a net job creator, generating more employment than it eliminates, though the nature of jobs will shift toward new AI‑augmented roles.
Speakers: Arundhati Bhattacharya, K. Krithivasan, Vijayakumar C., Amitabh Kant
AI must be accessible to improve livelihoods of blue‑collar and MSME sectors
AI will create more jobs than it destroys; new roles will differ from traditional programming
Core programming plus orchestration, critical thinking, and AI‑tool mastery are essential
AI will create many more jobs for India, driving a $30+ trillion economy
All speakers agree that AI will expand employment opportunities in India, especially for blue-collar and MSME workers, even though many of the new roles will require orchestration, critical thinking and AI-tool proficiency rather than traditional coding [158-172][300-304][284-297][312-317].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple studies and policy discussions highlight AI as a net job creator, with India’s AI workforce strategy emphasizing more jobs than losses and the need for upskilling [S56][S57][S58][S65][S66].
The traditional IT services model and system‑integrator role remain vital; AI will augment rather than replace these functions, and headcount is not expected to shrink dramatically.
Speakers: K. Krithivasan, Salil Parekh, Vijayakumar C.
System integrators remain essential; role moves to requirements and context engineering
No major headcount shrink; volume and complexity of work will increase
Services model is alive; AI creates $300 bn opportunity via AI engineering, legacy modernization, etc.
Focus on product business and bridging foundation models to enterprise, not becoming a hyperscaler
Krithivasan stresses that system integrators will still be needed and that headcount will not fall sharply [51-53][68-70]; Salil highlights a $300 bn AI services opportunity that keeps the services model alive [82-89]; Vijayakumar adds that HCL will concentrate on bridging foundation models to enterprises rather than trying to become a hyperscaler [118-126].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry panels note that system integrators remain essential, with HCL and Infosys positioning themselves as bridges between foundation models and enterprise applications rather than pursuing hyperscaler scale [S49][S50][S42].
Upskilling, capacity building and education are essential to prepare the workforce for AI‑driven transformation.
Speakers: K. Krithivasan, Arundhati Bhattacharya, Vijayakumar C., Amitabh Kant
AI can rapidly up‑skill large numbers; partnership with Ministry of IT to create curricula
Hands‑on workshops show AI’s potential to empower non‑technical youth
Address skilling, job‑access, and payment challenges through AI‑enabled marketplaces
Core programming plus orchestration, critical thinking, and AI‑tool mastery are essential
Krithivasan describes a workshop that taught 1,500 non-technical students to build apps and notes collaboration with the Ministry of IT on curricula [135-142]; Arundhati outlines the skilling, access and payment challenges faced by blue-collar workers and proposes AI-enabled marketplaces to solve them [163-172]; Vijayakumar reiterates that programming fundamentals remain crucial while new orchestration skills are needed [284-297]; Amitabh’s question on national skilling strategy underscores the shared focus on capacity development [132-134].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks such as the US ‘worker-first AI agenda’ and various AI governance reports stress reskilling, capacity building and education as core to AI adoption [S64][S65][S66][S67][S68].
AI development should be responsible, inclusive and culturally grounded, ensuring that technology serves broader societal needs.
Speakers: Salil Parekh, Arundhati Bhattacharya, Audience (Kishla)
AI development must follow responsible‑AI frameworks and reflect cultural values
AI must be accessible to improve livelihoods of blue‑collar and MSME sectors
AI solutions should be rooted in local cultural heritage and values to ensure relevance and acceptance
Salil calls for responsible-AI practices and cultural alignment in building agents and models [306-311]; Arundhati stresses democratizing AI for blue-collar workers and MSMEs [157-166]; Kishla argues that AI should be built on India’s cultural heritage to be meaningful [262-267].
POLICY CONTEXT (KNOWLEDGE BASE)
Inclusive AI governance discussions call for culturally grounded, responsible AI development, reflected in multistakeholder dialogues and inclusive AI initiatives [S52][S53][S54][S55][S48].
Similar Viewpoints
Both leaders emphasize the need for broad ecosystem partnerships—Salil through collaboration with ministries and public‑sector DPI initiatives, Vijayakumar through partnerships with major solution providers—to scale AI solutions effectively [188-190][124-125].
Speakers: Salil Parekh, Vijayakumar C.
Deploy AI‑driven projects in agriculture, health, education using DPI principles
Partnering with almost all the large solution providers
Unexpected Consensus
Both Infosys and HCL choose to remain solution‑builders rather than pursue hyperscaler ambitions or develop their own large foundation models.
Speakers: Salil Parekh, Vijayakumar C.
Infosys’s Topaz Fabric IP layer enables use of any foundation model and custom agents
I don’t think we are building anything to become a hyperscaler… not building models
Despite being large IT services firms, both Salil and Vijayakumar state that their strategy is to create proprietary IP and integration layers (Topaz Fabric) and to focus on building solutions, explicitly rejecting the pursuit of hyperscaler status or in-house model development [99-102][125-126].
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions with HCL leadership confirm their strategy to stay as solution-builders and avoid building large foundation models, contrasting with hyperscaler ambitions [S49][S50][S51].
Overall Assessment

The panel reached strong consensus on four core themes: (1) AI will be a net creator of jobs, especially for blue‑collar and MSME workers; (2) the traditional services and system‑integrator model will persist and even expand with AI engineering opportunities; (3) large‑scale upskilling and capacity‑building are essential to equip the workforce for new AI‑augmented roles; (4) AI must be developed responsibly, inclusively and with cultural relevance. These agreements cut across the digital economy, capacity development, AI governance and social development domains.

High consensus – the speakers largely reinforce each other’s positions, indicating a shared vision that policy, industry investment and education should focus on inclusive, responsible AI deployment rather than fearing displacement.

Differences
Different Viewpoints
Future role of system integrators versus product‑focused AI solution building
Speakers: K. Krithivasan, C. Vijayakumar
System integrators remain essential; role moves to requirements and context engineering
Focus on product business and bridging foundation models to enterprise, not becoming a hyperscaler
Krithivasan argues that despite AI code generation, system integrators will still be needed to test, validate and ensure security of complex legacy environments, with a shift toward requirements and context engineering [51-53]. Vijayakumar counters that HCL will concentrate on building product solutions that make foundation models usable for enterprises, emphasizing bridging gaps rather than traditional system-integration services, and explicitly states they are not becoming a hyperscaler or building models themselves [118-126].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the future of system integrators versus product-focused AI firms are captured in system integration challenge reports and industry panels on bridging models and applications [S42][S49][S50].
Extent of market hype and valuation of AI‑driven SaaS disruption
Speakers: Arundhati Bhattacharya, Implicit market narrative (as referenced by Amitabh Kant)
Market hype often overstates AI disruption; investors must read fine print
AI agents will replace per‑seat software subscriptions – market suggests traditional SaaS model is under threat
Arundhati cautions that market narratives frequently exaggerate AI impact, noting inflated valuations and circular money, and urges investors to scrutinize details [14-28]. Amitabh’s question frames the market view that AI agents could replace traditional SaaS per-seat models, implying a significant threat to the SaaS business model [11]. This reflects a disagreement between Arundhati’s skeptical view of market hype and the market-driven narrative of imminent SaaS disruption.
POLICY CONTEXT (KNOWLEDGE BASE)
Analysts differentiate between genuine AI investment momentum and hype, noting concerns about valuation of AI-driven SaaS and bubble risks [S62][S63][S61].
Unexpected Differences
Contrasting views on the survivability of the traditional services model
Speakers: Salil Parekh, Bay Area leader (referenced by Amitabh Kant)
Services model is alive; AI creates $300 bn opportunity via AI engineering, legacy modernization, etc.
Services model is dead within five years (Bay Area leader’s claim)
Amitabh cites a Bay Area leader who claimed the services model would die in five years [75-78]. Salil directly refutes this by stating the services model remains viable and outlines a $300 bn AI services opportunity [82-89]. The disagreement is unexpected because it pits a high-profile external prediction against the internal confidence of an industry leader.
POLICY CONTEXT (KNOWLEDGE BASE)
The survivability of the traditional services model is contested, with some reports emphasizing continued relevance of services firms while others highlight pressure from hyperscalers [S42][S49][S50][S51].
Overall Assessment

The panel shows broad consensus that AI will be a net creator of jobs and economic growth, but there are notable disagreements on implementation pathways: the role of system integrators versus product‑centric solution building, the degree to which market hype should be trusted regarding SaaS disruption, and whether the traditional services model is still viable. These divergences reflect differing strategic priorities among Indian IT firms and between industry insiders and external market narratives.

Moderate – while the overarching goals (AI‑driven growth, job creation, democratization) are shared, the speakers differ on key strategic approaches, which could lead to fragmented policy recommendations and varied investment strategies across the sector.

Partial Agreements
All speakers concur that AI will generate net employment and drive economic expansion, but they diverge on the nature of the future jobs and the pathways to achieve this outcome: Krithivasan emphasizes new, non‑programming job categories [300-304]; Vijayakumar stresses the need for programming fundamentals combined with orchestration and critical thinking skills [284-297]; Salil highlights large‑scale hiring and AI services opportunities as the growth engine [91-95]; Amitabh frames AI as the catalyst for a $30 + trillion economy and massive job creation [312-317].
Speakers: Amitabh Kant, K. Krithivasan, C. Vijayakumar, Salil Parekh
AI will be a primary engine of employment and economic growth, propelling India toward a $30+ trillion economy and a “Viksit Bharat” by 2047
AI will create more jobs than it destroys; new roles will differ from traditional programming
Core programming plus orchestration, critical thinking, and AI‑tool mastery are essential
Aggressive hiring and execution on AI services will drive growth
Takeaways
Key takeaways
The SaaS model remains resilient; success depends on workflow integration, governance, observability, and delivering concrete value, not just on low‑code or AI code generation.
Market hype around AI‑driven disruption is often overstated; investors should scrutinize valuations and fine‑print.
The core role of system integrators will shift from manual coding to requirements engineering, context engineering, validation, and orchestration of AI agents, without a major headcount reduction.
The traditional services model is alive and can unlock a $300 bn+ AI services opportunity through AI engineering, legacy modernization, and other high‑value offerings.
Infosys is building proprietary AI IP (Topaz Fabric) that abstracts foundation models and custom agents, positioning itself as both a builder and a platform provider.
HCL Tech will focus on bridging foundation models to enterprise use cases and building scalable AI solutions, leveraging its product business and custom silicon, but will not attempt to become a hyperscaler.
National skilling and reskilling are critical; AI can rapidly up‑skill large numbers, and TCS is collaborating with the Ministry of IT to develop curricula and run hands‑on workshops.
Democratizing AI for MSMEs and blue‑collar workers is essential; AI‑enabled marketplaces can address skill, access, and payment challenges for these segments.
India’s Digital Public Infrastructure (DPI) model will be extended to AI, with pilot projects in agriculture, health, and education, supported by chip, data‑center, and architectural layers.
Capturing the AI services market will require increased R&D investment to build solutions, labs, and physical‑AI offerings; outcome‑based contracts can fund higher R&D spend.
AI is expected to create more jobs than it destroys, but new roles will emphasize programming fundamentals, AI‑tool orchestration, critical thinking, and analytical skills.
Responsible AI frameworks and cultural alignment are seen as non‑negotiable for trustworthy AI deployment.
Start‑ups can tap into the Salesforce ecosystem for mentorship and community support.
Resolutions and action items
Infosys will continue aggressive hiring (20,000 graduates announced, 13,000 added in FY) to staff AI services and build Topaz Fabric IP.
Infosys will expand execution on the six identified AI service areas to capture the $300 bn opportunity.
TCS will focus on expanding requirements/context engineering, validation, cybersecurity, and cloud rationalization services as AI adoption grows.
HCL Tech will prioritize building IP that bridges foundation models to enterprise workloads and will deepen partnerships with major solution providers.
TCS (and other firms) will work with the Ministry of IT to develop AI curricula for universities and run large‑scale workshops for non‑technical youth.
Infosys and other industry players will adopt responsible‑AI frameworks and embed cultural considerations into model training and deployment.
Salesforce will provide a point of contact (Rupa Arvindakshan) for start‑ups seeking mentorship and ecosystem support.
Industry consensus to increase R&D spend to develop AI solutions, labs, and physical‑AI offerings ahead of market demand.
Unresolved issues
Exact impact of AI on Salesforce’s market valuation and whether the SaaS model is fundamentally threatened.
Specific headcount and revenue‑per‑employee targets for TCS by 2030.
Detailed roadmap and funding mechanisms for a nationwide AI‑focused Digital Public Infrastructure.
Concrete mechanisms to prevent misuse of AI (e.g., disinformation, malicious prompting).
How Indian IT firms can collectively compete with hyperscalers in AI infrastructure without becoming hyperscalers themselves.
Precise curriculum content and scaling strategy for national AI skilling and reskilling programs.
Metrics and timelines for measuring the success of AI democratization for MSMEs and blue‑collar workers.
Suggested compromises
Acknowledgement that AI will not eliminate the SaaS business model but will require augmentation with workflow, governance, and value‑add capabilities.
Balancing the view that AI will not cause massive headcount cuts with the need to upskill existing staff for orchestration roles.
Combining proprietary IP development (Infosys Topaz Fabric) with openness to third‑party foundation models, rather than pursuing a pure build‑or‑buy stance.
Emphasizing both aggressive R&D investment and reliance on outcome‑based contracts to fund that investment.
Thought Provoking Comments
Markets will say a lot of things, but the SaaS model is not just about code generation; it involves understanding workflows, governance, auditability, and adoption. AI‑generated code alone cannot replace these essential components.
She reframes the hype around AI‑driven code generation by highlighting the broader ecosystem needed for SaaS success, challenging the notion that AI will make traditional SaaS obsolete.
Shifted the conversation from a market‑value panic to a more nuanced view of SaaS resilience, prompting other panelists to discuss the continuing relevance of system integrators and the need for new skill sets.
Speaker: Arundhati Bhattacharya
System integrators will still be needed because enterprises have complex legacy environments. The future will focus more on requirements engineering, context engineering, validation, cybersecurity, and testing of AI‑generated outputs.
He identifies concrete areas where human expertise remains critical, countering the fear that AI will eliminate software engineering jobs.
Introduced the theme of role transformation rather than job loss, leading Salil and others to elaborate on new service opportunities and the importance of up‑skilling.
Speaker: K. Krithivasan
We see about $300 billion of AI services opportunity over the next few years across six domains – AI engineering, legacy modernization, AI factories, etc. – and we are scaling headcount (20k graduates this year, 13k added in Q3) to capture it.
Provides a data‑driven, optimistic outlook that the services model is not dead but evolving into high‑value AI‑centric offerings.
Set a positive tone for the panel, framing AI as a growth engine and prompting discussion on IP creation (Topaz Fabric) and recruitment strategies.
Speaker: Salil Parekh
AI must be democratized; it should empower blue‑collar workers and MSMEs, not just white‑collar professionals. We need to solve skilling, access, payment, and community support challenges for these groups.
Broadens the AI conversation to inclusive economic development, highlighting societal impact beyond large enterprises.
Steered the dialogue toward policy and public‑infrastructure considerations, leading Salil to talk about AI‑focused digital public infrastructure.
Speaker: Arundhati Bhattacharya
In a workshop with 1,500 non‑technical kids, we taught them to code in their native language and they built 1,500 apps in three hours – showing AI’s power to enable anyone to create software.
Demonstrates a tangible example of AI lowering entry barriers, reinforcing the argument that AI can be a catalyst for mass up‑skilling.
Supported Arundhati’s point on democratization and sparked interest in national‑level curriculum development, influencing the later discussion on DPI.
Speaker: K. Krithivasan
Physical AI represents a trillion‑dollar opportunity; to capture it, Indian IT firms must increase R&D spend now, building labs and POCs before the market matures.
Highlights a less‑discussed frontier (hardware‑centric AI) and stresses proactive investment, adding depth to the conversation about future revenue streams.
Prompted the panel to acknowledge the need for higher R&D intensity, linking back to Salil’s IP strategy and the broader question of competing with hyperscalers.
Speaker: C. Vijayakumar
Programming fundamentals remain essential, but the key future skill will be orchestration – managing multiple AI agents, critical thinking, and delivering outcomes at 5× the traditional speed.
Clarifies the evolving skill set required, countering the myth that coding will become obsolete, and provides a concrete direction for workforce development.
Guided the audience Q&A toward concrete skill recommendations, influencing Krithivasan’s later comment that AI will create more jobs than it destroys.
Speaker: C. Vijayakumar
We are already building a digital public infrastructure for AI—similar to the India Stack—targeting agriculture, healthcare, and education, with support at chip, data‑center, and architecture layers.
Positions AI as a national public good, extending the discussion from corporate strategy to country‑wide implementation.
Created a turning point that linked corporate initiatives to government policy, reinforcing the narrative of inclusive, large‑scale AI deployment.
Speaker: Salil Parekh
Overall Assessment

The discussion pivoted around three core insights: (1) AI will transform—not eliminate—existing SaaS and services models, as emphasized by Arundhati and Krithivasan; (2) the workforce will evolve, requiring new orchestration and validation skills, a point reinforced by Vijayakumar and Krithivasan’s skilling examples; and (3) India can leverage AI as a public‑good infrastructure, a vision articulated by Salil. These comments collectively shifted the tone from alarmist market speculation to a constructive, forward‑looking roadmap, prompting the panel to explore concrete opportunities (new service domains, IP development, R&D investment) and inclusive strategies (MSME empowerment, national DPI). The interplay of these thought‑provoking remarks shaped a narrative of optimism, responsibility, and strategic action for India’s AI future.

Follow-up Questions
What is the actual impact of AI agents on the traditional SaaS business model and market valuations?
Understanding whether AI agents truly threaten SaaS revenues is crucial for investors, enterprises and policy makers.
Speaker: Arundhati Bhattacharya
What are the projected headcount and revenue per employee for TCS in 2030, and how will the transition to AI orchestration be communicated to the workforce?
Concrete metrics are needed for workforce planning and to manage employee expectations during the AI‑driven shift.
Speaker: Amitabh Kant, K. Krithivasan
How can Indian IT firms close the execution gap identified by Nandan Nilekani and capture the $300‑$400 billion AI services opportunity?
Bridging the execution gap determines whether the large market potential can be realized by Indian service providers.
Speaker: Amitabh Kant, Salil Parekh
What IP strategies should Indian IT services companies adopt to own parts of the AI stack rather than remain builders for hire?
Owning AI IP could create sustainable competitive advantage and new revenue streams for Indian firms.
Speaker: Amitabh Kant, Salil Parekh
Is it feasible for Indian IT firms like HCL to become hyperscalers, and what would be required in terms of investment and capabilities?
Assessing the possibility of moving up the stack informs long‑term strategic decisions and capital allocation.
Speaker: Amitabh Kant, C. Vijayakumar
What specific national‑level skilling and reskilling curricula are being developed with the Ministry of IT, and how will they be scaled across the country?
A clear curriculum and scaling plan are essential to address the massive up‑skilling challenge for millions of graduates.
Speaker: K. Krithivasan
How can AI tools be democratized for MSMEs, considering unit economics and scalability?
Making AI affordable and usable for small businesses is key to broad‑based productivity gains in the Indian economy.
Speaker: Arundhati Bhattacharya
What architecture and governance model will underpin a Digital Public Infrastructure for AI in India?
A national AI infrastructure requires a defined technical architecture, data policies and governance to be effective and inclusive.
Speaker: Salil Parekh
What level of R&D intensity (budget, talent, timelines) is required for Indian IT firms to capture the projected AI services market?
Quantifying R&D needs helps firms plan investments and ensures they are not left behind in the AI race.
Speaker: C. Vijayakumar
What types of new jobs will AI create in India, and what specific skills will be in demand?
Identifying emerging job categories and skill requirements guides education, training programs and career planning.
Speaker: Navneet Kaul, K. Krithivasan, C. Vijayakumar
What measures can be implemented to prevent misuse of AI, such as disinformation or unrest?
Developing safeguards and policy frameworks is critical to ensure AI benefits society without causing harm.
Speaker: Harswar (audience)
How can AI solutions be built that reflect Indian culture, heritage, and values?
Culturally aligned AI can improve adoption, relevance, and ethical compliance within the Indian context.
Speaker: Mamanama Venkatana Rasimahati (audience)
How can startups engage with large enterprise ecosystems like Salesforce for mentorship and market access?
Clear pathways for startup collaboration can accelerate innovation and broaden the AI ecosystem.
Speaker: Mania Sharma (audience), Arundhati Bhattacharya

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel opened by noting that the world is at a “defining moment” for work, with AI promising new productivity and jobs while also generating anxiety about disruption to white-collar roles [1][2][3]. Deepak Bagla emphasized that no clear playbook exists for the coming transition, recalling how bank tellers, once thought immune, were the first to disappear with digitisation, and warning that the next five years will be especially turbulent [6-16]. He argued that psychological readiness and rapid reskilling will be essential, urging workers to identify new skill sets to stay employable [27-29].


Radhika presented research showing that only about 3-4% of global occupations are at high risk of full automation, rising to roughly 6% in high-income countries, while around 20% face partial task automation that can free time for higher-value work [34-42]. She stressed that supporting the small fraction of displaced workers will require not just training but broader industrial, macro-economic and social-protection policies, whereas the larger “middle” group should focus on augmenting productivity with generative AI [43-49].


Sanjeev Bikhchandani reported that Naukri’s hiring volumes have not yet declined, underscoring the difficulty of forecasting impacts, and he drew a parallel to the 1980s computer adoption that ultimately boosted productivity rather than causing layoffs [56-60][64-68]. He advised individuals to continuously learn multiple AI platforms, four per quarter, to remain employable in a rapidly changing landscape [65-68].


Prashant Warier explained that in healthcare AI will primarily upskill doctors by automating radiology interpretation, note-taking, and test recommendation, thereby expanding capacity especially in low-resource settings [95-104]. However, he cautioned that strict regulatory approval and liability concerns mean AI will remain a decision-support tool rather than a full replacement for clinicians for the foreseeable decade [108-119].


Returning to the gig and informal economy, Radhika highlighted that over 45% of India’s workforce is in agriculture and 95% of enterprises have fewer than ten employees, making them vulnerable to being left out of AI gains unless digital infrastructure and tailored financial support are provided [161-172]. She also noted that existing labour laws lag behind the rise of platform work, prompting calls for updated conventions and social-security schemes to protect non-standard workers [173-176].


In the rapid-fire closing, Bagla called for coordinated action among government, academia and society to harness AI’s “delta multiplier” for national benefit [175]. Prashant projected that a successful AI rollout could lift global GDP growth to 10% or more by 2030 [176]. Radhika concluded that an inclusive transition-creating better, more productive jobs while ensuring agricultural and MSME sectors and informal workers are not abandoned-should be the benchmark of success [178].


Keypoints

Major discussion points


Uncertainty and the need for psychological readiness and reskilling – The panel stressed that there is “no playbook” for the coming AI era and that the next five years will be “the toughest times of disruption.” Preparing people mentally for possible job loss and identifying new skill sets are seen as the first priorities [12-16][27-29].


Limited full-automation risk but widespread task-level change – Research cited by Radhika shows that only 3-6% of jobs worldwide have a high likelihood of total displacement, while about 20% will see some tasks automated, creating opportunities to boost productivity. Managing this transition therefore requires not only reskilling but also broader industrial, macro-economic and social-protection policies [34-42][44-48].


Current labour market signals and the importance of continuous AI upskilling – Sanjeev noted that hiring at Naukri has not yet declined, echoing the 1980s computer adoption story where productivity rose without mass layoffs. He advises individuals to “learn how to use three AI platforms every quarter” to stay employable [57-60][64-68][70-84].


Healthcare as a sector where AI augments rather than replaces professionals – Prashant explained that AI can help address radiology shortages, automate routine primary-care tasks, and provide decision-support tools, but regulatory clearance and liability issues mean doctors will remain central for the next 5-10 years [95-104][108-119][120-124].


Implications for the informal, gig and MSME workforce and the need for inclusive policy – Radhika highlighted that over 90 % of India’s workers are in agriculture, self-employment or micro-enterprises, sectors that risk being left out of AI gains. She called for digital infrastructure, financing, and updated labour-law frameworks to extend AI benefits to these workers [161-170][172-176][177-178].


Overall purpose / goal of the discussion


The panel convened to assess how generative AI will reshape the future of work-especially knowledge-intensive and informal jobs-by weighing disruption against productivity gains, identifying skill and policy gaps, and proposing concrete actions for businesses, policymakers, educators and workers to navigate the transition responsibly.


Overall tone and its evolution


The conversation began with a tone of caution and anxiety about “disruption” and job loss [1-3][12-16]. As speakers shared data and historical analogues, the tone shifted to a more measured, evidence-based perspective that emphasized opportunities and the importance of upskilling [34-42][57-68]. When discussing specific sectors (healthcare) and the vast informal economy, the tone became constructive and forward-looking, calling for inclusive policies and collaborative effort [95-124][161-178]. The discussion closed on an optimistic, collaborative note, stressing collective action and a vision of AI-driven growth for India [175-178].


Speakers

Speaker 1 – Role: Moderator / host (appears to be the event moderator) – Area of expertise: (not specified)[S1]


Prashant Warier – Area of expertise: Healthcare AI, radiology-AI applications – Role/Title: Panelist (no specific title mentioned)[S4]


Deepak Bagla – Area of expertise: AI innovation, policy & entrepreneurship – Role/Title: Mission Director, Atal Innovation Mission[S6][S7]


Sanjeev Bikhchandani – Area of expertise: Employment platforms, AI in recruitment – Role/Title: Founder, InfoEdge (Naukri.com)[S8][S9]


Radhika – Area of expertise: Labour economics, AI-impact research – Role/Title: Researcher (Podar International School)[S10][S11]




Full session report

Comprehensive analysis and detailed insights

The panel opened by framing the present as a “defining moment” for work, where generative AI promises fresh productivity gains and new jobs while simultaneously fuelling anxiety about disruption to white-collar occupations [1-3]. The moderator then asked Deepak Bagla to outline how businesses and policy-makers should navigate this transition [4-5].


Bagla responded that there is “no playbook” for the coming era [12-13]. He recalled that, in 1986, bank tellers were taught they were the only immutable role, yet they became the first victims of digitisation [7-11]. Projecting forward, he warned that the next five years are likely to be among the toughest periods of disruption [15-18] and that workers must first prepare psychologically for possible job loss before thinking about reskilling [27-29]. To build future resilience, he highlighted a school-level initiative that introduces AI and tinkering, encouraging a shift toward task-oriented learning rather than traditional, formal education pathways [21-25].


Bagla also recounted an anecdote about an Ivy-League professor whose master’s students began questioning high tuition fees because AI could provide answers, illustrating early signs of credential pressure [124-129].


Radhika complemented this view with empirical data. She noted that only 3-4 % of global occupations have a high likelihood of full automation, rising to about 6 % in high-income economies [37-41]. Around 20 % of jobs will experience partial task automation, freeing time for higher-value work [42]. Consequently, she argued that managing the transition requires more than reskilling; it also demands coordinated industrial, macro-economic, trade and social-protection policies to absorb displaced workers and to support those whose roles are only partially automated [44-48][49-51].


Sanjeev Bikhchandani, representing the online recruitment platform Naukri, reported that hiring volumes have not yet declined, underscoring the difficulty of forecasting AI’s impact [57-60]. He drew a parallel with the 1980s introduction of personal computers, which ultimately boosted productivity without massive layoffs [64-68]. From this history he distilled a personal prescription: individuals should master three new AI platforms each quarter – roughly twelve a year – to remain employable [65-68][70-84].


Prashant Warier turned to the health sector, pointing out the acute shortage of radiologists in India (one per 100 000 people) and many low-resource countries [95-99]. He explained that AI can upskill clinicians by automating image interpretation, note-taking, test recommendation and triage, thereby expanding capacity [99-104][105-107]. However, he cautioned that regulatory clearance (e.g., FDA, CDSCO) and liability concerns will keep doctors and nurses central to decision-making for at least the next five to ten years, positioning AI as a decision-support tool rather than a replacement [108-119][120-124].


The conversation then shifted to education and credentials. Bagla suggested that AI may erode the value of long-duration degrees, potentially allowing teenagers as young as 13 to perform task-based work, and that the traditional “age barrier” could disappear [130-135]. Sanjeev countered that elite degrees such as those from the IITs still serve as strong filters of ability, perseverance and problem-solving, and that leadership roles continue to require people skills and maturity beyond technical fluency [136-144].


Addressing the vast informal economy, Radhika reminded the panel that over 45 % of India’s workforce remains in agriculture, 55 % is self-employed and 95 % of employment is in enterprises with fewer than ten workers [165-167][169-172]. For this segment, the primary risk is not automation but exclusion from AI-driven productivity gains due to limited digital infrastructure, financing and skill access [170-173]. She called for updated labour-law frameworks that cover platform and gig work, alongside social-security schemes such as India’s Code on Social Security, which extends protection to platform workers [173-176].


When asked which layer of the AI stack should receive priority, Bagla advocated focusing on the application side, where small, executable solutions can be rapidly scaled [150-152]. He also stressed that success will hinge on coordinated action among government, academia, industry and civil society, describing the AI “delta-multiplier” as a core engine for India’s growth [175].


Across the discussion, the panel found common ground on several points: continuous upskilling is essential, whether through school-level tinkering, broad reskilling programmes, or personal mastery of multiple AI tools [21-28][43-49][65-67]; AI is expected to augment productivity rather than replace jobs wholesale, with only a modest share of occupations facing full displacement and many roles being reshaped at the task level [24-25][41-43][99-104][119-121]; and an effective transition requires coordinated policy beyond training, encompassing industrial strategy, macro-economic measures and social protection [175][46-48].


Nevertheless, notable disagreements persisted. Bagla warned of a severe, near-term disruption wave, whereas Radhika’s data suggested that full-automation risk is limited to a small fraction of jobs [16][27-28][37-42]. Bagla’s emphasis on early education and psychological readiness contrasted with Radhika’s call for broader macro-policy interventions to absorb displaced workers [21-28][46-48]. On the relevance of formal degrees, Bagla envisaged a future where AI flattens credential hierarchies, while Sanjeev maintained that elite degrees remain valuable signals of ability and commitment [130-135][136-144]. Finally, the panel differed on how to define “success” by 2030: Bagla spoke of a coordinated, inclusive AI multiplier for India; Sanjeev focused on net job creation; Prashant projected global GDP growth of 10 % or more; and Radhika insisted on an inclusive transition that benefits agriculture, MSMEs and the informal sector [175][176][177][178].


Key take-aways


Anticipate disruption: the next five to ten years will be the most disruptive period, yet no definitive playbook exists [15-18].


Recognise automation limits: only 3-6 % of jobs face near-total automation while ~20 % will see partial task automation, creating productivity opportunities [37-42].


Deploy coordinated policies: industrial, macro-economic, trade and social-protection measures are required alongside reskilling [44-48].


Master multiple AI tools: individuals should aim to master three new AI platforms each quarter [65-68][70-84].


Introduce AI early in schools: early exposure fosters task-oriented mindsets [21-25].


Augment, not replace, clinicians: AI will support doctors and nurses, with regulatory and liability frameworks limiting full replacement [108-119][120-124].


Value elite credentials for soft skills: while AI challenges traditional degrees, elite qualifications still signal essential people-skills [136-144].


Include the informal sector: bring agriculture, MSMEs and gig workers into the AI fold through digital infrastructure, financing and updated labour laws [170-173][173-176].


Prioritise the application layer: focus on executable AI solutions that can be scaled quickly [150-152].


Measure success inclusively: by 2030, success should be gauged through net job creation, productivity gains and an AI transition that benefits agriculture, MSMEs and the informal sector [175-178].


The rapid-fire closing reinforced this vision: Bagla called for all stakeholders to work together to unleash the AI delta-multiplier for India [175]; Prashant projected that AI-driven growth could lift global GDP by more than 10 % by 2030 [176]; Sanjeev defined success as a net increase in jobs [177]; and Radhika summed up an inclusive AI transition that delivers better, more productive employment across agriculture, MSMEs and the informal sector [178]. The panel therefore concluded that, despite uncertainty, a coordinated, skill-focused yet policy-rich approach can steer the AI revolution toward broad-based prosperity.


Session transcript

Complete transcript of the session
Speaker 1

Thank you. We’re at a very defining moment in the history of work. On one end, we’re seeing new possibilities, new productivity unlocks, new jobs being created. And on the other, there’s a lot of growing anxiety around what would it mean and the kind of disruption it will bring to work, especially the knowledge work, the white collar jobs, as they say. Let me start with Mr. Bagla. How should businesses and policymakers think about this transition?

Deepak Bagla

It’s very interesting. First, I don’t think any of us have any answers. We will try. The fundamental point, you know, and I remember when I joined banking and I take you back to 1986, we went for training and the first thing we were told that the only job which will never change. And is stable and safe in the banking world is that of the teller. You have to go get your. The first job to go when digitization happened was the teller. Because you started taking it out of the machine. Now the challenge which remains for all of us is that we are entering into an era where there’s no playbook. What is it which it is going to move into?

So we’ve got to put it into time spans if I look at it. What is going to happen in the next 5 years, 10 years and then after that no one knows. I think next 5 years is going to be one of the toughest times of disruption. How many of you have ever been laid off? Excellent. You’re the only one ready for the next 10 years. That is the most important thing going forward. And I think one of the things which we are trying to do at the Atal Tinkering Lab, because I have a team here, Dipali is here and with her she is the one who is putting it. At the school level, we are trying to bring AI and tinkering.

The idea of innovation that you… And what I’ve also started seeing as a trend from there that many of them may not be looking at going to a very formal education system, but getting into a job profile there and then. And it’s more task -oriented. So I’ll start off with this, and I know we’ll go on with the questions. But let me end here. But as I see it, I think that disruption in the next five years and 10 -year period will be a lot for all of us to learn psychologically on how can we be without a job when we are asked. That’s the first most important point. And then tend to see what is it which we can pick up to take on next, because that’s where we all talk about will be that reskilling piece coming in.

Speaker 1

Radhika, you have done the research recently on this. Let me ask the same question to you. But let me add, are we overestimating near -term job loss? Are we overestimating the long -term transformation which it’s bringing?

Radhika

somewhat yes first let’s let me also somewhat endorse what Mr. Bagla said I think that this is there’s immense uncertainty and we really do need to have more granular and more nuanced understanding of what this transition actually entails because you know different segments of the population different segments of the workforce are going to be impacted differentially by this transition now there is this narrative of this doomsday prediction and we’re all going to lose our jobs and we’ve got to be psychologically prepared for losing our jobs I think yes it is indeed the case that most of our jobs are going to be exposed to automation and to gen AI but it doesn’t mean that our jobs are going to be destroyed or that they’re going to be completely displaced because if you go and look at the academic literature and a lot of the research the IMF that the managing director was spoke in the session before at the ILO we know that an occupation essentially entails many different tasks it’s a bundle of tasks now there are some tasks in those occupations which are going to get automated.

And there are others which are not going to be done, not going to be. Now, last year, the ILO, late last year, the ILO actually put out a study where they looked at all the different occupations and they did a gradation of the extent to which they were exposed to automation. Now, if you look at the share of jobs where almost all the tasks had a high likelihood of automation and therefore were likely to be displaced, that number was actually somewhere between 3 % or 4%. And that’s a global average. If you actually break it down and look at it in countries with low income, middle income, it was even lower. In high income countries, that was close to 6%.

But the share of jobs where some tasks were going to be automated, but that also meant that there was more scope for freeing up time to bring in new tasks, enhance their productivity, was actually quite high. That was about 20 % of the jobs. So what I’m saying is that in order to manage this transition, there are two things we’re going to have to do. One, of course, it is indeed the case that a small proportion of people will lose their jobs and they will be displaced. We need to think about how they are going to be absorbed in other sectors. And that, to my mind, is going to require more than skilling and reskilling. It’s also going to require thinking more carefully about industrial policy, about macroeconomic policy, trade policies, labor market policies, in particular, social protection.

But for those who are actually in the middle, where some tasks will be automated and others will not, we need to think carefully about how those occupations can actually augment their productivity, how they can engage more meaningfully with Gen AI and enhance their productivity. Because remember, all of this then also has an implication that enhanced productivity, which has an implication in wages and prices. All of that also boosts demand in the economy, which then drives more job creation and investment. And that virtuous cycle of growth, investment, job creation. So I would say that, you know. So, yes, support those who are, you need policies to support those who will be displaced, but at the same time, augment productivity in the other jobs, which are somewhere in the middle, and there is some buffer against automation.

Speaker 1

Sanjeev, with Naukri, you have a front seat to what’s happening in this space. Like, are you seeing structural shifts?

Sanjeev Bikhchandani

You know, there’s a lot of feedback we get from media, from social media, from panelists. But you know what, as of now, Naukri growth has not been impacted. So on the ground, we are not seeing a reduction in hiring. But at the same time, we are careful and cautious and say, what will happen now? Answer, I don’t know. Right. And the truth is, nobody knows. And anybody who is telling you he knows is… is wrong. They don’t know. So because there’s so much happening, and it’s so chaotic, that you can’t really figure out, right, what is going to happen. Right. But I’ll go back in history a bit. 1982 I was in college Deepak was in college we were in college together actually in Delhi University and these two new companies were set up Aptech and NIIT saying we are going to teach you how to use a personal computer nobody cared a few cared but it was not mainstream it was not ok so most people didn’t care by 1985 you know it had become somewhat a requirement that if you go and learn how to use a computer maybe your prospects of getting a job go up or if you got a job maybe you will become more productive at your job in 1985 to the Rajiv Gandhi government the government said we are going to introduce computers in banks at that time banks mostly public sector banks the All India Bank Employees Association which is one of the most powerful trade unions in the country then went ballistic so you got to lose jobs you are going to lose jobs government said never mind we are putting them in anyway so computers came into banks they weren’t used for a while then they began to get used and guess what nobody lost jobs people got more productive people got MIS that they weren’t getting earlier they served their customers better nobody lost jobs so new technology increased productivity did not cause job losses now I am not saying that is exactly what will happen this time but you know maybe now will some jobs or tasks be get automated possibly so but will others come up almost certainly 
yes so what I tell individuals never mind policy guys and governments and you know multilaterals what I an organization what an individual is look you don’t bother about will jobs be lost will my job be lost will I lose my job and will I get a new job will I get a new job Then my answer is simple.

Learn how to use three AI platforms every quarter. By the end of one year, you know 12 AI platforms. Believe me, you will be employable. I’ll give you an illustration of this. I finished business school in 1989. By then, I had finished college. I had done three years of work in an ad agency and had done business school. That’s very important year. Why is it an important year? Because the classes of 1988 and 1989 were the first two batches to have graduated from the IIMs who had actually used PCs at the IIMs because the PCs came into the IIMs in 1987. So I walked into my job as PC literate. There were two PCs in the marketing department at the company where I was working.

All the other people were senior, very highly qualified. IITs were senior. But I was the only guy who was PC literate. Believe me, if they were sacking then, I would have been the only guy who was PC literate. in that department, I would have been the last to go. I knew how to use that technology. So if AI is coming, it has come. It is inexorable. It is relentless. It will come. It has come. Learn how to use it. So if you don’t do AI, AI will be done to you.

Speaker 1

Very insightful. I think if you optimize local optima, we are somewhere going to find the global balance. Prashant, with that, let me Radhika referred to, you know, job is a bundle of tasks. Tasks will get disrupted. But the role might shift for all of us. Let’s make it real. You are closer to the medical community. How does the role of a doctor or a nurse change going forward? Can we envision an AI doctor in the future? What would the job look like?

Prashant Warier

I think healthcare is slightly different from a lot of other industries. I think it is highly regulated, number one. So I think about three things from a healthcare perspective. From a futuristic perspective as well. One is that we have to be able to able to make sure that we are able to able to the capacity is limited especially if you’re talking about the global south right india has i mean we operate in the radiology ai space we automatically interpret radiology images with ai and if you look at india india has got one radiologist for every hundred thousand people which is about and us has one radiologist for 10 000 people kenya if you look at kenya has the same number of radiologists as marginal hospital so um and and many african countries have like one or two radiologists very very small number right and so there is not enough capacity to meet that demand so when you look at job loss per se i mean there is not enough capacity to meet the demand that is there for health care so in many ways i mean you’re not going to lose jobs it is going to upskill people health care workers and doctors who are on the ground supporting patients so that’s that’s one is about upskilling uh people right and supporting uh making health care workers able to uh support patients maybe there is an ai doctor that can do primary care i mean primary care is something that can be significantly automated i mean you’re looking at three things that you’re doing in primary care one is to understand patient symptoms so ai can prompt the patient can understand what symptoms they might have Second is to recommend tests, which again, AI can identify the right tests and recommend what testing should do.

And third is around diagnosis and treatment, right? Again, which AI can potentially do or even sort of triaging to the specialist. So these are things which AI can do. So I think in general, AI is going to upscale doctors and healthcare workers to do better and meet more patients and save time, right? One of the things that we are seeing across the world is you are using AI agents to scribe and take notes of the doctor -patient conversations, which is a task which, I mean, if a doctor is meeting 40, 50 patients a day, and after every one of those conversations, they have to write down, take notes from that conversation, AI can do that automatically. We use that, we use note takers in our meetings.

Why can’t you use note takers in a doctor -patient conversation? So we are seeing that, I mean, upscaling sort of one area. Second area, I think, which is going to, at least from a healthcare perspective, I see is a tough one is around regulation, right? Everything that… AI does today in… healthcare across the world. In the US, it’s FDA cleared. You have to get FDA cleared to be able to actually provide a clinical decision support to a doctor. So that is not going away right now. And that FDA equivalent, India, CDSCO, every country has its own regulatory body. So you have to figure out how to cross that barrier. That hurdle is still there. And that is not something that is going away right now.

And that brings me to the third point, which is that today, I mean, if a doctor is taking a decision on a patient saying that this patient has tuberculosis, for example, or lung cancer, or any of those, right, they are taking liability for that decision. And till AI is going to be able to take that liability, that is going to be a decision that doctors will make. And so what I see today, and for the next at least five to 10 years, is that AI is going to be supporting doctors in making better decisions. It is helping, it is providing all the data in the right format. For example, what we do is we are able to bring in the right data, and we are able to bring in the right data.

And so that is going to be bring data from electronic medical records, PAX, basically imaging data of the patient, pathology data of the patient, bring all of that together into one place for the doctors to help. diagnose better. So you’re providing that support to the doctor in making a clinical decision and also providing treatment planning, sort of automated treatment planning of treatment plans, which they can use to then provide the treatment plan for that patient. So it is a supportive tool and I see that for the next several years, AI is going to be upskilling doctors in providing better care and providing more care to patients, especially in the global

Speaker 1

If, you know, multiple areas or multiple playgrounds where action is happening, like there’s startups, there’s infra, there is energy, you know, yesterday our Honourable Minister spoke about the five layers of AI. Where do you see most amount of action needed? Like if you had to pick one area to double down on, what does India need?

Deepak Bagla

Within the AI stack or generally?

Speaker 1

Within the AI stack.

Deepak Bagla

I think on the application side is where we will have… a very interesting play on actually the small ones and actually getting them executed. That’s where we’ve had some strength in any case. But let me just step back a minute beyond this question, if I may, with your permission. You know, one very interesting thing. Yesterday, the plenary, I was sitting right there and next to me was a professor from an Ivy League. Let me not just say it, but one of the top five Ivy Leagues. And I was asking the professor, when are you going to go back and start teaching? Because he was taking a break to do it. He told me a very interesting thing.

One of the big things which is happening in this university is that the master’s students are feeling that they no longer need to pay that big tuition fees because they are no longer getting challenged. Because AI is giving them all the answers. Now see the repercussion of that. When we say that we have a million people coming into the job market every year, every month in India. that is because we go through a bachelor’s and a master’s and then they’re coming in so let’s say like sanjeev and i started 22 23 24 one of the most interesting elements which was pointed out was that maybe that age barrier no longer remains you may have somebody who’s 13 year old and ready to do a job in a task and that is another trend which might just picked up because the moment you’re going to see a complete change in the educational system think of two industries which have so far withstood or been having a pushback on the huge change which can come to them the financial industry is one and the education industry but now they’re being challenged on it in a big way your four years master’s or two years master’s four years bachelor’s maybe nobody needs to do it so see the number of people which will get into the task creation and the task doing force that is another element which we’ve not yet been able to quantify

Speaker 1

Very insightful answer. Sanjeev, this will allow me to go back to the first question I asked: are you seeing a structural shift? Like, for example, are people now, instead of asking for degree pedigree, asking for AI fluency and basic skills instead?

Sanjeev Bikhchandani

oh i people talk about it i’m not sure how many people actually do it right at the end of the day if you’ve got a credential it matters see uh what does an iit degree mean at what level it means you’ve learned something another level it means boss you were you you have demonstrated commitment to a prepare to get it so you you are able to work hard you know some level of physics chemistry maths that’s how you cleared the entrance exam right and you were at the top of the academic heap and that’s how you got into the place in the first place So when we go to IIT to recruit, we don’t hire for the specific knowledge they got at IIT.

We hire for the fact that it’s a fantastic filter on several accounts, right? Also, right? And to some extent, you know, a 13 -year -old, you know, ready for work. Look, business is about people. Business is about people and managing people and working with people and selling to people and, you know, running teams, being a good team player, being a good leader. So that comes with at least some years of experience, some years of, you know, maturity, right? So can I be a forex trader in front of a computer at 16? If I’m technically good enough, answer is yes. But can I lead a team of salespeople out in the field?

who are calling on clients who are 20 years older than me, I don’t know. Maybe you can, maybe you can’t. So, you know, some stuff, I mean, people are still people.

Speaker 1

Radhika, what does it mean for the gig workers and the temp workforce? And, you know, the labor laws were written long back. What would it mean as we move ahead? How should we even think of the labor laws or the role that the temporary labor brings in? Like, we are done with the age of working in the same organization for 30 years, as I just mentioned.

Radhika

So you’re talking about temp workers and gig workers. And before I answer that more directly, I just want to reflect on the comments that have been made by the other panelists. You know, the conversation that we’re having here on displacement and productivity enhancement, including the comments that I made earlier, we’re really talking only about 10 % of India’s workforce at this point. The conversation on AI is right now, you know, today we are having this summit in the global south. And the Global South still, vast proportion of the workforce is in the informal sector. For India, 45 % of its workforce is still in the agricultural sector. 55 % of the people are self -employed. 95 % of employment is in enterprises with less than 10 workers.

So, you know, that part of the conversation, we are completely missing out in the future of work. And I think we need to bring that in here as well, because a lot of the gig work and the casual work that you’re referring to is essentially what we see in the informal sector. And for that sector, the risk is not excessive automation. They might completely miss the bus and not realize any of these gains or productivity gains from AI. So we also need to think more carefully about how all of this can enhance productivity in the agricultural sector. How there could be greater AI adoption amongst micro and small. All enterprises, which are basically the engines of job creation in India.

and that’s again going to require a lot more than skilling and credentials but also they’re going to need a lot of financial support for adopting AI they are going to need digital infrastructure access to broadband so on and so forth and now going back to your question on the changes in the world of work and labor regulations indeed there is no denying the fact that labor regulations have not kept pace with the changes in the employer -employee relationships we now live in a world of work where there is a proliferation of non -standard employer employment arrangements the platform economy is a manifestation of that as well and certainly there’s a need to update that at the ILO for example there is a conversation for two years which is happening on what are the kinds of conventions and recommendations that are required to bring decent work into the platform economy and of course India is leading in that conversation with the code and social security which seeks to provide social protection even to platform workers so that’s a very forward looking ambition.

Speaker 1

Yes, well, we’re at time, but I’ll just say that I think that’s a very important point. Let’s end with one last question, rapid fire: one word, maximum five-second answer. There are still a lot of unknowns. What does success look like in 2030? What would you be proud of? We’ll go in a row.

Deepak Bagla

The most critical point is when everyone works together: the government, the society, the people, the academia. I think that joining the dots is absolutely core to seeing any element of success for anyone. And last point: I think the biggest benefit of the delta multiplier of AI is India, or will be India.

Prashant Warier

Krishan, I think success for AI is the world’s GDP growing at 10% or more by 2030.

Sanjeev Bikhchandani

I think if there is a net job increase, which means the jobs lost, if any, are fewer than the jobs created, then that is success.

Radhika

I think an inclusive AI transition, where we have better jobs, more productive jobs, and where the agricultural sector and the MSME sector have benefited from this transition and we don’t leave the informal sector behind.

Speaker 1

With that, we’ll wrap this panel discussion. Thank you so much for the insightful comments.

Related Resources: Knowledge base sources related to the discussion topics (13)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The moderator framed the present as a “defining moment” for work, where generative AI promises fresh productivity gains and new jobs while simultaneously fuelling anxiety about disruption to white‑collar occupations.”

The moderator’s description of a defining moment with dual possibilities and anxiety is directly echoed in the knowledge base entries describing the panel’s framing of the AI transition as a pivotal moment for work with new productivity and job creation alongside growing anxiety [S4] and [S5] and [S70].

Confirmed (medium)

“Bagla responded that there is “no playbook” for the coming era.”

Deepak Bagla’s statement that there is no playbook aligns with the knowledge-base note that highlights his view of working without a playbook as a key strength in the AI-driven future [S14].

Additional Context (medium)

“Projecting forward, he warned that the next five years are likely to be among the toughest periods of disruption.”

Historical analyses in the knowledge base note that transition periods after major technological shifts can be especially challenging, providing context for Bagla’s warning about a tough five-year disruption window [S68].

Additional Context (low)

“He highlighted a school‑level initiative that introduces AI and tinkering, encouraging a shift toward task‑oriented learning rather than traditional, formal education pathways.”

The knowledge base discusses a broader move toward competency-based, task-oriented education as opposed to duration-based degree programs, adding nuance to Bagla’s school-level AI initiative claim [S74].

Additional Context (medium)

“He drew a parallel with the 1980s introduction of personal computers, which ultimately boosted productivity without massive layoffs.”

The knowledge base references historical precedent that new technologies (e.g., personal computers) tend to create jobs over the long term, even if the transition is challenging, supporting the analogy used by Bagla [S68].

External Sources (78)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — – Radhicka Kapoor – Prashant Warier – Sanjeev Bikhchandani – Prashant Warier
S6
AI Innovation in India — -Deepak Bagla- Role: Mission Director; Title: Atal Innovation Mission
S7
From India to the Global South_ Advancing Social Impact with AI — -Deepak Bagla- Mission Director for Atal Innovation Mission
S8
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — -Sanjiv Bikhchandani- Founder of InfoEdge (Naukri.com)
S9
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion revealed nuanced perspectives on AI’s employment effects. Sanjiv Bikhchandani, founder of InfoEdge and op…
S10
WS #53 Promoting Children’s Rights and Inclusion in the Digital Age — – Radhika Gupta: Podar International School Radhika Gupta: All right. Thank you. I like the way you situated in this…
S11
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Radhicka Kapoor provided a crucial counterbalance to doomsday predictions by introducing concrete research data from int…
S12
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — – Deepak Bagla- Radhicka Kapoor – Deepak Bagla- Radhicka Kapoor- Sanjeev Bikhchandani – Radhicka Kapoor- Prashant Wari…
S13
Responsible AI in India Leadership Ethics & Global Impact — Thank you, Shantiri. I’m very happy to be here. Thank you for having me with the, you know, esteemed panelists here. You…
S14
AI Innovation in India — Deepak Bagla argues that India stands to benefit the most from AI as a transformative force due to its massive and growi…
S15
How AI Drives Innovation and Economic Growth — Yes. Thank you. Thanks very much. You know, I don’t want to minimize the existence of forces that may widen gaps. I thin…
S16
https://app.faicon.ai/ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-sidharth-madaan — Thank you. We’re at a very defining moment in the history of work. On one end, we’re seeing new possibilities, new produ…
S17
https://app.faicon.ai/ai-impact-summit-2026/multistakeholder-partnerships-for-thriving-ai-ecosystems — I think. I was part of the task force that was set up by. principal scientific advisor to Indian government, Professor K…
S18
(Day 1) General Debate – General Assembly, 79th session: morning session — Luiz Inácio Lula da Silva – Brazil: My greetings to the President of the General Assembly, Mr. Yang. I would like to gr…
S19
The future of work: preparing for automation and the gig economy — Concerns about the future of work also come from ongoing technological advancements in automation and AI. Some worry tha…
S20
Empowering Workers in the Age of AI — – **AI’s Impact on Jobs – Augmentation vs. Automation**: Research shows that while AI will affect many jobs, most impact…
S21
Bridging the Digital Divide for Transition to a Greener Economy — It is argued that trade policies should enable access to technologies, goods, and services required for the transition. …
S22
From summer disillusionment to autumn clarity: Ten lessons for AI — In contrast, the focus on existing harms – education, discrimination, job loss, etc. – frames the problem in terms of ac…
S23
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Managing the transition for the 3.3% of jobs at risk of full automation, particularly administrative roles held by women
S24
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — Furthermore, it highlights the significance of collaboration between the public and private sectors in future skills tra…
S25
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — This comment completely reframed the job displacement discussion, moving it from theoretical fears to empirical evidence…
S26
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Soi explains that while the long-term goal is institutional adoption where AI becomes intrinsic to the organization, cur…
S27
DC-DH: Health Digital Health & Selfcare – Can we replace Doctors in PHCs — Peter Preziosi argues that AI and technology can support healthcare workers by enhancing their capabilities. He emphasiz…
S28
AI for Social Empowerment_ Driving Change and Inclusion — India’s labor market is characterized by over 90% informal employment, meaning only one in ten people have formal sector…
S29
Building Inclusive Societies with AI — -Systemic challenges facing India’s informal workforce: The panel identified five key roadblocks – being discovered and …
S30
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — ## Areas of Consensus and Implementation Approaches ## European Union’s Comprehensive Policy Response Despite diverse …
S31
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Agreed with:Deepak Bagla — Need for comprehensive policy responses beyond just reskilling Agreed with:Radhicka Kapoor —…
S32
Contents — Beyond school and university-level education, a range of opportunities are currently available to workers looking to ite…
S33
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 127. The Inspector found that many organizations take a narrow approach to learning and talent management – one that is …
S34
Human rights — Job Displacement: Automation driven by AI can disrupt labor markets, leading to job displacement and economic inequality…
S35
Generative AI: Steam Engine of the Fourth Industrial Revolution? — AI must be implemented in a manner that aligns with ethical considerations and societal impact. This ensures that the po…
S36
Shaping the Future AI Strategies for Jobs and Economic Development — -Workforce Transformation and Job Impact: A central theme throughout both panels was whether AI will replace or enhance …
S37
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Radhicka Kapoor provided a more nuanced perspective, citing research showing that while most jobs will be exposed to AI …
S38
Who Benefits from Augmentation? / DAVOS 2025 — Kumar argues that AI can lead to increased productivity and the creation of new job opportunities. He suggests that this…
S39
AI: The Great Equaliser? — It is worth noting that the analysis acknowledges that AI technology may not significantly reduce job numbers. Instead, …
S40
AI for Social Empowerment_ Driving Change and Inclusion — Education and Skills System Overhaul:Investment requires fundamental reimagining rather than incremental improvement. Cu…
S41
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S42
Comprehensive Report: Preventing Jobless Growth in the Age of AI — – Valdis Dombrovskis- Jonas Prising- Elizabeth Shuler Economic | Future of work | Development Successful management of…
S43
AI for Social Empowerment_ Driving Change and Inclusion — The required policy responses span multiple domains:
S44
Building Trustworthy AI Foundations and Practical Pathways — “But similarly now, econ of maybe writing novels is gone.”[20]. “The movie industry is worried.”[21]. “That entire econo…
S45
Why science metters in global AI governance — She points out that predictions of massive job displacement require policies such as universal basic income, reskilling …
S46
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — These technological disparities will coincide with massive job displacement and economic disruption across all sectors s…
S47
How AI Drives Innovation and Economic Growth — Summary:Both speakers identified job displacement, particularly for entry-level and routine work, as a major risk that n…
S48
New Colours of Knowledge — Until 2013, the higher education and science system had no well-defined and distinctive employment policy, nor a policy …
S49
!” — In these circumstances, tailored redistributive policies are likely to be effective for promoting growth – for example, …
S50
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — Yeah, I just want to make a fairly random point, I think. And that is, in addition to the Artificial Intelligence for De…
S51
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — A participant with English Literature background references T.S. Eliot’s essay on traditional and individual talent, sug…
S52
Diplomacy Reimagined: Competencies 2040 | Talents, knowledge, and skills for diplomats in the AI Era — Talents, knowledge, and skills are interrelated. Talent often forms the foundation upon which skills and knowledge are b…
S53
Empowering India & the Global South Through AI Literacy — Explanation:The unexpected consensus emerges around the government’s commitment to introduce AI education from class thr…
S54
What are diplomatic competencies for the AI era? — Talents, knowledge, and skills are all interconnected. Talents form the foundation for building skills and knowledge. Th…
S55
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Bagla argues that we are entering an era without a playbook for managing AI disruption, and the next 5-10 years will be …
S56
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Next 5-10 years will bring unprecedented disruption requiring psychological preparation for job loss
S57
From summer disillusionment to autumn clarity: Ten lessons for AI — In contrast, the focus on existing harms – education, discrimination, job loss, etc. – frames the problem in terms of ac…
S58
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — 3.3% of jobs at risk of full automation, mainly administrative roles held by women, with higher risk in Global North
S59
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — I want to give a couple of examples of real things in our office. So we also invest in startups. So we’ve invested in ab…
S60
Labour market remains stable despite rapid AI adoption — Surveys show persistent anxiety aboutAI-driven job losses. Nearly three years after ChatGPT’s launch, labour data indica…
S61
Shaping the Future AI Strategies for Jobs and Economic Development — -Workforce Transformation and Job Impact: A central theme throughout both panels was whether AI will replace or enhance …
S62
DC-DH: Health Digital Health & Selfcare – Can we replace Doctors in PHCs — Peter Preziosi argues that AI and technology can support healthcare workers by enhancing their capabilities. He emphasiz…
S63
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — This comment completely reframed the job displacement discussion, moving it from theoretical fears to empirical evidence…
S64
Building Inclusive Societies with AI — What guardrails are needed to ensure technology augments rather than replaces workers
S65
The rise of tech giants in healthcare: How AI is reshaping life sciences — The intersection of technology and healthcareis rapidly evolving, fuelled by advancements in ΑΙ and driven by major tech…
S66
Building Inclusive Societies with AI — Artisan face sort of market access. Middlemen, dependents. Textile workers face skills and technology gaps, and trade wo…
S67
AI for Social Empowerment_ Driving Change and Inclusion — India’s labor market is characterized by over 90% informal employment, meaning only one in ten people have formal sector…
S68
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Historical precedent suggests new jobs will emerge long-term, but the transition period may be particularly challenging
S69
High Level Session 3: AI & the Future of Work — ### Panel Discussion: Key Themes and Debates Jonathan Charles: Good morning, ladies and gentlemen. Thank you for gettin…
S70
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-sidharth-madaan — Thank you. We’re at a very defining moment in the history of work. On one end, we’re seeing new possibilities, new produ…
S71
https://dig.watch/event/india-ai-impact-summit-2026/ai-innovation-in-india — It is going to be so fast. It is so rapid. And the biggest benefit there which comes is two things about India, which ar…
S72
https://app.faicon.ai/ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ananya-birla-birla-ai-labs — No single institution, no matter how large or how well resourced, can navigate this epoch alone. The journey from $4 tri…
S73
Thinking through Augmentation — In conclusion, workers’ concerns regarding job security in the face of technological advancements, especially in call ce…
S74
AI (and) education: Convergences between Chinese and European pedagogical practices — Future education may shift toward competency-based rather than duration-based degree programs
S75
Inclusive AI Starts with People Not Just Algorithms — Education, upskilling, and future skills for youth
S76
WS #283 AI Agents: Ensuring Responsible Deployment — Luciana Benotti: Sure. So apart from privacy that we already mentioned, I also wanted to talk about biases. And here I w…
S77
For the record: AI, creativity, and the future of music — These key comments collectively transformed what could have been a typical ‘humans vs. machines’ debate into a nuanced e…
S78
Opening of the session — This view is complemented by Ecuador’s endorsement of implementing the criminal justice instrument in tandem with a huma…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Deepak Bagla
5 arguments · 176 words per minute · 842 words · 286 seconds
Argument 1
No playbook; next 5‑10 years will be the toughest period of disruption (Deepak Bagla)
EXPLANATION
Deepak says we lack a playbook for the AI transition and predicts that the coming five to ten years will be the most challenging period of disruption for the labour market.
EVIDENCE
He states that “there’s no playbook” for the era we are entering [12] and adds that “next 5 years is going to be one of the toughest times of disruption” [16].
MAJOR DISCUSSION POINT
AI disruption timeline
DISAGREED WITH
Radhika
Argument 2
Introduce AI and tinkering at school level; focus on task‑oriented roles and psychological readiness for job change (Deepak Bagla)
EXPLANATION
Deepak describes initiatives to bring AI and hands‑on tinkering into schools, emphasizing task‑oriented learning and preparing students psychologically for future job changes, with an eye on reskilling.
EVIDENCE
He mentions that at the school level they are “trying to bring AI and tinkering” and that many students may move into task-oriented job profiles, stressing the need to learn psychologically how to cope with job loss and then pick up new skills for reskilling [21-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bagla’s point about early AI tinkering and task-oriented roles aligns with remarks that 13-year-olds could be ready for such work, indicating a shift in education timelines [S5].
MAJOR DISCUSSION POINT
Education and reskilling
AGREED WITH
Radhika, Sanjeev Bikhchandani, Prashant Warier
DISAGREED WITH
Radhika
Argument 3
Emphasise the application layer of the AI stack, enabling small, executable solutions at scale (Deepak Bagla)
EXPLANATION
Deepak argues that the most promising area for AI in India is the application layer, where small, quickly executable solutions can be built and scaled.
EVIDENCE
He says, “I think on the application side is where we will have… a very interesting play on actually the small ones and actually getting them executed” [130-132].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
In the panel discussion Bagla was cited as recommending that India concentrate on the application layer of the AI stack for rapid, scalable solutions [S5].
MAJOR DISCUSSION POINT
AI stack focus
Argument 4
Success = coordinated effort across government, academia, society; AI as a multiplier for India’s growth (Deepak Bagla)
EXPLANATION
Deepak stresses that AI success depends on joint action by government, academia and society, with AI acting as a multiplier for India’s overall development.
EVIDENCE
He notes that “when everyone works together the government the society the people the academia… joining the dots is absolutely core to seeing any element of success” and that “the biggest benefit of the delta multiplier of AI is India” [175].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of joint action across government, academia and society, and AI’s role as a growth multiplier for India, were reiterated in the discussion [S4][S14].
MAJOR DISCUSSION POINT
Coordinated AI strategy
AGREED WITH
Radhika
DISAGREED WITH
Sanjeev Bikhchandani, Prashant Warier, Radhika
Argument 5
AI challenges traditional degree value; younger learners (even 13‑year‑olds) may enter task‑based work, reshaping education pathways (Deepak Bagla)
EXPLANATION
Deepak observes that AI is eroding the perceived value of long‑duration degrees, with students questioning high tuition fees and younger individuals potentially entering task‑based employment, signalling a shift in education models.
EVIDENCE
He recounts a conversation with an Ivy League professor about master’s students questioning tuition because AI provides answers, and notes that “maybe that age barrier no longer remains you may have somebody who’s 13 year old and ready to do a job in a task” [134-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bagla’s observation that AI may erode the value of long-duration degrees and enable very young workers is supported by comments on 13-year-olds performing task-based jobs [S5].
MAJOR DISCUSSION POINT
Degree relevance and early talent
DISAGREED WITH
Sanjeev Bikhchandani
S
Speaker 1
1 argument · 191 words per minute · 474 words · 148 seconds
Argument 1
Work is at a defining moment with new productivity gains and growing anxiety (Speaker 1)
EXPLANATION
Speaker 1 frames the current era as a defining moment for work, highlighting emerging productivity opportunities alongside rising anxiety about disruption, especially for white‑collar jobs.
EVIDENCE
He opens with “We’re at a very defining moment in the history of work” and notes “new possibilities, new productivity unlocks, new jobs being created” while also mentioning “growing anxiety” about disruption to knowledge work [1-4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel opened by describing a defining moment for work, with new productivity opportunities and rising anxiety, matching this argument [S16].
MAJOR DISCUSSION POINT
Defining moment for work
R
Radhika
5 arguments · 186 words per minute · 1081 words · 346 seconds
Argument 1
Only 3‑6 % of jobs globally face near‑total automation; ~20 % will see some tasks automated, creating productivity gains (Radhika)
EXPLANATION
Radhika cites ILO research showing that only a small share of occupations (3‑6 %) are at high risk of full automation, while about 20 % will have some tasks automated, opening opportunities for productivity gains.
EVIDENCE
She references the ILO study reporting that the “share of jobs where almost all the tasks had a high likelihood of automation” is 3-4% globally and 6% in high-income countries, and that “about 20% of the jobs” will see partial automation [37-42].
MAJOR DISCUSSION POINT
Automation exposure statistics
Argument 2
Managing transition requires industrial, macro‑economic, trade and social‑protection policies, not just reskilling (Radhika)
EXPLANATION
Radhika argues that beyond reskilling, the transition to an AI‑augmented economy needs coordinated industrial, macro‑economic, trade, labour‑market and social‑protection policies to absorb displaced workers and support those with partially automated jobs.
EVIDENCE
She states that “we need to think about how they are going to be absorbed in other sectors” and that this will “require more than skilling and reskilling”; it also “requires thinking more carefully about industrial policy, macroeconomic policy, trade policies, labour market policies, in particular, social protection” [46-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kapoor’s call for broader industrial, macro-economic, trade and social-protection policies beyond reskilling is documented in the discussion [S5][S4].
MAJOR DISCUSSION POINT
Policy framework for transition
AGREED WITH
Deepak Bagla
DISAGREED WITH
Deepak Bagla
Argument 3
Informal sector (≈45 % agriculture, 55 % self‑employed) risks being left out of AI gains; needs digital infrastructure, financing, and AI adoption support (Radhika)
EXPLANATION
Radhika highlights that a large share of India’s workforce is in agriculture and self‑employment, and without digital infrastructure, broadband, and financing, these workers may miss out on AI‑driven productivity gains.
EVIDENCE
She provides data that “45 % of its workforce is still in the agricultural sector, 55 % self-employed, 95 % of employment is in enterprises with less than 10 workers” and calls for “digital infrastructure, access to broadband, financial support for adopting AI” for micro-SMEs [165-173].
MAJOR DISCUSSION POINT
AI inclusion for informal economy
Argument 4
Existing labour regulations must evolve to cover platform and non‑standard employment arrangements (Radhika)
EXPLANATION
Radhika points out that current labour laws lag behind the rise of gig and platform work, urging updates to conventions and recommendations to ensure decent work and social protection for non‑standard workers.
EVIDENCE
She notes that “labour regulations have not kept pace” with the proliferation of non-standard employer-employee relationships, references ILO discussions on conventions, and mentions India’s code and social security initiatives for platform workers [166-172].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to update labour regulations for platform and non-standard work arrangements was highlighted in the panel’s remarks [S4][S5].
MAJOR DISCUSSION POINT
Labour law adaptation
Argument 5
Inclusive AI transition delivering better, more productive jobs across agriculture and MSMEs without leaving the informal sector behind (Radhika)
EXPLANATION
Radhika envisions an AI transition that improves job quality and productivity in agriculture and MSMEs while ensuring the informal sector is not excluded.
EVIDENCE
In her rapid-fire response she says “an inclusive AI transition where we have better jobs, more productive jobs, and where the agricultural sector and the MSME sector have benefited… we don’t leave the informal sector behind” [178].
MAJOR DISCUSSION POINT
Inclusive AI outcomes
DISAGREED WITH
Deepak Bagla, Sanjeev Bikhchandani, Prashant Warier
S
Sanjeev Bikhchandani
3 arguments · 163 words per minute · 978 words · 358 seconds
Argument 1
Learn three new AI platforms each quarter (≈12 per year) to stay employable (Sanjeev Bikhchandani)
EXPLANATION
Sanjeev advises individuals to continuously upskill by mastering three AI tools every quarter, amounting to twelve new platforms per year, as a strategy to remain employable in an AI‑driven market.
EVIDENCE
He says, “Learn how to use three AI platforms every quarter” and adds that “by the end of one year, you know 12 AI platforms” [65-67].
MAJOR DISCUSSION POINT
Continuous AI skill acquisition
AGREED WITH
Deepak Bagla, Radhika
Argument 2
Elite degrees act as a filter of ability and commitment; however, people skills and experience remain essential for leadership roles (Sanjeev Bikhchandani)
EXPLANATION
Sanjeev argues that prestigious degrees signal ability and perseverance, but effective leadership still depends on people skills, maturity and experience that cannot be replaced by credentials alone.
EVIDENCE
He explains that an IIT degree is a “fantastic filter” for ability and commitment, yet emphasizes that “business is about people” and that leadership requires years of experience and maturity [144-154].
MAJOR DISCUSSION POINT
Degree signaling vs soft skills
DISAGREED WITH
Deepak Bagla
Argument 3
Net increase in jobs – created jobs exceed any displaced jobs (Sanjeev Bikhchandani)
EXPLANATION
Sanjeev defines success as a net rise in employment where the number of new jobs created outweighs any losses caused by automation.
EVIDENCE
He answers the rapid-fire question: “If there is a net job increase, which means the jobs lost, if any, are fewer than the jobs created, I think that is success” [177].
MAJOR DISCUSSION POINT
Net job creation
DISAGREED WITH
Deepak Bagla, Prashant Warier, Radhika
P
Prashant Warier
2 arguments · 210 words per minute · 840 words · 239 seconds
Argument 1
AI will upscale doctors by handling image interpretation, note‑taking, test recommendation and triage, but regulatory clearance and liability remain critical (Prashant Warier)
EXPLANATION
Prashant outlines how AI can augment healthcare by interpreting radiology images, automating note‑taking, recommending tests and triaging patients, while stressing that regulatory approval and liability issues limit full deployment.
EVIDENCE
He discusses radiology shortages, AI-based image interpretation, AI note-taking, test recommendation, triage, the need for FDA/CDSCO clearance, and that doctors retain liability for clinical decisions [99-119].
MAJOR DISCUSSION POINT
AI in healthcare
AGREED WITH
Deepak Bagla, Radhika, Sanjeev Bikhchandani
Argument 2
AI‑driven global GDP growth of 10 %+ by 2030 (Prashant Warier)
EXPLANATION
Prashant envisions AI contributing to a global GDP increase of at least ten percent by the year 2030.
EVIDENCE
In the rapid-fire round he says, “world’s GDP growing at 10% or more by 2030” [176].
MAJOR DISCUSSION POINT
Economic impact of AI
DISAGREED WITH
Deepak Bagla, Sanjeev Bikhchandani, Radhika
Agreements
Agreement Points
Continuous upskilling/reskilling is essential to remain employable in the AI era
Speakers: Deepak Bagla, Radhika, Sanjeev Bikhchandani
Introduce AI and tinkering at school level; focus on task‑oriented roles and psychological readiness for job change (Deepak Bagla)
Managing transition requires industrial, macro‑economic, trade and social‑protection policies, not just reskilling (Radhika)
Learn three new AI platforms each quarter (≈12 per year) to stay employable (Sanjeev Bikhchandani)
All three speakers stress that individuals must continuously acquire new AI-related skills – whether through early school programmes, broader reskilling policies, or personal commitment to mastering multiple AI tools – to cope with the coming disruption and stay employable [27-29][43-49][65-67].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors EU-wide strategies that stress digital skills development as a complement to infrastructure investment [S30] and is echoed in panel discussions calling for comprehensive policy beyond mere reskilling [S31]. Reports also highlight the under-development of lifelong learning systems and the need for coordinated investment in upskilling pathways [S32][S33][S40][S53].
AI will largely augment productivity and create new tasks rather than cause massive wholesale job loss
Speakers: Deepak Bagla, Radhika, Sanjeev Bikhchandani, Prashant Warier
Introduce AI and tinkering at school level; focus on task‑oriented roles and psychological readiness for job change (Deepak Bagla)
Only 3‑6 % of jobs face near‑total automation; ~20 % will see some tasks automated, yielding productivity gains (Radhika)
Learn three new AI platforms each quarter … to stay employable (Sanjeev Bikhchandani)
AI will upscale doctors by handling image interpretation, note‑taking, test recommendation and triage, but regulatory clearance and liability remain critical (Prashant Warier)
The panelists converge on the view that AI will mostly transform work by automating parts of tasks and enhancing productivity, creating new roles and opportunities rather than eliminating large numbers of jobs [24-25][41-43][65-67][99-104][119-121].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple expert panels emphasise “collaboration not displacement” and view AI as a productivity enhancer that reshapes tasks rather than eliminates jobs [S36][S38][S39].
A coordinated policy response beyond skill training is required to manage AI‑driven transition
Speakers: Deepak Bagla, Radhika
Success = coordinated effort across government, academia, society; AI as a multiplier for India’s growth (Deepak Bagla) Managing transition requires industrial, macro‑economic, trade and social‑protection policies, not just reskilling (Radhika)
Both speakers argue that effective AI transition depends on joint action among government, academia, industry and civil society, and on broader policy measures such as industrial strategy, macro-economic frameworks and social protection, not merely on individual reskilling [175][46-48].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions across EU and international forums underline the need for macro-economic measures, labour-market programmes and close education-industry collaboration, not just reskilling initiatives [S31][S42][S43].
Similar Viewpoints
Both emphasize that AI will not wipe out entire occupations; instead, it will affect specific tasks, requiring workers to adapt psychologically and through skill development [24-25][41-43].
Speakers: Deepak Bagla, Radhika
Introduce AI and tinkering at school level; focus on task‑oriented roles and psychological readiness for job change (Deepak Bagla). Only 3‑6 % of jobs face near‑total automation; ~20 % will see some tasks automated, yielding productivity gains (Radhika).
Both recognize that while prestigious degrees continue to signal ability, AI is reshaping the relevance of formal education and opening pathways for younger or non‑traditional talent [134-144][144-154].
Speakers: Deepak Bagla, Sanjeev Bikhchandani
AI challenges traditional degree value; younger learners (even 13‑year‑olds) may enter task‑based work, reshaping education pathways (Deepak Bagla). Elite degrees act as a filter of ability and commitment; however, people skills and experience remain essential for leadership roles (Sanjeev Bikhchandani).
Both stress that staying employable in the AI era requires systemic support (policy frameworks) combined with individual continuous learning [46-48][65-67].
Speakers: Radhika, Sanjeev Bikhchandani
Managing transition requires industrial, macro‑economic, trade and social‑protection policies, not just reskilling (Radhika). Learn three new AI platforms each quarter … to stay employable (Sanjeev Bikhchandani).
Unexpected Consensus
Degree relevance and early talent in the AI era
Speakers: Deepak Bagla, Sanjeev Bikhchandani
AI challenges traditional degree value; younger learners (even 13‑year‑olds) may enter task‑based work, reshaping education pathways (Deepak Bagla). Elite degrees act as a filter of ability and commitment; however, people skills and experience remain essential for leadership roles (Sanjeev Bikhchandani).
It is unexpected that both speakers, despite different emphases, converge on the notion that AI is altering the traditional role of long-duration degrees while still acknowledging the continued importance of demonstrated ability and soft skills. This dual recognition bridges the seemingly opposing views on education transformation [134-144][144-154].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs from India note the push to introduce AI education from primary school and to recognise talent beyond traditional degree pathways, highlighting the emergence of very young talent in the AI workforce [S51][S53].
Overall Assessment

The panel shows strong convergence on three core themes: (1) the necessity of continuous upskilling/reskilling; (2) AI as a productivity‑enhancing tool that will reshape tasks rather than eliminate whole occupations; and (3) the need for coordinated policy action beyond individual training. These shared positions cut across speakers from business, academia, and the tech sector, indicating a broad consensus on how to navigate the AI transition.

High – the majority of speakers align on the nature of AI’s impact (augmentation over displacement) and on the combined need for skill development and systemic policy support. This consensus suggests that future initiatives should prioritize integrated skill programmes, inclusive policy frameworks, and application‑focused AI deployment to harness AI’s growth potential while mitigating disruption.

Differences
Different Viewpoints
Magnitude of job displacement due to AI
Speakers: Deepak Bagla, Radhika
No playbook; next 5‑10 years will be the toughest period of disruption (Deepak Bagla). Only 3‑6 % of jobs globally face near‑total automation; ~20 % will see some tasks automated (Radhika).
Deepak warns that the coming five to ten years will be the toughest period of disruption and stresses psychological readiness for job loss, implying a large-scale impact [16][27-28]. Radhika cites ILO research showing that only a small share of occupations face full automation and that most impacts will be partial, suggesting a more limited displacement risk [37-42].
POLICY CONTEXT (KNOWLEDGE BASE)
Human-rights analyses flag potential large-scale displacement and call for reskilling programmes [S34]; research cited in panels suggests only 3-4 % of jobs face total automation while the rest experience task-level changes, indicating divergent views on displacement magnitude [S37][S45][S46].
Policy response focus – reskilling vs broader macro‑policy measures
Speakers: Deepak Bagla, Radhika
Introduce AI and tinkering at school level; focus on task‑oriented roles and psychological readiness for job change (Deepak Bagla). Managing transition requires industrial, macro‑economic, trade and social‑protection policies, not just reskilling (Radhika).
Deepak proposes early AI education, task-oriented learning and psychological preparation as the main route to cope with change [21-28]. Radhika argues that beyond skilling, coordinated industrial, macro-economic, trade and social-protection policies are essential to absorb displaced workers and support partially automated jobs [46-48].
POLICY CONTEXT (KNOWLEDGE BASE)
Experts argue that effective transition requires active labour-market policies, universal basic income considerations and macro-economic coordination, not solely reskilling schemes [S31][S42][S45][S49].
Relevance of formal degrees and the emergence of very young talent
Speakers: Deepak Bagla, Sanjeev Bikhchandani
AI challenges traditional degree value; younger learners (even 13‑year‑olds) may enter task‑based work, reshaping education pathways (Deepak Bagla). Elite degrees act as a filter of ability and commitment; however, people skills and experience remain essential for leadership roles (Sanjeev Bikhchandani).
Deepak suggests AI will erode the value of long-duration degrees and enable teenagers to perform task-based jobs, indicating a shift in education models [134-144]. Sanjeev counters that prestigious degrees still serve as strong signals of ability, but stresses that leadership depends on people skills and experience, limiting the role of early-stage talent [144-154].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates highlight tension between traditional degree pathways and early AI education initiatives, with some policymakers advocating for AI curricula from elementary levels to broaden talent pools [S51][S53].
What constitutes a successful AI transition
Speakers: Deepak Bagla, Sanjeev Bikhchandani, Prashant Warier, Radhika
Success = coordinated effort across government, academia, society; AI as a multiplier for India’s growth (Deepak Bagla). Net increase in jobs – created jobs exceed any displaced jobs (Sanjeev Bikhchandani). AI‑driven global GDP growth of 10 %+ by 2030 (Prashant Warier). Inclusive AI transition delivering better, more productive jobs across agriculture and MSMEs without leaving the informal sector behind (Radhika).
Deepak frames success as joint action among government, academia and society with AI acting as an economic multiplier for India [175]. Sanjeev defines success purely in terms of net job creation [177]. Prashant looks at macro-economic impact, targeting a 10 %+ rise in global GDP by 2030 [176]. Radhika emphasizes an inclusive transition that benefits agriculture, MSMEs and the informal sector, avoiding exclusion [178]. The speakers share a common goal of positive outcomes but diverge on the primary metric of success.
POLICY CONTEXT (KNOWLEDGE BASE)
Roadmaps and policy research stress that success hinges on robust macro-economic policy, strong labour markets and coordinated education-business partnerships, rather than isolated training programmes [S41][S42][S46].
Unexpected Differences
Optimism about AI as a growth multiplier vs concern that the informal sector may be excluded
Speakers: Deepak Bagla, Radhika
Success = coordinated effort across government, academia, society; AI as a multiplier for India’s growth (Deepak Bagla). Informal sector (≈45 % agriculture, 55 % self‑employed) risks being left out of AI gains; needs digital infrastructure, financing, and AI adoption support (Radhika).
While Deepak portrays AI as a universal growth engine for India, Radhika warns that without targeted support the large informal and agricultural workforce could miss out on AI benefits, highlighting a gap between a broad growth narrative and inclusive development concerns [175][165-173].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses note AI’s potential to boost productivity and create new jobs, yet warn that entry-level and informal workers risk being left behind without inclusive skill-development policies [S47][S39][S40].
Current hiring trends vs forecasted severe disruption
Speakers: Sanjeev Bikhchandani, Deepak Bagla
Naukri growth has not been impacted; we are not seeing a reduction in hiring (Sanjeev Bikhchandani). Next 5 years is going to be one of the toughest times of disruption (Deepak Bagla).
Sanjeev reports no immediate hiring slowdown, suggesting stability in the near term, whereas Deepak predicts a near-future period of intense disruption, indicating a divergence between observed market signals and projected systemic impact [57-58][16].
POLICY CONTEXT (KNOWLEDGE BASE)
Observations from recent panels show AI’s impact presently concentrated on white-collar task augmentation, while forecasts warn of broader disruption, especially for routine and entry-level roles, prompting calls for forward-looking labour policies [S36][S47][S50].
Overall Assessment

The panel shows substantial disagreement on the scale of AI‑driven job loss, the primary policy response (skill‑centric vs macro‑policy), the future relevance of formal education, and the metric for a successful AI transition. While there is consensus that AI will reshape work and that upskilling is needed, the speakers diverge sharply on how severe the disruption will be, which levers should dominate policy, and whether growth should be measured by GDP, net job creation, or inclusive outcomes.

High – the differing views on magnitude, policy focus, education pathways and success criteria indicate that stakeholders are not aligned on core strategic choices. This lack of consensus could lead to fragmented interventions, risking either over‑regulation or insufficient support for vulnerable groups, and may impede the formulation of a coherent national AI strategy.

Partial Agreements
All speakers acknowledge that AI will transform work and that upskilling/reskilling is necessary. Deepak stresses early education and psychological preparation, Radhika points to partial automation creating productivity gains, Sanjeev recommends continuous learning of AI tools, and Prashant highlights sector‑specific upskilling for doctors. They converge on the need for skill development but differ on the target audience, timing and sector focus [21-28][37-42][65-67][99-104].
Speakers: Deepak Bagla, Radhika, Sanjeev Bikhchandani, Prashant Warier
Introduce AI and tinkering at school level; focus on task‑oriented roles and psychological readiness for job change (Deepak Bagla). Only 3‑6 % of jobs globally face near‑total automation; ~20 % will see some tasks automated (Radhika). Learn three new AI platforms each quarter (~12 per year) to stay employable (Sanjeev Bikhchandani). AI will upscale doctors by handling image interpretation, note‑taking, test recommendation and triage (Prashant Warier).
All three agree that coordinated policy action is essential for a positive AI outcome. Deepak emphasizes multi‑stakeholder coordination, Radhika adds specific policy levers (industrial, macro‑economic, social protection), while Prashant focuses on macro‑economic growth targets. The common ground is the necessity of systemic policy, but the pathways (coordination vs specific policy instruments vs growth targets) differ [175][46-48][176].
Speakers: Deepak Bagla, Radhika, Prashant Warier
Success = coordinated effort across government, academia, society; AI as a multiplier for India’s growth (Deepak Bagla). Managing transition requires industrial, macro‑economic, trade and social‑protection policies, not just reskilling (Radhika). AI‑driven global GDP growth of 10 %+ by 2030 (Prashant Warier).
Takeaways
Key takeaways
AI-driven disruption will intensify over the next 5‑10 years, but there is no clear playbook; uncertainty is high.
Only a small share (3‑6 %) of occupations face near‑total automation; about 20 % will see partial task automation that can boost productivity.
Effective transition requires more than reskilling – it needs coordinated industrial, macro‑economic, trade, and social‑protection policies.
Continuous learning is essential: individuals should master multiple AI platforms regularly to stay employable.
Early exposure to AI (school‑level tinkering) and task‑oriented education can prepare future workers psychologically and skill‑wise.
In healthcare, AI will augment doctors (image interpretation, note‑taking, test recommendation, triage), but regulatory clearance and liability issues remain.
Traditional degree credentials are being challenged; younger, task‑focused talent may enter the labour market, yet people skills and experience remain critical for leadership roles.
The informal sector (agriculture, self‑employment, micro‑SMEs) risks being left behind; it needs digital infrastructure, financing, and AI adoption support.
Labour laws must evolve to cover platform and non‑standard employment arrangements.
Within the AI stack, the application layer (small, executable solutions) should be prioritised for rapid impact in India.
Success by 2030 is envisioned as a coordinated, inclusive AI transition that drives net job creation, raises productivity, and contributes to high global GDP growth.
Resolutions and action items
Introduce AI and tinkering programmes at the school level to build early familiarity and task‑oriented mindsets (Deepak Bagla).
Encourage individuals to learn at least three new AI platforms each quarter (≈12 per year) to maintain employability (Sanjeev Bikhchandani).
Develop and implement broader policy packages – industrial, macro‑economic, trade, and social‑protection measures – to support workers displaced by automation (Radhika).
Update labour regulations to cover platform and gig work, including social security provisions for non‑standard employment (Radhika).
Prioritise development and scaling of AI applications (the application layer of the AI stack) that can be executed by small firms (Deepak Bagla).
Provide digital infrastructure, broadband access, and financing mechanisms to enable AI adoption in micro‑SMEs and the agricultural sector (Radhika).
Create regulatory pathways (e.g., faster FDA/CDSCO clearances) for AI tools in healthcare while defining liability frameworks (Prashant Warier).
Unresolved issues
Exact magnitude and timing of job displacement versus job creation remain unknown.
How to operationalise the suggested industrial and macro‑economic policy interventions at national and state levels is unresolved.
Specific mechanisms for financing AI adoption in micro‑SMEs and the agricultural sector are not detailed.
Concrete steps to redesign the higher‑education system to balance degree value with task‑based skill acquisition are not resolved.
Clear regulatory and liability frameworks for AI‑driven clinical decision support are still pending.
Methods to measure and monitor the transition of tasks within occupations over time are not established.
Suggested compromises
Balance reskilling initiatives with broader industrial and social‑protection policies rather than relying solely on skill upgrades.
Combine the push for AI‑driven productivity gains with safeguards for workers likely to be displaced, ensuring net job creation.
Acknowledge the continued relevance of traditional degrees as filters of ability while also promoting early, task‑oriented AI education for younger entrants.
Thought Provoking Comments
The only job that would never change in banking was the teller – and it was the first to disappear when digitisation arrived.
Highlights that even the most ‘secure’ roles can be upended by technology, underscoring the absence of a safe haven and the need for a new playbook.
Set the tone of uncertainty for the whole panel, prompting others to frame their answers around disruption timelines and the necessity of reskilling.
Speaker: Deepak Bagla
Only 3‑4 % of jobs globally have a high likelihood of full automation; about 20 % will see some tasks automated, freeing time for new tasks.
Provides concrete, data‑driven nuance that counters the doomsday narrative and introduces the idea of partial automation as an opportunity rather than a threat.
Shifted the conversation from fear‑based speculation to a more balanced view, leading the panel to discuss targeted policy measures (social protection, industrial policy) for the small displaced group while focusing on productivity gains for the majority.
Speaker: Radhika
Learn how to use three AI platforms every quarter – by the end of a year you’ll have twelve and will be employable.
Offers a concrete, actionable personal strategy rooted in a historical analogy (PC literacy in the 1980s) that makes the abstract threat of AI tangible.
Moved the discussion from macro‑level uncertainty to practical guidance for individuals, and reinforced the earlier point that technology adoption can be a career safeguard.
Speaker: Sanjeev Bikhchandani
In radiology India has one radiologist per 100,000 people; AI can upscale doctors and health‑care workers, but regulation and liability will keep doctors in the loop for at least 5‑10 years.
Combines sector‑specific data (radiologist shortage) with realistic constraints (regulatory approval, liability), illustrating both the upside and limits of AI in a critical field.
Introduced a sector‑focused thread, prompting the panel to consider how AI’s role varies across industries and to acknowledge that adoption is not uniform.
Speaker: Prashant Warier
Master’s students feel they don’t need to pay high tuition because AI gives them all the answers – the age barrier may disappear and task‑based hiring could replace traditional degree pathways.
Challenges the entrenched higher‑education model, suggesting AI could flatten credential hierarchies and enable very young workers to enter the labour market.
Created a turning point toward discussing the future of education and hiring, leading Sanjeev and others to reflect on the continued relevance of credentials versus skills.
Speaker: Deepak Bagla
The AI conversation currently covers only about 10 % of India’s workforce; the informal sector (45 % agricultural, 55 % self‑employed, 95 % in firms <10 workers) risks being left behind unless we address digital infrastructure, finance and social protection.
Broadens the scope from formal employment to the vast informal economy, highlighting a major blind spot in most AI‑of‑work debates.
Redirected the panel to consider macro‑level inclusion policies and the need for infrastructure investment, setting up the final rapid‑fire reflections on inclusive AI transition.
Speaker: Radhika
IIT degrees are valued not for the specific knowledge but as a filter of commitment, perseverance and problem‑solving ability; however, people are still people – experience and maturity matter beyond credentials.
Provides a nuanced view that while credentials signal certain traits, they cannot replace the human elements of leadership and teamwork, tempering the earlier push toward pure skill‑based hiring.
Balanced the earlier discussion on credential erosion, reinforcing that AI‑driven task automation will still require human soft skills, and nudged the conversation toward a hybrid future of work.
Speaker: Sanjeev Bikhchandani
Success for AI by 2030 means an inclusive transition where better, more productive jobs exist for the agricultural and MSME sectors and the informal workforce is not left behind.
Synthesises the panel’s earlier points into a concise vision that ties productivity, inclusivity, and sectoral balance together.
Served as a concluding anchor, tying together the disparate threads (reskilling, policy, sector‑specific impacts, education) into a shared goal for the audience.
Speaker: Radhika (rapid‑fire)
Overall Assessment

The discussion was steered by a handful of data‑rich and perspective‑shifting remarks. Deepak Bagla’s opening anecdote shattered the myth of ‘safe’ jobs, prompting a focus on uncertainty. Radhika’s automation statistics reframed the narrative from catastrophic loss to nuanced, partial disruption, which opened space for policy‑oriented dialogue. Sanjeev’s historical analogy and concrete learning roadmap gave the audience actionable takeaways, while Prashant’s sector‑specific analysis showed how AI’s impact varies across industries. Deepak’s education‑disruption insight and Radhika’s reminder of the informal‑sector majority broadened the conversation beyond the formal economy. Sanjeev’s credential‑vs‑skill comment added balance, preventing an overly technocratic view. Collectively, these pivotal comments redirected the panel from speculative fear to a layered, inclusive vision of AI‑driven work, shaping both the tone and the substantive direction of the entire discussion.

Follow-up Questions
How can we develop a more granular, task‑level understanding of which parts of occupations are automatable versus those that remain safe?
A detailed task‑based analysis is essential to target reskilling programs and avoid over‑ or under‑estimating job loss.
Speaker: Radhika
What industrial, macro‑economic, trade, labour‑market and social‑protection policies are needed to absorb workers displaced by AI?
Beyond skilling, comprehensive policy design is required to ensure displaced workers can transition to other sectors.
Speaker: Radhika
What regulatory frameworks are required for AI applications in healthcare, especially concerning approval (e.g., FDA, CDSCO) and liability for clinical decisions?
Clear regulations are critical to safely deploy AI tools while addressing liability and ensuring trust in medical settings.
Speaker: Prashant Warier
How large could the emerging task‑creation workforce be, including very young workers (e.g., 13‑year‑olds), and what types of tasks will they perform?
Quantifying this new labour pool will help anticipate shifts in employment patterns and education needs.
Speaker: Deepak Bagla
What will be the future relevance of traditional higher‑education pathways (bachelor’s, master’s) in an AI‑driven economy?
Understanding the evolving value of degrees versus skill‑based credentials will guide education policy and investment.
Speaker: Deepak Bagla, Sanjeev Bikhchandani
Will hiring practices shift from degree pedigree to demonstrated skill fluency and basic skills, and how should organisations adapt?
If employers prioritize skills over credentials, recruitment, training and talent development strategies will need to change.
Speaker: Sanjeev Bikhchandani
What are the barriers and enablers for AI adoption in agriculture, micro‑small enterprises, and the informal sector (e.g., financing, broadband access, digital infrastructure)?
Addressing these factors is vital to ensure that the majority of India’s workforce benefits from AI gains.
Speaker: Radhika
How should labour laws be updated to protect gig and platform workers and other non‑standard employment arrangements?
Current regulations lag behind the platform economy; new legal frameworks are needed for decent work and social security.
Speaker: Radhika
What metrics should define success of AI‑driven growth by 2030 (e.g., GDP growth rate, net job creation, inclusive transition for informal sector)?
Establishing clear, inclusive success indicators will guide policy and investment decisions.
Speaker: Deepak Bagla, Prashant Warier, Sanjeev Bikhchandani, Radhika
What are the psychological impacts of potential job displacement and how can workers be prepared mentally for a rapidly changing labour market?
Understanding mental resilience and coping mechanisms is crucial for a smooth transition and workforce wellbeing.
Speaker: Deepak Bagla

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Panel Discussion

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel on adoption and acceleration of AI brought together leaders from philanthropy, finance, and government to discuss how artificial intelligence can be deployed responsibly worldwide [5-15]. Moderator Rudra Chaudhry framed the debate as a tension between policy and adoption, noting recent calls from India and France for responsible diffusion of AI [22-27].


Rwanda’s Minister Paula Ingabire explained that the country adopts an adaptive regulatory posture, first building concrete use-cases and then tailoring rules to the specific problems being solved rather than imposing abstract frameworks [36-44]. She emphasized that partnerships must include capacity-building, with foreign firms co-developing solutions and training local talent to create a closed loop between regulation and implementation [45-50]. Rwanda is also constructing a national data hub and has already enacted a data-protection and privacy law to safeguard sovereignty and ensure that AI deployments are safe “by design” [51-54].


When asked about a global AI compact, Ingabire said it is feasible but must accommodate diverse cultural and linguistic contexts and be built on non-negotiable shared standards that can be contextualized locally [61-66]. Terah Lyons of JPMorgan Chase observed that the fundamental policy questions raised a decade ago (fairness, transparency, interoperability) remain unchanged, but the field has moved from theoretical debate to large-scale commercial application, making governance issues more urgent [71-80]. She argued that the hardest challenges are not technical but human and institutional, requiring trustworthy, responsibly scaled AI to gain public acceptance [82-88].


John Palfrey of the MacArthur Foundation stressed that AI must serve humans, calling for a stable regulatory regime and highlighting philanthropy’s role in funding civil-society voices that can shape ethical deployment [95-99][101-110]. He noted that philanthropic commitments of over a billion dollars are being directed toward AI research and collaboration, but sustained impact depends on continued support for civil society and academia [120-121].


Lyons added that the financial sector’s long-standing risk-management practices and its need for regulatory harmonisation across more than 100 countries illustrate how use-case-level governance can enable safe, large-scale AI deployment [156-174]. Both speakers called for broader multi-stakeholder participation, urging inclusion of deployers from retail, energy, manufacturing and especially voices from the Global South in future summit venues such as Switzerland or Kigali [179-186][205-216].


The discussion concluded that quantifying impact, strengthening South-South cooperation, and institutionalising continuous dialogue are essential steps to turn AI’s promise into tangible, equitable benefits worldwide [205-209][215-216].


Keypoints


Major discussion points


Rwanda’s adaptive, use-case-driven regulatory approach and capacity-building partnerships – The minister explains that Rwanda prefers to identify high-impact AI use cases first and then craft specific regulations, rather than imposing abstract rules, and stresses the importance of co-development with partners to build local expertise and ensure data sovereignty [40-44][45-49][50-54].


Possibility of a global AI compact that respects diverse contexts – The moderator asks whether a worldwide agreement on AI risks is feasible, and the minister replies that it is possible but must accommodate cultural, linguistic, and national differences, establishing non-negotiable shared standards while allowing contextual adaptation [59-60][61-65].


Human-centred AI and the shift from technical to institutional challenges – The finance executive notes that the toughest issues are not technical breakthroughs but making AI useful for people, building trust, and ensuring responsible scaling; governance and trust are seen as the cornerstones of diffusion [82-88].


Philanthropy and multi-stakeholder involvement as a catalyst for responsible AI – The foundation president argues that philanthropy must fund civil-society voices, foster partnerships (e.g., with the Center for Exponential Change), and support long-term, stable financing for AI initiatives that serve humanity rather than pure technological advancement [93-99][101-108][119-121].


Need for regulatory harmonisation and risk-management frameworks for cross-border deployment – The banking leader highlights that large financial institutions have built risk-management “muscles” that can be applied to AI, calling for sector-specific oversight, use-case-level risk assessment, and greater international regulatory alignment to enable scalable, responsible AI deployment [160-169][171-174].


Overall purpose / goal of the discussion


The panel was convened to explore how AI can be adopted and accelerated globally while balancing policy, governance, and practical deployment. Participants shared national experiences, industry perspectives, and philanthropic viewpoints to identify concrete pathways (adaptive regulation, capacity-building, trustworthy diffusion, and international cooperation) that can turn AI’s potential into inclusive, sustainable societal benefits.


Overall tone and its evolution


– The conversation begins in a formal, courteous register, with introductory remarks and acknowledgments of the summit’s significance.


– It quickly becomes inquisitive and probing, as the moderator challenges speakers on the feasibility of global compacts and the practicalities of scaling AI.


– Throughout, the tone remains constructive and collaborative, emphasizing shared learning, optimism about AI’s benefits, and a commitment to responsible implementation.


– By the end, the tone shifts to forward-looking and action-oriented, with specific calls for more diverse stakeholder participation, South-South cooperation, and concrete next steps for the summit’s institutional legacy.


Overall, the discussion maintains a positive, solution-focused atmosphere, moving from framing the challenges to proposing collaborative mechanisms for responsible AI diffusion.


Speakers

Speaker 1


– Role/Title: Event host/moderator (introduces panel)


– Area of Expertise: 


Rudra Chaudhry


– Role/Title: Vice President, Observer Research Foundation; Moderator of the panel


– Area of Expertise: AI policy, governance, adoption


Paula Ingabire


– Role/Title: Minister of ICT and Innovation, Government of Rwanda


– Area of Expertise: Digital transformation, AI governance, data sovereignty


John Palfrey


– Role/Title: President, John D. and Catherine T. MacArthur Foundation


– Area of Expertise: Philanthropy, AI ethics, law


Terah Lyons


– Role/Title: Managing Director & Global Head of AI and Data Policy, JPMorgan Chase


– Area of Expertise: AI policy, finance, risk management


Additional speakers:


Stephen Bird


– Role/Title: Global Head of Thematic Research, Morgan Stanley


– Area of Expertise: Investment analysis, AI market assessment


Full session reportComprehensive analysis and detailed insights

Opening – The host opened the panel on Adoption and Acceleration of Artificial Intelligence with a formal welcome, invoking the summit’s theme “AI for all” and stressing the need for every nation’s contribution to global cooperation [1-2]. The moderator, Rudra Chaudhry (Observer Research Foundation), introduced the panelists as John Palfrey (MacArthur Foundation), Terah Lyons (JPMorgan Chase), and Her Excellency Paula Ingabire (Rwanda, Minister of ICT & Innovation), and noted that Stephen Bird would be joining later but did not speak during the recorded segment [12-15].


Framing the debate – Chaudhry contrasted the twin pressures of rapid AI adoption and the need for robust policy, citing recent calls from India’s prime minister and France’s president for responsible diffusion of AI in the Global South [22-27].


Rwanda’s adaptive, use-case-driven regulatory model – Ingabire explained that Rwanda first identifies AI applications that deliver the greatest societal benefit and then crafts regulations specific to those use-cases, avoiding abstract, one-size-fits-all rules [40-44]. She highlighted that more than 70% of Rwanda’s population are in the youth bracket, a demographic that shapes the country’s AI priorities [84-86]. Partnerships are central: foreign firms must co-develop solutions and train Rwandan staff, creating a “closed loop” between implementation and regulation [45-49]. To safeguard data sovereignty, Rwanda is building a national data hub [78-80] and has enacted a data-protection and privacy law that protects personal data “by design” [71-73].


Global AI compact – When Chaudhry asked whether a universal AI compact was realistic, Ingabire answered affirmatively, insisting that any compact must accommodate diverse cultural, linguistic and national contexts while establishing a set of non-negotiable core standards that can be locally contextualised [59-66].


Human-centred and institutional challenges – Lyons shifted the focus from technical breakthroughs to institutional issues. She noted that the policy questions raised during the Obama administration (fairness, transparency, interoperability, bias mitigation) remain unchanged, but the field has moved from theoretical debate to large-scale commercial deployment, making governance issues more urgent [71-80]. Lyons argued that the most pressing challenges are institutional and trust-related rather than purely technical [82-88].


Human-centred regulation and civil-society funding – Palfrey echoed the human-centred stance, arguing that AI should be regulated like any other technology to serve people rather than being treated as a “magical” entity [95-99]. He emphasized the MacArthur Foundation’s role in championing a stable, people-first regulatory regime and stressed that civil-society voices must be funded to influence AI governance [101-108]. Palfrey disclosed that philanthropic initiatives have already mobilised over a billion dollars in commitments for humanity-focused AI research and collaboration [115-119].


Risk-management expertise and regulatory harmonisation – Lyons described JPMorgan Chase’s risk-management approach as a template for AI governance. The bank has been deploying AI at the use-case level for more than a decade, applying sector-specific oversight and proportionate risk assessment [156-162]. She argued that scaling AI globally requires regulatory harmonisation to provide consistent rules for multinational operators, linking this need to the broader discussion of sovereign AI and a global baseline [172-174].


Financial sustainability of AI diffusion – Chaudhry probed whether Rwanda had an operational-expenditure (OPEX) or revenue model to support long-term deployment [124-128]. Ingabire replied that value should not be measured solely in monetary terms; instead, AI’s impact is judged by improvements in health diagnostics, education quality, and agricultural productivity, which in turn raise incomes and reduce poverty [129-146]. While acknowledging the importance of financial metrics, she stressed that societal benefits and capacity-building are the primary drivers of Rwanda’s AI strategy.


Forward-looking questions and summit format – The moderator invited the panelists to comment on future summit cycles. Lyons highlighted the need for broader stakeholder inclusion from sectors such as retail, energy and manufacturing; Palfrey called for sustained philanthropic support for civil-society participation; Ingabire suggested taking the summit to the African continent, with Rwanda happy to host, to deepen South-South cooperation [179-183][205-216][217-218].


Closing – Chaudhry thanked the participants and organisers, reaffirming the summit’s commitment to turning AI’s promise into tangible, equitable benefits worldwide [219-220].


Key take-aways


– AI for all requires active participation from every country and a focus on equitable, human-centred outcomes [1-2].


– Regulation should be adaptive, evidence-based, and driven by concrete use-cases, complemented by a global baseline of non-negotiable standards [40-44][172-174].


– Partnerships that embed capacity-building protect data sovereignty and ensure local ownership [45-49][78-80][71-73].


– The most pressing challenges are institutional (building trust, responsible scaling, and sustainable financing) rather than technical [82-88].


– Philanthropy must continue to fund civil-society voices; more than $1 billion has already been pledged for humanity-focused AI initiatives [115-119].


– Financial institutions bring mature risk-management practices that can inform sector-specific oversight and support regulatory harmonisation [156-162][172-174].


– Future summit processes should broaden stakeholder representation, especially from the Global South, and consider Kigali as a host to deepen inclusive AI governance [205-216][217-218].


Session transcriptComplete transcript of the session
Speaker 1

Thank you so much, Your Excellency, Ebba Busch, for your valuable insights and for elevating the summit. And it’s really interesting to listen to the perspectives of countries like Sweden, because when we talk of AI for all and global cooperation, the role of each and every country becomes very, very important. Ladies and gentlemen, before I move on, I need to announce that there’s a RuPay card which we found. If somebody has lost this RuPay card, though I don’t know how much money is there, but if you’ve lost this RuPay card, kindly come to me and collect it from me. Thank you. And ladies and gentlemen, now we move to the next panel discussion, which is on adoption and acceleration of artificial intelligence.

The panelists joining us represent some of the most thoughtful voices on how AI is being built and adopted around the world. Mr. John Palfrey is the president of the John D. and Catherine T. MacArthur Foundation, one of the world’s most influential philanthropies, where he has championed the idea that technology must serve the public interest. His perspective on how AI can be deployed equitably, not just efficiently, is essential to the conversation. Ms. Terah Lyons is the managing director and global head of AI and data policy at JPMorgan Chase. At one of the world’s largest financial institutions, she is navigating the frontier where AI meets regulation, risk and responsible deployment, ensuring that AI in finance is not just powerful, but trustworthy.

Her Excellency Paula Ingabire is the minister of ICT and innovation for the government of Rwanda. Under her leadership, Rwanda has emerged as one of Africa’s most ambitious digital economies, proving that visionary governance can leapfrog traditional development pathways. And we also have Mr. Stephen Bird as a panelist, who is the global head of thematic research at Morgan Stanley, bringing the investor’s lens to the question of which AI bets are real and which are hype. And this discussion will be moderated by Mr. Rudra Chaudhry, Vice President of the Observer Research Foundation. Ladies and gentlemen, please join me in welcoming Mr. John Palfrey, Ms. Terah Lyons, Her Excellency Paula Ingabire, and also Mr. Rudra Chaudhry. Please kindly come to the stage for this very interesting conversation, a panel on adoption and acceleration of AI.

Mr. Bird will be joining us very soon. Thank you.

Rudra Chaudhry

All right. Hi, everyone. There’s a good bit of distance between me and the panelists, which might be a good thing. We’ll see. We’ve got about 25 minutes, so I’m going to keep it quite swift. The general panel is about policy on the one side, adoption on the other. And I wonder if that’s actually the case. Yesterday in the inaugural, the prime minister made very clear that adoption is a huge opportunity for India and other parts of the global south. But we have to do it responsibly. President Macron made a very similar pitch in his inaugural speech. And I want to start with that framing. And I want to come to you, Minister. Rwanda is a fascinating country in general.

But you’re particularly fascinating on the African continent because you were way ahead of the AI curve in a sense. You invested in a startup ecosystem. You were looking at scale before many of us thought of use case scales. Give us a sense of how Rwanda manages these minefields between governance and policy on the one side and adoption at population scale on the other.

Paula Ingabire

Thank you very much, Rudy, and great to see you all. I think for us, the decision has always been clear around how we leverage technology as a country to drive socioeconomic development. And so with AI, like many other technologies that we’ve experimented with as a country, we took the same posture. And so the idea was figuring out how we leverage this particular technology to address societal challenges. And there were certain trade-offs that we had to make. When it comes to governance, it was a posture around: rather than trying to focus more on regulating, we’d rather figure out where we see AI creating the biggest benefits and gains for society. And then we’re able to build regulations according to the use cases that we’re implementing.

And so the regulatory posture that we take then is more adaptive. And it’s one that is evidence-based, because we’re already building use cases, using that today. And so we’re able to determine what kind of regulations are needed, and they’re very specific to the problems that we are solving, as opposed to trying to create a very abstract regulatory framework, which may not necessarily address whatever risks and concerns we foresee. The second one has always been on partnerships, because that’s been key. The level of digital development that we’ve achieved as a country is thanks to the various partners that we’ve been able to attract into Rwanda. But partnerships, we also look at very closely, to determine how we make sure that these partnerships are helping us to build capacity.

So, for example, we’re not going to acquire a foreign solution, invite them to train on our data and just leave us with an application. We want them to be able to train our people, co-develop this with our people, so that at least we have the skill set and the mastery of what we’re trying to deploy, which will then create that closed loop around the regulatory environment that we put in place. And last, again, I think it’s a conversation that we’ve had throughout this week around sovereignty, thinking about data sovereignty. By design, we’re building our national data hub, and we’re really making sure we understand, you know, what are the guardrails that we put in place.

We don’t want to wait for a crisis to start, you know, worrying about who is using our data, what are they accessing that for. And so we started with already putting in place the data protection and privacy law that governs how you collect, use, and process data. And that has been the foundation through which we can then start to ensure that everything that we do from a data sovereignty perspective, we’re doing it by design.

Rudra Chaudhry

So I’m going to come back to the question on the benefits of AI for all of you, and for you, Minister, in a minute. You know, this entire summit process started with Bletchley, where I think the general philosophy was: can we come to some kind of a global compact when it comes to risk and risk aversion, when it comes to early warning systems? The institutional outcome was these AI safety institutes that were built out. Can I ask a challenging question? From your perspective, is a global compact on something like AI actually possible? Or are there norms that we should generally be thinking about and fitting into our national jurisdictions?

Paula Ingabire

So I believe a global compact is possible. However, it has to reflect the different contexts, cultural, linguistic, everything. And so to a certain extent, what you’re looking at is what are some of those shared standards that we all subscribe to as countries, which are non-negotiables for everyone that is building and deploying AI products and solutions. And then obviously, you then get to contextualize it to whatever problems that you’re solving for. And so, again, it’s going to come back to what are nations deploying AI to solve for? And how do we make sure that these standards are reflective of what we’re looking to adopt through the global compact?

Rudra Chaudhry

Terah, if I could come to you. You’re leading AI at J.P. Morgan, and you’ve been in the Obama administration, in a very different office, on science and technology policy, way before the AI wave kind of hit us, although people have been working on AI for three decades now. Just give us a sense, before I come to the immediate: take us back to the second term of the Obama administration. Give us a sense of how you were thinking about AI.

Terah Lyons

Well, I would say that era was the first in which global governments started considering AI policy questions at all. And honestly, a lot of the same questions were being asked then as are being asked now. The question of global governance that the minister just spoke to, I think, was top of mind then as it is today. Questions of standards generation and interoperability were certainly part of the conversation. Issues of fairness, transparency, bias mitigation. sort of localization and other questions were all very much germane. So, you know, in many respects, the field has completely transformed, especially from a commercial perspective, given the level of investment that we’re seeing globally in the last five years, especially. But in many other respects, the foundational questions remain the same that policymakers were considering over 10 years ago.

And those questions, I think, are applicable in a lot of different directions. You know, I think one of the big differences in the current moment is that I really feel like we’ve moved from an era where these conversations have been more theoretical to an era in which they are much more applied, and made much more real by the questions being asked by organizations like ours, for example, as AI-deploying entities. The issues of applied AI organizations are really where the rubber meets the road when it comes to these governance issues that we’re talking about from the stage, and that policymakers have been considering for the last decade.

Rudra Chaudhry

So I think if I talk to most people who’ve been to the first three summits, and I talk to them about this summit, there’s a lot of energy, there’s a lot of discussion on use cases, diffusion, getting this out to humanity, getting it out to people. And now we have to work downstream and upstream and figure out how best to do the diffusion piece. Let me ask you a question: you’ve been here for three, four days for the summit. What’s really struck you in terms of the diffusion argument, the adoption argument? And then, if you put your policy and regulatory lens to it, what are you thinking right now?

Terah Lyons

Well, maybe this is a controversial answer, but I’ll try it on for size here: I don’t think the hardest questions in this field are technical right now. I think they are questions of human issues and institutional issues. And I hear that no matter where I am, talking to clients and other large enterprises, speaking to governments globally, whether in New York, California, Brussels, or Delhi here this week. The hard problem really isn’t frontier advancement right now. It’s actually making this technology useful to real organizations and making it helpful to real people in their everyday lives. And core to that set of issues are the governance questions that have been so top of mind here at the summit, I think.

And questions of how we scale responsibly, how we engender trust in the technology, because in order for AI to be useful, it has to be applied. And in order for it to be applied and widely adopted, it needs to be trusted. And so these are, I think, are cornerstones of what we need to be thinking about when we’re actually thinking about the frontier of AI in many ways.

Rudra Chaudhry

John, you run one of the most important organizations in the world, and one of the largest philanthropic organizations in the world. If there are students here, you should corner John afterwards for all sorts of things, and likewise if there are professors in the audience. But you’ve also got a very strong legal background. So the same question I put to Terah: when you think of diffusion, when you think of impact use cases, and you think of what Paula said, which is that we have to be adaptive about the regulatory architecture, where are you at?

John Palfrey

Rudy, thank you. And first, let me please, on behalf of the MacArthur Foundation, congratulate our hosts in India. What a wonderful global stage to be on, to be having this important conversation. The point of view that I come from, as a law professor and as leader of a philanthropy, the MacArthur Foundation, is, of course, that we need to make the technology, the AI, work for humans, and to put humans at the center. And I’ve been delighted on this main stage and throughout the summit to hear that as the focus here in India and, of course, around the world. And I think the way to do that is not to treat the AI as something magical and separate, but rather connected to all of the things that we’re trying to do.

So whether it’s lifting people out of poverty or improving health care or a bank providing capital as needed, we need a stable regulatory regime that makes that possible and puts humans at the center, rather than just seeking to advance the technology at all costs and then treating it as something magical and other than forms of mathematics, forms of science, that we have been able through human history to regulate so that it serves humans, not for its own sake.

Rudra Chaudhry

From your perspective in terms of philanthropy, but also from the perspective perhaps of peers that you talk to, is the current moment with the verb for adoption, the verb for getting this out to people, changing the way you’re thinking about grantees, partners, and the philosophical way in which you’re thinking about releasing money?

John Palfrey

Yes and no. I think there are some constants in philanthropy that are very important, and maybe more important than ever in this moment. You think about the amount of capital that is flowing towards AI and its development, mostly of course by the private sector, sometimes by sovereign wealth funds and so forth. What we need to ensure is that civil society has a voice. And of course, again, I credit our hosts for including civil society in this conversation and continuing to do that from Bletchley to today and onward. And the civil society world doesn’t come for free. Somebody has to pay for it, right?

And philanthropy has been historically the source of funding for that. And I’m very impressed by the Indian philanthropic environment that is developing. We’re excited to partner with the Center for Exponential Change and others who are developing both homegrown philanthropy and ideas that are coming from India to the rest of the world. But if we don’t invest in civil society, there will be many, many fewer voices able to bring the kind of sensibility that we’re talking about to the world. It doesn’t come without actually thinking about it carefully. So, no: we are thinking about long-term capital, for academia, for organizations. And I think, of course, about the Observer Research Foundation, which you’re involved in, and the Partnership on AI, for which Terah was the founding ED.

These organizations, along with academia, are going to be able to bring the kind of sensibility that we’re talking about to the world. And that has to be funded in a stable, long-term way by philanthropy.

We’ve been able, with colleagues, to raise half a billion dollars for Humanity AI, an effort in the US, and close to that amount for Current AI, led by Martin Tisné, and the AI Collaborative for global efforts. So we’re over a billion dollars in commitments between these two efforts, but we have to be

Rudra Chaudhry

Minister, let me ask you a question. You talked about the benefits of AI in Rwanda. Can you open that box up for us a little bit? You know, there are a lot of arguments about how this stuff is going to pay for itself. Use case and diffusion is all great, but is there an OPEX model or a revenue model for beneficial deployment? It needs to be sustainable over a period of time. And there’s another argument which says that when people actually start using things that are useful, and they see value in it, the rest will follow. What are your citizens in Rwanda feeling in terms of value?

Paula Ingabire

So I’ll differ a little bit, because I think value cannot just be seen in monetary terms, in how we are going to have a return on investment, how we sustain this financially. It’s a good metric to use, for sure. But when I look at the use cases that we’ve already identified: one, it speaks to our government’s decision to make sure that we are delivering better services to our citizens. So whether it’s healthcare, whether it’s making sure that we’re giving quality education to our students in Rwanda, whether it’s making sure that a majority of our population, which is made up of farmers, have access to the right data and extension services that then ensure that they have growth and productivity, which will translate essentially also into them being able to have more income, getting out of poverty, and building wealth for their families.

But a starting point for us has always been: what problem are we trying to solve? And is AI the best way to solve for this? Or is it a combination of AI and many other technologies that can solve for that? We’re a country that has been on a journey of digital transformation for more than 20 years. And so we’ve already started to see the benefit of that. So when I look at the education use cases, we are ranging from being able to facilitate teachers with assessment tools that can help with faster and better assessment.

We’re looking at AI solutions that support better lesson planning. And so if you’re able to have better lesson planning, you’re able to deliver quality education and make sure that it’s similar across the country; I think those are benefits that one can easily quantify. For the health sector, we’re looking at our frontline health workers, or the community health workers delivering primary health care, giving them decision support tools that enable them to have better diagnosis, and at the same time to reduce the burden on the health care system. We’re also looking at AI solutions that help reduce the backlog of referrals in the health care system. Essentially, that’s also going to translate into less wastage, into better care, but also even bringing down the cost of care per person, if you look at it that way.

So for our people, they’re very optimistic. Obviously, like any other country, everyone has to wonder: okay, there’s lots of data that you’re going to be using, and some of it, a lot of it, is going to be personal data. What guardrails are we putting in place? We have the data protection and privacy law that I talked about earlier. But the most important thing for our people is how we are building capacity in-country, so that a lot of these things are not solutions we are acquiring from elsewhere. We also have more than 70% of our population in the youth bracket. It means these are already people that are very excited about technology, and if you train them the right way, they’ll also be part of building these solutions.

And so I think there’s a lot of optimism on what it can do. It doesn’t mean we’re shying away from what the risks are. That’s why we’re doing everything by design, use case by use case, trying to understand, for each use case that we are deploying, what the risks unique to that particular application could be, and how we are addressing them.

Rudra Chaudhry

No, I think that’s fantastic. I think the way you’re thinking about disaggregated risk, rather than just one big banner sticker on top, is perhaps the way we all need to go, as we think about how a use case is risky but also how it’s actually useful and adds value in different ways. So that’s fantastic. Keeping an eye on the clock: Terah, I just want to talk a little bit about deployment and scale. We all love diffusion; we want this stuff out to everybody. How do we get it right when it comes to deployment and scale? Because none of this is going to be easy. It’s going to require some kind of a sustainable financial model, it’s going to require a lot of time and a lot of work, across the board and across borders. So, as someone who works on scale and deployment, give us a viewpoint.

Terah Lyons

Sure. And maybe just a few words on a sense of scale in our context here at JPMorgan Chase. We operate in over 100 countries globally. We spend close to $20 billion a year on technology. And we are investing really, really deeply in AI. So, you know, I think to answer your question, one of the paradigms from which we come to this issue is certainly the unique risk management capabilities of finance, and regulated banks specifically. We’ve been using AI technologies at the use case level for over 10 years, you know, starting first with more traditional analytic techniques, moving into the era of machine learning models, now introducing large language models, looking in the direction of agentic capabilities and beyond. And that underscores one of the points that John raised earlier, which I think is important here.

You know, the sort of risk management posture, and considering what effective governance and controls look like in order to scale in the way that you’re describing, is something we have built muscles to do before. We know how to do this pretty well. And one of the superpowers, I think, that we have is a sector-specific lens on regulation and oversight. I think that also speaks to some of the great points that the minister just made with respect to really evaluating risk at the use case level. It makes this conversation about risk management grounded and practical, in ways that address the real ways in which AI is getting deployed at the level of individual use cases.

And then making rules of the road that are applicable to that specific context. I think that’s really crucial. The other piece of the equation, and this speaks to the point I made at the top about our global operations: we really need regulatory harmonization, to the extent possible, in order to allow for consistency of rules across borders. And I think that there’s been a lot of really, really rich conversation this week at the summit about sovereign AI as a part of the global governance conversation. I think that has its own unique and important goals, and I think it needs to be held in the same sort of space as a realization that we also need to be considering what a global baseline looks like, and what clarity enables for global operators, so that they can really get responsibility at scale right.

Rudra Chaudhry

I’m going to ask you one question before I come back. What would you like to see going ahead? From this summit, the baton has been handed to Switzerland, and from Switzerland, there’s possibly another likely candidate. But what would you like this summit process to do in an institutional setting, perhaps, to keep these conversations going?

Terah Lyons

Well, I think that John’s earlier point about the need for multi-stakeholder diversity is really key. I think that looking across sectors, government, civil society, and industry is deeply important, and making sure all those voices are at the table is critical. I think a sub-point there, from my perspective, is that I would like to see more deployers sitting in seats like this one. We are one of the largest financial institutions in the world, and we use AI in really, really deep ways, as I mentioned before. But I want to see folks from retail, energy, I want to see people from manufacturing, I want to see folks who really represent the real economy sitting on stages like this one next year in Switzerland and speaking to how we deliver real value in the hands of customers and citizens every day using these technologies.

Rudra Chaudhry

And John, very quickly to you, I’m going to ask you a cheeky question. The kind of philanthropy I think that we require now in AI is for MacArthur to be working with a frontier lab that’s working with a local lab that’s deploying. Is that in your imagination?

John Palfrey

Sure, Rudy, thank you. And I think it’s an exciting idea, going from here to Switzerland and imagining what could come next. And what could come next for philanthropy is absolutely an important piece of the story. And if you think about the way in which technology works, it often begets innovation in other sectors. So I think what’s exciting is that the technology itself can inform the way we practice philanthropy in the ways you suggest, but it can also help us figure out how to regulate better. And it turns out, of course, regulation is not just against innovation. In fact, regulation sometimes prompts further innovation, and then this wonderful cycle can continue. So my sort of key point on this would be to say: let’s not have a false binary.

Either you regulate or you innovate. Let’s figure out the way that the regulation and the governance drives innovation. And I think that’s an exciting idea, not just for governments, as the minister said, or for banks. It’s true for philanthropy, too, which can improve its work a little bit along the way, too.

Rudra Chaudhry

No, bang on. And Minister, last word to you. We would love to see the summit hosted in Kigali. From your vantage point, and a lot of this is about South-South cooperation, a lot of it has been about global cooperation, what would you like to see between now and Switzerland? What can we all actively do to make this more palpable by the time we get to Zurich or Geneva or Davos or wherever it is?

Paula Ingabire

I think it’s great that since we started with the Bletchley Park convenings, we’ve moved from safety and governance to impact, execution, and implementation. It would be great to start quantifying what that impact has looked like, and also to create a way where these exchanges are truly happening. And I couldn’t agree more: if we have more of the people who are building and deploying some of these solutions here, we could also have some of the communities that have benefited, positively or negatively, so we can hear their voices. As we go ahead with the large-scale adoption of this technology that is going to happen across the world, taking this conversation into consideration is going to be a very, very important thing. And I think the last one for me is to make sure we have more voices coming from the African continent and elsewhere, so that we can balance between where we are seeing the biggest impact. Is it in emerging economies? Is it in the middle economies or the big ones? And what could be the nuances as we continue to deploy massively? To do that, we need to take this to the African continent sooner rather than later. And we’re happy to host you.

Rudra Chaudhry

There you are. Good offer there. Minister, John, Terah, thank you so much. Thank you for being with us at the Impact Summit. And back to the organizers. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (40)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Rwanda first identifies AI applications that deliver the greatest societal benefit and then crafts regulations specific to those use‑cases, avoiding abstract, one‑size‑fits‑all rules”

The knowledge base describes Rwanda’s regulatory posture as adaptive, building regulations according to specific use cases rather than blanket rules [S28] and highlights a nuanced, use-case-specific governance approach [S119]

Confirmed (high)

“Partnerships are central: foreign firms must co‑develop solutions and train Rwandan staff, creating a “closed loop” between implementation and regulation”

Rwanda’s approach emphasizes co-creation with stakeholders and partners, noting collaborative implementation with external firms and targeted partnerships with countries such as Saudi Arabia, UAE and Singapore [S22] and [S23]

Confirmed (high)

“To safeguard data sovereignty, Rwanda is building a national data hub “by design” and has enacted a data‑protection and privacy law that operates “by design” to guard personal data”

Rwanda enacted a data-protection and privacy law two years ago, aimed at protecting personal data, confirming the existence of such legislation [S127]; the source does not use the exact phrase “by design” but the law’s intent aligns with safeguarding data sovereignty

Confirmed (high)

“A universal AI compact is realistic if it accommodates diverse cultural, linguistic and national contexts while establishing a set of non‑negotiable core standards that can be locally contextualised”

Discussions of the Global Digital Compact stress an inclusive, multi-stakeholder model that sets core standards adaptable to local cultural and linguistic contexts [S99] and highlight the need for reusable frameworks that respect diverse languages and cultures [S20]

Correction (medium)

“More than 70% of Rwanda’s population are in the youth bracket”

The knowledge base provides a statistic that about 70% of Africa’s population is under 30 years old, not specifically Rwanda, so the claim about Rwanda’s youth proportion is not directly supported and may be inaccurate [S125]

External Sources (129)
S1
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — John Tass-Parker, Terah Lyons, Harshil Mathur
S2
The Power of Satellites in Emergency Alerting and Protecting Lives — Alexandre Vallet: Thank you very much Dr. Zavazava. Thank you very much both of you for this introductory remark. I will…
S3
Reinventing Digital Inclusion / DAVOS 2025 — – Paula Ingabire: Minister of Innovation, Technology and Innovation of Rwanda A major theme of the discussion was the l…
S4
AI: Lifting All Boats / DAVOS 2025 — – Paula Ingabire: Minister of Information, Communication Technology and Innovation of Rwanda Paula Ingabire: Maybe Vij…
S5
UNECA Role in the Internet Ecosystem in Africa | IGF 2023 Open Forum #110 — Hon. Paula Ingabire, Minister of Information and Communications Technology (ICT)
S6
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S7
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S8
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S9
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion — The panelists joining us represent some of the most thoughtful voices on how AI is being built and adopted around the wo…
S10
Building Trusted AI at Scale – Keynote Anne Bouverot — -John Palfrey: Representative from the MacArthur Foundation (mentioned by Anne Bouverot but did not speak in this transc…
S11
FOSTERING FREEDOM ONLINE — – Deibert, Ronald, John Palfrey, Rafal Rohozinski and Jonathan Zittrain (eds). April 2010. Access Controlled: The Shapin…
S12
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion — Her Excellency Paula Ingabire is the minister of ICT and innovation for the government of Rwanda. Under her leadership, Rw…
S13
Subrata K. Mitra, Jivanta Schottli, Markus Pauli — An analysis of India’s foreign policy over seven decades will inevitably reveal evidence of both change and continuity i…
S14
Democratizing AI: Open foundations and shared resources for global impact — ## Infrastructure and Support ## Introduction and Switzerland’s Strategic Position Bernard Maissen: Yes, thank you. He…
S15
Opening keynote — It is crucial for all nations, including developing and least-developed countries, to be part of these conversations. Re…
S16
Open Forum #33 Building an International AI Cooperation Ecosystem — ### Regional Perspectives **Additional speakers:** Dai Wei: Distinguished guests, ladies and gentlemen, good day to yo…
S17
What is it about AI that we need to regulate? — Evolving Discourse:The governance narrative has evolved significantly. In theBook Launch session, Jovan Kurbalija traced…
S18
Main Session 2: The governance of artificial intelligence — Should focus on adaptive governance models including regulatory sandboxes for responsible innovation
S19
Setting the Rules_ Global AI Standards for Growth and Governance — Global cooperation, regulatory interplay and local adaptation
S20
Setting the Rules_ Global AI Standards for Growth and Governance — Combine global standards frameworks with local adaptations for specific use cases, languages, and cultural contexts Dev…
S21
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/4/OEWG 2025 — Rwanda: Mr. Chair, as this is our first time addressing this session, please allow us to extend our deepest appreciati…
S22
Fixing Healthcare, Digitally — Building trust in the context of emerging technologies is crucial for their successful implementation. Rwanda acknowledg…
S23
AI: The Great Equaliser? — Partners are chosen based on Rwanda’s needs and the value that these countries can bring Paula Ingabire acknowledges th…
S24
Open Forum #26 High-level review of AI governance from Inter-governmental P — Thelma Quaye: Thank you very much. Good evening, everybody. So I’d like to clarify, Smart Africa is not a multinatio…
S25
Do we really need specialised AI regulation? — The Apex: AI useis where AI’s societal, legal, and ethical consequences come into sharp focus. Whether it’s deepfakes, b…
S26
Part 2.5: AI reinforcement learning vs human governance — Governance structures are designed to maintain order, protect rights, and promote welfare, often requiring consensus and…
S27
Rethinking AI regulation: Are new laws really necessary? — Specialised AI regulation may not be necessary, as existing laws already cover many aspects of AI-related concerns. Jova…
S28
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — Either you regulate or you innovate. Let’s figure out the way that the regulation and the governance drives innovation. …
S29
WS #462 Bridging the Compute Divide a Global Alliance for AI — Elena Estavillo Flores: What topic? Well, governance, we have been talking about governance, but we’re supposing that th…
S30
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S31
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Multi-stakeholder participation must include civil society, technical communities, academia, private sector and marginal…
S32
Building Public Interest AI Catalytic Funding for Equitable Compute Access — India is proving that you can design AI ecosystems that are both globally competitive and globally competitive. And loca…
S33
How to make AI governance fit for purpose? — – **Multi-stakeholder involvement** – All speakers acknowledged the need for collaboration between governments, private …
S34
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:Yes thank you moderator once again let me take the opportunity to greet everyone whatever you are in …
S35
Agenda item 5 : Day 4 Afternoon session — Rwanda:Thank you, Chair. This being the first time that our delegation is taking the floor, Rwanda would like to express…
S36
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — The SADC region’s cross-border financial inclusion project demonstrates this principle, focusing on solving real problem…
S37
Safeguarding Children with Responsible AI — Global standards needed for certain protections while allowing cultural adaptation in implementation
S38
High Level Session 4: Securing Child Safety in the Age of the Algorithms — Establishing global standards while respecting national sovereignty and cultural differences
S39
WS #110 AI Innovation Responsible Development Ethical Imperatives — Find common ground through shared ethical frameworks while allowing for cultural and contextual differences
S40
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou:Okay, thank you. Thank you Michel and this is definitely a tension and maybe a balance at some point between t…
S41
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — This comment elevated the discussion from technical considerations to geopolitical implications, connecting AI infrastru…
S42
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Impact:This comment elevated the discussion from technical considerations to geopolitical implications, connecting AI in…
S43
Collaborative AI Network – Strengthening Skills Research and Innovation — This comment shifted the discussion from technical implementation to governance and trust frameworks. It influenced othe…
S44
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Aubra Anthony: So I was going to say we’re going to be publishing this soon. So hopefully, everyone can see this. But I …
S45
IN CONVERSATION WITH MICHELE JAWANDO — Elected officials and private companies have different responsibilities and constraints. Philanthropy has the freedom to…
S46
WS #283 AI Agents: Ensuring Responsible Deployment — Government Perspectives and Regulatory Approaches Lazanski points out that regulatory frameworks are emerging different…
S47
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Evidence:JPMorgan Chase has been using AI for over a decade across fraud detection, payments, markets, and compliance. T…
S48
WS #98 Towards a global, risk-adaptive AI governance framework — Russo highlights the challenge of moving from high-level AI principles to practical implementation. She emphasizes the n…
S50
WS #283 AI Agents: Ensuring Responsible Deployment — Enhanced education and relationship building between policymakers, private sector, and civil society stakeholders is ess…
S51
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development The speaker argues that public-private partnerships are not optional but …
S52
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S53
A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23 — In conclusion, the analysis highlights the importance of multi-stakeholder engagement in policy processes, with specific…
S54
Defending Our Voice: Global South Participation in Digital Governance — Effectiveness of multi-stakeholder model versus need for multilateral implementation Multi-stakeholder processes need i…
S55
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Mothibi Ramusi: Thank you very much. Good afternoon. I think from our side, I’m just going to use the South African cont…
S56
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — And philanthropy has been historically the source of funding that. And I’m very impressed by the Indian philanthropic en…
S57
Responsible AI for Shared Prosperity — Disagreement level:Very low disagreement level. All speakers aligned on core issues: the need for multilingual AI, the i…
S58
Responsible AI for Shared Prosperity — Very low disagreement level. All speakers aligned on core issues: the need for multilingual AI, the importance of addres…
S59
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion — Minister, let me ask you a question on, you talked about the benefits of AI in Rwanda. Can you open that box up for us a…
S60
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S61
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:So the technical development and deployment of AI is… So here I’m referring to ethical consideratio…
S62
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Anshul Sonak: Yeah, my minute, I mean, this requires a balanced, responsible public-private partnership and a great lead…
S63
WS #462 Bridging the Compute Divide a Global Alliance for AI — Elena emphasizes that sustainable collaborative models need credibility and trust to maintain participation and continue…
S64
Opening Plenary: Working Together for a Human-Centred Digital Future – Parliamentary Cooperation for Democratic Digital Governance — Regulation should focus on use cases rather than technology itself, similar to how medicines are regulated for safety
S65
Conversational AI in low income & resource settings | IGF 2023 — Sameer Pujari:Thank you, Rajendra. And you rightly said all the member states are actually getting very excited about th…
S66
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S67
Building Trustworthy AI Foundations and Practical Pathways — “Frontier risks are risks which are very, very difficult to observe, right?”[59]. “There are social risks which are easi…
S68
Leveraging AI4All_ Pathways to Inclusion — The discussion revealed that many AI products remain stuck in pilot stage due to surrounding system challenges rather th…
S69
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S70
Safeguarding Children with Responsible AI — As Davin concluded with “measured optimism,” the discussion highlighted both tremendous potential and significant risks,…
S71
Skilling and Education in AI — While significant investments are flowing into computational infrastructure, there’s a need for economic models and inve…
S72
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S73
Fixing Healthcare, Digitally — Rwanda used the case of Zipline to develop performance-based regulations This incremental approach not only fosters tru…
S74
WS #180 Protecting Internet data flows in trade policy initiatives — Jennifer Brody: Sure, my pleasure. And thank you so much for having me here today. It’s a real honor and a pleasure. …
S75
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — The SADC region’s cross-border financial inclusion project demonstrates this principle, focusing on solving real problem…
S76
AI: The Great Equaliser? — Partnerships and collaborations with other countries have been pursued intentionally by Rwanda. They have partnered with…
S77
WS #110 AI Innovation Responsible Development Ethical Imperatives — Find common ground through shared ethical frameworks while allowing for cultural and contextual differences
S78
Safeguarding Children with Responsible AI — Global standards needed for certain protections while allowing cultural adaptation in implementation
S79
High Level Session 4: Securing Child Safety in the Age of the Algorithms — Establishing global standards while respecting national sovereignty and cultural differences
S80
Global AI Policy Framework: International Cooperation and Historical Perspectives — Baumann argues for a balanced approach that establishes shared global norms while allowing flexibility for countries to …
S81
WS #98 Towards a global, risk-adaptive AI governance framework — Participants agreed on the need for interoperability between different governance frameworks while allowing for cultural…
S82
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — This opening comment set the foundational framework for the entire discussion, shifting focus from AI capabilities to in…
S83
AI as critical infrastructure for continuity in public services — These key comments fundamentally shifted the discussion from a technical and regulatory focus to a human-centered perspe…
S84
Collaborative AI Network – Strengthening Skills Research and Innovation — This comment shifted the discussion from technical implementation to governance and trust frameworks. It influenced othe…
S85
Building Population-Scale Digital Public Infrastructure for AI — Diffusion spreads know‑how, trust and institutional capability
S86
Democratizing AI Building Trustworthy Systems for Everyone — It’s notable that speakers from different sectors (technical benchmarking, corporate, and development) all converged on …
S87
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — And philanthropy has been historically the source of funding that. And I’m very impressed by the Indian philanthropic en…
S88
How to make AI governance fit for purpose? — – **Multi-stakeholder involvement** – All speakers acknowledged the need for collaboration between governments, private …
S89
IN CONVERSATION WITH MICHELE JAWANDO — Elected officials and private companies have different responsibilities and constraints. Philanthropy has the freedom to…
S90
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Multi-stakeholder participation must include civil society, technical communities, academia, private sector and marginal…
S91
WS #209 Multistakeholder Best Practices: NM, GDC, WSIS & Beyond — Multi-stakeholder model serves to bring civil society voices to the table
S92
WS #283 AI Agents: Ensuring Responsible Deployment — Government Perspectives and Regulatory Approaches Lazanski points out that regulatory frameworks are emerging different…
S93
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Evidence:JPMorgan Chase has been using AI for over a decade across fraud detection, payments, markets, and compliance. T…
S94
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27] “AI may shape the balance of power, but it is the governance or AI t…
S95
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S96
Ad Hoc Consultation: Thursday 1st February, Morning session — In a formal and courteous address, the speaker began by respectfully acknowledging the presiding official, Madam Chair, …
S97
Ad Hoc Consultation: Friday 9th February, Morning session — The speaker begins by extending a respectful acknowledgment to the chairperson, which indicates the formal nature of the…
S98
High-Level Track Facilitators Summary and Certificates — ## Opening Remarks and Summit Achievements
S99
First round of informal consultations with member states, observers and stakeholders (2024) — The speaker began by thanking the organisers for both arranging the meeting and providing guiding questions, which they …
S100
From KW to GW Scaling the Infrastructure of the Global AI Economy — -Moderator- Various moderators conducting different sessions and managing discussions
S101
Global Enterprises Show How to Scale Responsible AI — Excellent point. Error also scales. Good point. For POCs and experience and for some internal use case. So le…
S102
Open Forum #30 High Level Review of AI Governance Including the Discussion — Juha Heikkila: Thank you Yoichi and thank you very much for this invitation. So I think it’s very useful to understand t…
S103
WS #232 Innovative Approaches to Teaching AI Fairness & Governance — Melissa El Feghali: Hi everyone so we’ve seen through the interactive example that Ayaz gave us how we can transform v…
S104
Open Forum #33 Building an International AI Cooperation Ecosystem — Sajid Rahman: Thank you, and good afternoon. You know, it’s a great pleasure to speak about something which is not only …
S105
High-level AI Standards panel — The discussion maintained a consistently collaborative and optimistic tone throughout. It was formal yet enthusiastic, w…
S106
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkab…
S107
Democratizing AI Building Trustworthy Systems for Everyone — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s p…
S108
Knowledge Café: WSIS+20 Consultation: Two Decades of WSIS: Advancing Digital Cooperation Through Action Lines — The discussion maintained a constructive and collaborative tone throughout. It began with technical assessments and evol…
S109
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — The tone of the discussion was generally optimistic and forward-looking, with speakers emphasizing the need for urgent a…
S110
Harnessing digital public goods and fostering digital cooperation: a multi-disciplinary contribution to WSIS+20 review — The discussion maintained a professional, collaborative tone throughout, with speakers building on each other’s points c…
S111
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S112
Leaders TalkX: ICT application to unlock the full potential of digital – Part I — The discussion maintained a consistently professional, collaborative, and solution-oriented tone throughout. Speakers de…
S113
Panel Discussion: 01 — Ladies and gentlemen, I would now like to introduce the speakers for a ministerial conversation. The speakers in this fi…
S114
Keynote-HE Emmanuel Macron — Artificial intelligence Reference to previous address by Antonio Guterres; formal titles and protocol; mention of the A…
S115
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — The AI Impact Summit held in New Delhi brought together ministers and senior officials from multiple countries for discu…
S116
Keynote-Rishi Sunak — Artificial intelligence | International collaboration (captured under Artificial intelligence) The moderator formally w…
S117
https://app.faicon.ai/ai-impact-summit-2026/conversation-01 — Ladies and gentlemen, I would now like to invite on stage speakers for our next remarkable panel discussion. I would lik…
S118
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S119
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Impact:This analogy became a touchstone for the rest of the discussion, with other panelists referencing similar concept…
S120
Driving Indias AI Future Growth Innovation and Impact — Right. Well, thank you, gentlemen, for that absolutely incredible conversation. You know, the takeaway is clear that inv…
S121
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-levelforumat the IGF 2024 in Riyadhthat brought together leaders from gover…
S122
Multistakeholder Dialogue on National Digital Health Transformation — Sean Blaschke: Thanks, Leah. I’m going to try to apply the same architecture framework to legislation, policy, complia…
S123
Thinking Big on Digital Inclusion — To ensure access, usability, and affordability for all citizens, Rwanda has prioritised the establishment of a national …
S124
Stronger digital voices from Africa — and strategies – -ICT Hub Strategy – -Broadband Policy – -ICT Sector Strategic Plan – -Smart Rwanda Master Plan – -Nati…
S125
African Priorities for the Global Digital Compact: A Comprehensive Discussion Report — Cited that 70% of Africa’s population is under 30 years old, and referenced the need for infrastructure in rural areas w…
S126
WSIS Action Line C7 E-environment — Extended Producer Responsibility (EPR) schemes face common implementation challenges including unclear policy frameworks…
S127
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — Policies play a crucial role in creating a conducive data ecosystem. Rwanda’s implementation of a data protection and pr…
S128
Regional perspectives on digital governance | IGF 2023 Open Forum #138 — Luis Barbosa:Yeah. I’m thinking again about what Nibal was saying. I think there is a path that international organizati…
S129
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — ## Challenges and Unresolved Issues ## Key Agreements and Consensus – **Mattie Yeta** – CGI (UK), presented via video …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument · 142 words per minute · 396 words · 167 seconds
Argument 1
Opening framing: AI for all and global cooperation – Emphasizing the importance of every country’s role in AI development (Speaker 1)
EXPLANATION
Speaker 1 opened the session by thanking the host and highlighting that AI for all requires the active participation of every nation. He framed global cooperation as essential for equitable AI development and deployment.
EVIDENCE
He thanked the host and noted that listening to perspectives from countries like Sweden shows how each country’s role is crucial when discussing AI for all and global cooperation [1-2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for universal participation in AI governance is highlighted in the opening keynote that stresses inclusion of developing and least-developed countries [S15] and in discussions about building an international AI cooperation ecosystem [S14][S30].
MAJOR DISCUSSION POINT
Global cooperation in AI
Paula Ingabire
5 arguments · 188 words per minute · 1412 words · 449 seconds
Argument 1
Adaptive, use‑case‑driven AI regulation and governance – Advocacy for regulation that evolves from concrete use‑cases rather than abstract rules (Paula Ingabire)
EXPLANATION
Paula argued that Rwanda prefers an adaptive regulatory posture that is built after concrete AI use‑cases are piloted, rather than imposing abstract, one‑size‑fits‑all rules. Regulations are then tailored to the specific problems being solved.
EVIDENCE
She explained that Rwanda focuses on identifying where AI creates the biggest societal benefits first, then builds regulations specific to those use-cases, resulting in an evidence-based, adaptive framework [40-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for adaptive governance models and regulatory sandboxes aligns with the panel on adaptive AI governance [S18]; the recommendation to combine global standards with local, use-case adaptations reinforces this approach [S19][S20]; Paula’s own remarks in the Davos panel echo the same theme [S4].
MAJOR DISCUSSION POINT
Use‑case‑driven regulation
DISAGREED WITH
Terah Lyons
Argument 2
Adaptive, use‑case‑driven AI regulation and governance – Call for a global AI compact with shared non‑negotiable standards, adaptable to local contexts (Paula Ingabire)
EXPLANATION
She expressed confidence that a global AI compact is feasible, provided it incorporates universal non‑negotiable standards while allowing contextual adaptation for different cultural and linguistic settings. The compact would set shared baselines that nations can tailor to their specific challenges.
EVIDENCE
She stated that a global compact must reflect diverse contexts and that shared non-negotiable standards should be combined with local problem-specific adaptations [61-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proposals for a global AI compact that blends universal, non-negotiable standards with contextual adaptation are discussed in the Global AI Standards for Growth and Governance documents [S19][S20]; multi-stakeholder governance frameworks that stress such shared baselines are also referenced [S31].
MAJOR DISCUSSION POINT
Global AI compact
Argument 3
Partnerships, capacity building, and data sovereignty – Rwanda’s partnership model prioritises co‑development and skill transfer rather than simple technology import (Paula Ingabire)
EXPLANATION
Paula highlighted that Rwanda avoids merely buying foreign AI solutions; instead, it insists on partnerships that train Rwandan staff and co‑develop technologies, ensuring local capacity and ownership.
EVIDENCE
She gave the example that Rwanda does not simply acquire a foreign solution and let the vendor train on its data; instead, partners must train Rwandan staff and co-develop solutions to build in-country skill sets [48-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rwanda’s partner-selection strategy based on needs and value-addition is described in the “Great Equaliser” briefing [S23]; her statements at Davos further illustrate the emphasis on co-development and capacity building [S4].
MAJOR DISCUSSION POINT
Co‑development partnerships
Argument 4
Partnerships, capacity building, and data sovereignty – Development of a national data hub and a data‑protection/privacy law “by design” to safeguard sovereignty (Paula Ingabire)
EXPLANATION
She described Rwanda’s proactive steps to protect data sovereignty by building a national data hub and enacting a data protection and privacy law before any crisis arises, embedding safeguards into system design.
EVIDENCE
She noted that Rwanda is building a national data hub and has already put in place a data protection and privacy law that governs collection, use, and processing of data, forming the foundation for data-sovereignty-by-design [51-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rwanda’s focus on trust, a national data hub and a by-design data-protection law is outlined in the digital health trust discussion [S22]; the broader partnership narrative also highlights data-sovereignty considerations [S23].
MAJOR DISCUSSION POINT
Data sovereignty by design
Argument 5
Diffusion, adoption, scaling, and sustainable financial models – Rwanda’s AI use‑cases in health, education, and agriculture deliver tangible societal benefits that go beyond monetary ROI (Paula Ingabire)
EXPLANATION
Paula enumerated concrete AI applications in Rwanda’s health, education, and agriculture sectors, explaining how they improve service quality, reduce costs, and boost productivity, thereby generating social value beyond simple financial returns.
EVIDENCE
She cited AI-enabled teacher assessment tools, lesson-planning support, decision-support for community health workers, and data services for farmers that improve diagnosis, reduce waste, lower health-care costs, and increase agricultural productivity, all of which are observable benefits for citizens [133-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concrete AI applications in health, education and agriculture that improve service quality and productivity are documented in the Davos panel transcript [S4] and reinforced by the trust-building health case study [S22]; the “Great Equaliser” overview also lists these sectoral use-cases [S23].
MAJOR DISCUSSION POINT
AI use‑cases delivering social value
DISAGREED WITH
Rudra Chaudhry, John Palfrey
John Palfrey
5 arguments · 240 words per minute · 844 words · 210 seconds
Argument 1
Adaptive, use‑case‑driven AI regulation and governance – Assertion that AI must be regulated to serve humans, not treated as a magical, self‑justifying technology (John Palfrey)
EXPLANATION
John emphasized that AI should be governed like any other technology, with regulations that keep humans at the centre rather than treating AI as a mysterious, autonomous force. He linked this view to the broader goal of using AI to lift people out of poverty and improve health care.
EVIDENCE
He argued that AI must be regulated to serve humans, warning against treating it as magical, and stressed that stable regulatory regimes are needed for AI to support poverty alleviation, health care, and banking [95-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The evolution of AI governance from early “magic” narratives to human-centred regulation is traced in the discourse analysis [S17]; calls for regulation that keeps humans at the centre appear in the specialised AI regulation debate [S25] and in the human-governance perspective [S26]; the regulation-vs-innovation panel further stresses this point [S28].
MAJOR DISCUSSION POINT
Human‑centred AI regulation
Argument 2
Role of philanthropy and civil society in AI governance – Philanthropy should underwrite civil‑society voices so they can influence AI policy and practice (John Palfrey)
EXPLANATION
John stated that civil society cannot participate in AI governance without financial support, and philanthropy has historically filled that gap. He called for continued philanthropic backing to ensure diverse voices are heard.
EVIDENCE
He noted that civil society does not come for free, that someone must pay for it, and that philanthropy has historically provided that funding [104-107].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multi-stakeholder participation that includes civil society is advocated in the Open Forum on global AI governance [S31]; the Rockefeller Foundation’s description of philanthropy as catalytic for equitable AI access underscores the need for funding civil-society actors [S32].
MAJOR DISCUSSION POINT
Funding civil‑society participation
DISAGREED WITH
Rudra Chaudhry, Paula Ingabire
Argument 3
Role of philanthropy and civil society in AI governance – Philanthropic funding must sustain civil‑society participation to ensure inclusive, long‑term AI adoption (John Palfrey)
EXPLANATION
John reiterated the need for philanthropic capital to keep civil‑society engaged in AI debates, arguing that without such support the sector would lose critical perspectives needed for responsible AI development.
EVIDENCE
He highlighted that civil society needs funding, that philanthropy has historically provided it, and that continued investment is essential for inclusive AI outcomes [104-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same multi-stakeholder calls and philanthropic catalytic funding narratives provide evidence for sustained civil-society support [S31][S32].
MAJOR DISCUSSION POINT
Sustaining civil‑society through philanthropy
Argument 4
Role of philanthropy and civil society in AI governance – Commitment of over a billion dollars by philanthropic initiatives to steer AI toward humanity‑focused outcomes (John Palfrey)
EXPLANATION
John disclosed that philanthropic initiatives led by his foundation and partners have mobilised more than a billion dollars to fund AI projects aimed at benefiting humanity, underscoring the scale of private‑sector commitment.
EVIDENCE
He reported that his colleagues have raised half a billion dollars for Humanity AI in the US and a similar amount for a global AI effort, totaling over a billion dollars in commitments [120-121].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Large-scale philanthropic investment in AI for public interest is highlighted in the discussion of catalytic funding for equitable compute access, which references substantial philanthropic commitments [S32].
MAJOR DISCUSSION POINT
Scale of philanthropic AI funding
Argument 5
Future summit direction: multi‑stakeholder inclusion and South‑South cooperation – Recommendation to avoid a false binary between regulation and innovation; instead, let governance stimulate further innovation (John Palfrey)
EXPLANATION
John cautioned against viewing regulation and innovation as opposing forces, arguing that thoughtful governance can actually spur further technological breakthroughs. He called for a collaborative approach where regulation and innovation reinforce each other.
EVIDENCE
He argued that regulation should not be seen as opposed to innovation; instead, it can drive innovation, creating a virtuous cycle for both governments and philanthropy [195-198].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel on regulatory sandboxes and responsible innovation argues that regulation can drive innovation [S18]; the “Either you regulate or you innovate” discussion directly addresses this synergy [S28]; broader multi-stakeholder governance frameworks also promote this view [S33].
MAJOR DISCUSSION POINT
Synergy between regulation and innovation
Terah Lyons
4 arguments · 165 words per minute · 1020 words · 369 seconds
Argument 1
Adaptive, use‑case‑driven AI regulation and governance – Observation that many of today’s policy questions (fairness, transparency, standards) were already raised during the Obama era (Terah Lyons)
EXPLANATION
Terah observed that the policy concerns being debated now—fairness, transparency, standards, bias mitigation—were already on the agenda during the Obama administration, indicating continuity in AI governance challenges.
EVIDENCE
She explained that the Obama era was the first time global governments considered AI policy, raising questions about standards, fairness, transparency, bias mitigation, and localization, many of which persist today [71-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The historical analysis of AI policy evolution notes that fairness, transparency and standards were first raised during the Obama administration [S17].
MAJOR DISCUSSION POINT
Historical continuity of AI policy questions
Argument 2
Diffusion, adoption, scaling, and sustainable financial models – The toughest challenges are human and institutional, requiring trust and responsible scaling rather than purely technical solutions (Terah Lyons)
EXPLANATION
Terah argued that the most difficult problems in AI are not technical but revolve around human and institutional issues such as building trust, responsible scaling, and governance, which are essential for widespread adoption.
EVIDENCE
She stated that the hardest questions are about human and institutional issues, emphasizing the need for responsible scaling, trust, and governance to make AI useful in everyday life [82-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building trust in digital health and the importance of transparent, responsible scaling are discussed in the Rwanda healthcare trust brief [S22]; multi-stakeholder governance that emphasizes human and institutional factors is highlighted in the global AI governance forum [S31][S33].
MAJOR DISCUSSION POINT
Human‑centric challenges to AI diffusion
Argument 3
Diffusion, adoption, scaling, and sustainable financial models – Financial sector’s risk‑management expertise and the need for regulatory harmonisation to enable large‑scale AI deployment (Terah Lyons)
EXPLANATION
Terah described how JPMorgan’s extensive risk‑management experience and its global footprint inform a pragmatic approach to AI deployment, stressing the importance of regulatory harmonisation across borders to support scalable AI.
EVIDENCE
She noted JPMorgan operates in over 100 countries, spends about $20 billion on technology, and leverages risk-management expertise to build governance controls, while calling for regulatory harmonisation to enable consistent AI deployment worldwide [160-174].
MAJOR DISCUSSION POINT
Risk‑management and regulatory harmonisation
Argument 4
Future summit direction: multi‑stakeholder inclusion and South‑South cooperation – Call for broader representation of deployers from retail, energy, manufacturing, and other real‑economy sectors at the next summit (Terah Lyons)
EXPLANATION
Terah urged that future summit panels include AI deployers from sectors such as retail, energy, and manufacturing, so that the perspectives of the real economy are reflected alongside finance and government.
EVIDENCE
She expressed a desire to see more deployers from retail, energy, manufacturing, and other real-economy sectors speaking at the next summit in Switzerland [179-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for inclusive, multi-stakeholder participation that brings in real-economy actors are made in the Open Forum on global AI governance and the multi-stakeholder fit-for-purpose discussion [S31][S33].
MAJOR DISCUSSION POINT
Inclusive representation of real‑economy deployers
Rudra Chaudhry
1 argument · 197 words per minute · 1177 words · 357 seconds
Argument 1
Adaptive, use‑case‑driven AI regulation and governance – Highlighting the tension between policy formulation and rapid adoption of AI (Rudra Chaudhry)
EXPLANATION
Rudra pointed out the inherent tension between developing AI policy and the fast‑paced adoption of AI technologies, questioning whether the current separation of policy and adoption truly reflects reality.
EVIDENCE
He framed the panel as being about policy on one side and adoption on the other, then asked whether that division is accurate, highlighting the tension between the two domains [22-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between fast AI adoption and policy development is addressed in the adaptive governance and regulatory sandbox session [S18]; the regulation-vs-innovation debate also foregrounds this policy-adoption gap [S28].
MAJOR DISCUSSION POINT
Policy vs. adoption tension
Agreements
Agreement Points
A global AI compact is feasible if it combines shared non‑negotiable standards with contextual adaptation.
Speakers: Paula Ingabire, Rudra Chaudhry
Adaptive, use‑case‑driven AI regulation and governance – Call for a global AI compact with shared non‑negotiable standards, adaptable to local contexts
Rudra asked whether a global compact on AI is possible [59-60] and Paula responded that it is, provided it reflects diverse cultural and linguistic contexts and includes shared standards that can be contextualised [61-66].
POLICY CONTEXT (KNOWLEDGE BASE)
The concept mirrors calls for a global AI compact that blends core non-negotiable standards with flexibility for local contexts, as highlighted in discussions on digital public infrastructure and policy harmonisation [S49] and the International AI Cooperation Ecosystem roadmap [S51].
Regulation should be adaptive, evidence‑based and centred on human needs rather than treating AI as a magical technology.
Speakers: Paula Ingabire, John Palfrey
Adaptive, use‑case‑driven AI regulation and governance – Advocacy for regulation that evolves from concrete use‑cases rather than abstract rules
Adaptive, use‑case‑driven AI regulation and governance – Assertion that AI must be regulated to serve humans, not treated as a magical, self‑justifying technology
Paula explained that Rwanda builds regulations after piloting concrete use-cases, making the framework evidence-based and specific [40-44]. John stressed that AI must be governed like any other technology, keeping humans at the centre and avoiding a “magical” view of AI [95-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Emphasizes adaptive, evidence-based regulation, echoing the evidence-based AI policy roadmap that stresses capacity-building and research integration [S52] and the recommendation to regulate AI by use-case rather than technology alone [S64].
Partnerships that include capacity‑building and co‑development are essential for effective AI deployment.
Speakers: Paula Ingabire, John Palfrey
Partnerships, capacity building, and data sovereignty – Rwanda’s partnership model prioritises co‑development and skill transfer rather than simple technology import
Role of philanthropy and civil society in AI governance – Philanthropy should underwrite civil‑society voices so they can influence AI policy and practice
Paula said Rwanda requires partners to train local staff and co-develop solutions rather than just delivering a finished product [48-50]. John argued that civil-society participation depends on philanthropic funding and that philanthropy must support these partnerships [104-108].
POLICY CONTEXT (KNOWLEDGE BASE)
Stresses public-private partnerships and capacity-building, aligning with calls for education and relationship building among policymakers, industry and civil society [S50] and examples of capacity-building tools for population-scale impact [S62].
The toughest challenges to AI diffusion are human and institutional, requiring trust, responsible scaling and use‑case‑specific risk management.
Speakers: Paula Ingabire, Terah Lyons
Adaptive, use‑case‑driven AI regulation and governance – Advocacy for regulation that evolves from concrete use‑cases rather than abstract rules
Diffusion, adoption, scaling, and sustainable financial models – The toughest challenges are human and institutional, requiring trust and responsible scaling rather than purely technical solutions
Paula highlighted that Rwanda evaluates risks per use-case and builds regulations accordingly, emphasizing capacity and trust [40-44][129-132]. Terah stated that the hardest questions are about human and institutional issues such as building trust and responsible scaling [82-88].
POLICY CONTEXT (KNOWLEDGE BASE)
Highlights human and institutional barriers, consistent with concerns about ethical risks, trust and institutional support raised in AI governance forums [S61] and identified as key barriers to scaling AI in policy analyses [S69].
Regulatory harmonisation across borders is needed to enable large‑scale, use‑case‑driven AI deployment.
Speakers: Paula Ingabire, Terah Lyons
Adaptive, use‑case‑driven AI regulation and governance – Advocacy for regulation that evolves from concrete use‑cases rather than abstract rules
Diffusion, adoption, scaling, and sustainable financial models – Financial sector’s risk‑management expertise and the need for regulatory harmonisation to enable large‑scale AI deployment
Paula described Rwanda’s adaptive, use-case-specific regulatory approach [40-44]. Terah argued that, given JPMorgan’s global footprint, regulatory harmonisation is crucial for consistent AI deployment worldwide [172-174].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for cross-border regulatory harmonisation, reflecting the emphasis on policy harmonisation for digital cooperation and AI governance [S49] and the identified need to overcome policy fragmentation to enable large-scale AI deployment [S69].
Future summit processes should ensure multi‑stakeholder inclusion, especially South‑South cooperation and representation of real‑economy deployers.
Speakers: John Palfrey, Terah Lyons, Paula Ingabire
Future summit direction: multi‑stakeholder inclusion and South‑South cooperation – Recommendation to avoid a false binary between regulation and innovation; instead, let governance stimulate further innovation
Future summit direction: multi‑stakeholder inclusion and South‑South cooperation – Call for broader representation of deployers from retail, energy, manufacturing, and other real‑economy sectors at the next summit
Diffusion, adoption, scaling, and sustainable financial models – Rwanda’s AI use‑cases in health, education, and agriculture deliver tangible societal benefits that go beyond monetary ROI
John urged that regulation and innovation should be synergistic and that multi-stakeholder participation is vital [195-198]. Terah called for more deployers from sectors like retail and energy at the next summit [179-183]. Paula emphasized the need for African voices and broader participation to assess impact [205-212].
POLICY CONTEXT (KNOWLEDGE BASE)
Advocates multi-stakeholder and South-South cooperation, resonating with analyses of multi-stakeholder processes and the need for improved Global South participation in digital governance [S53][S54] and the emphasis on inclusive summit design [S55].
Sustainable financial models and clear value for citizens are needed for AI diffusion, beyond pure monetary ROI.
Speakers: Rudra Chaudhry, Paula Ingabire
Diffusion, adoption, scaling, and sustainable financial models – Rwanda’s AI use‑cases in health, education, and agriculture deliver tangible societal benefits that go beyond monetary ROI
Rudra asked whether AI deployments run on OPEX or revenue models and whether they are financially sustainable [124-128]. Paula replied that value is not only monetary, citing health, education and agriculture benefits that improve services and livelihoods, while acknowledging the importance of financial sustainability [129-146].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for sustainable financing beyond ROI, echoing discussions on philanthropic funding models and benefit-sharing mechanisms for AI projects [S56][S63] and the call for economic models that prioritize social value [S71].
Similar Viewpoints
Both stress that AI regulation should be grounded in concrete use‑cases and keep humans at the centre, avoiding abstract or mystical approaches [40-44][95-99].
Speakers: Paula Ingabire, John Palfrey
Adaptive, use‑case‑driven AI regulation and governance – Advocacy for regulation that evolves from concrete use‑cases rather than abstract rules
Adaptive, use‑case‑driven AI regulation and governance – Assertion that AI must be regulated to serve humans, not treated as a magical, self‑justifying technology
Both identify human‑centric and institutional issues (trust, capacity, risk management) as the primary barriers to AI diffusion, rather than purely technical challenges [40-44][82-88].
Speakers: Paula Ingabire, Terah Lyons
Adaptive, use‑case‑driven AI regulation and governance – Advocacy for regulation that evolves from concrete use‑cases rather than abstract rules
Diffusion, adoption, scaling, and sustainable financial models – The toughest challenges are human and institutional, requiring trust and responsible scaling rather than purely technical solutions
Both argue that regulation can be an engine for innovation and that coordinated, harmonised regulatory frameworks are needed for scalable AI deployment [195-198][172-174].
Speakers: John Palfrey, Terah Lyons
Future summit direction: multi‑stakeholder inclusion and South‑South cooperation – Recommendation to avoid a false binary between regulation and innovation; instead, let governance stimulate further innovation
Diffusion, adoption, scaling, and sustainable financial models – Financial sector’s risk‑management expertise and the need for regulatory harmonisation to enable large‑scale AI deployment
Both see a global AI compact as achievable, provided it balances universal standards with contextual flexibility [59-60][61-66].
Speakers: Paula Ingabire, Rudra Chaudhry
Adaptive, use‑case‑driven AI regulation and governance – Call for a global AI compact with shared non‑negotiable standards, adaptable to local contexts
Both highlight the necessity of partnerships—whether philanthropic support for civil society or co‑development agreements—to build local capacity and ensure inclusive AI governance [104-108][48-50].
Speakers: John Palfrey, Paula Ingabire
Role of philanthropy and civil society in AI governance – Philanthropy should underwrite civil‑society voices so they can influence AI policy and practice
Partnerships, capacity building, and data sovereignty – Rwanda’s partnership model prioritises co‑development and skill transfer rather than simple technology import
Unexpected Consensus
Regulatory harmonisation across sectors and borders is essential for scaling AI, despite speakers coming from very different domains (government vs. global finance).
Speakers: Paula Ingabire, Terah Lyons
Adaptive, use‑case‑driven AI regulation and governance – Advocacy for regulation that evolves from concrete use‑cases rather than abstract rules
Diffusion, adoption, scaling, and sustainable financial models – Financial sector’s risk‑management expertise and the need for regulatory harmonisation to enable large‑scale AI deployment
Paula, representing a national government, emphasises adaptive, use-case-specific regulation, while Terah, from a major financial institution, stresses the need for cross-border regulatory harmonisation. Their convergence on the importance of coordinated regulation for scaling AI was not anticipated given their distinct institutional perspectives [40-44][172-174].
POLICY CONTEXT (KNOWLEDGE BASE)
Reiterates need for sectoral and cross-border harmonisation, supported by policy-harmonisation literature for digital infrastructure [S49] and the broader framework identifying regulatory alignment as a prerequisite for AI scaling [S69].
Both the moderator and a philanthropy leader agree that AI diffusion must be underpinned by sustainable financing models that go beyond simple ROI calculations.
Speakers: Rudra Chaudhry, John Palfrey
Future summit direction: multi‑stakeholder inclusion and South‑South cooperation – Recommendation to avoid a false binary between regulation and innovation; instead, let governance stimulate further innovation
Diffusion, adoption, scaling, and sustainable financial models – Rwanda’s AI use‑cases in health, education, and agriculture deliver tangible societal benefits that go beyond monetary ROI
Rudra’s query about OPEX/revenue models for AI deployments [124-128] and John’s emphasis that philanthropy must provide long-term, stable funding to support civil-society and innovation [195-198] reveal a shared belief that financial sustainability is a prerequisite for responsible AI diffusion, an alignment not explicitly foreseen at the start of the summit.
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus on sustainable financing aligns with observations that philanthropy and blended financing are crucial for AI diffusion, as noted in philanthropy-focused panels [S56] and collaborative financing models for inclusive AI [S63].
Overall Assessment

The panel displayed strong convergence on several fronts: the feasibility of a global AI compact with adaptable standards; the need for adaptive, use‑case‑driven, human‑centred regulation; the centrality of partnerships and capacity‑building (including philanthropic support for civil society); the primacy of human and institutional challenges such as trust and risk management; the requirement for regulatory harmonisation to enable large‑scale deployment; and the importance of multi‑stakeholder, South‑South cooperation for future summit cycles. These agreements cut across AI governance, the enabling environment for digital development, capacity development, and financial mechanisms.

High consensus – most speakers aligned on the principles of adaptive regulation, partnership‑driven capacity building, and the need for coordinated, inclusive frameworks. This broad agreement suggests that forthcoming policy initiatives and summit outcomes are likely to focus on building flexible regulatory standards, fostering public‑private‑civil‑society partnerships, and developing sustainable financing models, thereby advancing AI diffusion in a responsible and equitable manner.

Differences
Different Viewpoints
Approach to AI regulation – adaptive, use‑case‑driven versus harmonised, baseline rules
Speakers: Paula Ingabire, Terah Lyons
Adaptive, use‑case‑driven AI regulation and governance – Advocacy for regulation that evolves from concrete use‑cases rather than abstract rules (Paula Ingabire)
Financial sector’s risk‑management expertise and the need for regulatory harmonisation to enable large‑scale AI deployment (Terah Lyons)
Paula explains that Rwanda builds regulations after piloting specific AI use-cases, tailoring rules to the problems being solved and avoiding abstract frameworks [40-44]. Terah argues that scaling AI globally requires sector-specific risk management and regulatory harmonisation across borders to provide consistent rules for multinational operators [172-174]. This reflects a disagreement on whether regulation should be highly localized and adaptive or standardized internationally.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between adaptive, use-case-driven regulation and baseline harmonised rules reflects the debate captured in recommendations to focus regulation on specific harms rather than blanket legislation [S66] and the use-case-centric approach advocated for AI safety [S64].
Financing models for AI diffusion – social‑value focus vs OPEX/revenue model vs philanthropic funding
Speakers: Rudra Chaudhry, Paula Ingabire, John Palfrey
Is a global compact on something like AI actually possible? Or are there norms that we should generally be thinking about and fitting into our national jurisdictions? (Rudra Chaudhry)
Diffusion, adoption, scaling, and sustainable financial models – Rwanda’s AI use‑cases in health, education, and agriculture deliver tangible societal benefits that go beyond monetary ROI (Paula Ingabire)
Role of philanthropy and civil society in AI governance – Philanthropy should underwrite civil‑society voices so they can influence AI policy and practice (John Palfrey)
Rudra asks for a concrete OPEX or revenue model to sustain AI deployments [124-127]. Paula responds that AI’s value is primarily social and not purely monetary, though financial sustainability is a useful metric [129-132]. John stresses that philanthropic capital is essential to fund civil-society participation and notes that over a billion dollars have already been mobilised for humanity-focused AI initiatives [104-108][120-121]. All agree on the need for sustainable diffusion but disagree on the primary financing mechanism (market-based OPEX vs social-value-driven public-sector/philanthropic funding).
Unexpected Differences
National data sovereignty versus cross‑border regulatory harmonisation
Speakers: Paula Ingabire, Terah Lyons
Partnerships, capacity building, and data sovereignty – Development of a national data hub and a data‑protection/privacy law “by design” to safeguard sovereignty (Paula Ingabire)
Financial sector’s risk‑management expertise and the need for regulatory harmonisation to enable large‑scale AI deployment (Terah Lyons)
Paula stresses Rwanda’s proactive creation of a national data hub and a by-design data protection law to ensure data sovereignty and guardrails [51-54]. Terah, by contrast, advocates for regulatory harmonisation across countries to provide consistent rules for multinational AI operators [172-174]. The tension between protecting national data assets and pursuing international regulatory alignment was not anticipated given the shared emphasis on governance, making this an unexpected disagreement.
POLICY CONTEXT (KNOWLEDGE BASE)
The clash between national data sovereignty and cross-border regulatory harmonisation is addressed in policy-harmonisation discussions for digital public infrastructure, which balance sovereignty concerns with the benefits of coordinated standards [S49].
Overall Assessment

The discussion revealed moderate disagreement centered on how AI regulation should be structured and financed. While participants agree on the importance of AI diffusion and responsible governance, they diverge on whether regulation should be highly adaptive and country‑specific or harmonised globally, and on whether sustainable financing should rely on market‑based OPEX models, social‑value metrics, or large‑scale philanthropic funding. An unexpected clash emerged between Rwanda’s data‑sovereignty approach and calls for cross‑border regulatory harmonisation.

Moderate – The disagreements are substantive regarding regulatory design and financing mechanisms but do not undermine the shared commitment to responsible AI adoption. These differences imply that future summit outcomes will need to balance localized adaptive policies with broader harmonisation efforts and devise blended financing models that combine public, private, and philanthropic resources.

Partial Agreements
All speakers share the overarching goal of responsible AI diffusion and adoption, but they propose different pathways: Paula emphasizes adaptive, use‑case‑specific regulation; Terah calls for sector‑wide risk management and regulatory harmonisation; John highlights philanthropic funding to empower civil‑society; Rudra questions the feasibility of a global compact and seeks concrete normative frameworks. Their convergence on the need for responsible AI is evident, while their preferred mechanisms diverge [40-44][172-174][104-108][59-60].
Speakers: Paula Ingabire, Terah Lyons, John Palfrey, Rudra Chaudhry
Adaptive, use‑case‑driven AI regulation and governance – Advocacy for regulation that evolves from concrete use‑cases rather than abstract rules (Paula Ingabire)
Financial sector’s risk‑management expertise and the need for regulatory harmonisation to enable large‑scale AI deployment (Terah Lyons)
Role of philanthropy and civil society in AI governance – Philanthropy should underwrite civil‑society voices so they can influence AI policy and practice (John Palfrey)
Is a global compact on something like AI actually possible? Or are there norms that we should generally be thinking about and fitting into our national jurisdictions? (Rudra Chaudhry)
Takeaways
Key takeaways
AI for all requires active participation from every country, with a focus on equitable, human‑centered outcomes.
Regulation should be adaptive and driven by concrete use‑cases rather than abstract, one‑size‑fits‑all rules.
A global AI compact is feasible if it defines non‑negotiable core standards while allowing contextual adaptation.
Partnerships that include co‑development and skill transfer are essential for building national capacity and preserving data sovereignty.
Rwanda is building a national data hub and data‑protection/privacy law “by design” to safeguard sovereignty and enable AI deployment.
The most pressing challenges are institutional and human – building trust, responsible scaling, and sustainable financing – rather than purely technical hurdles.
Philanthropic funding must underwrite civil‑society participation to ensure inclusive governance; over $1 billion has already been pledged for humanity‑focused AI initiatives.
Financial institutions bring mature risk‑management practices and call for regulatory harmonisation to support cross‑border AI scaling.
Future summit processes should broaden multi‑stakeholder representation, especially from real‑economy sectors and the Global South, and consider hosting the next meeting in Kigali.
Resolutions and action items
Rwanda offered to host the next summit (potentially in Kigali) to promote South‑South cooperation and amplify African voices.
Participants called for the inclusion of more deployers from retail, energy, manufacturing, and other real‑economy sectors in future summit panels.
Commitment to continue developing adaptive, use‑case‑specific regulatory frameworks and to align them with emerging global baseline standards.
Philanthropic organisations (e.g., MacArthur Foundation) will continue to fund civil‑society groups and explore collaborations with frontier labs for AI deployment.
Unresolved issues
How a global AI compact can be operationalised across diverse cultural, linguistic, and regulatory contexts.
Specific sustainable financial models (OPEX/revenue streams) for large‑scale AI deployments in public services.
Concrete mechanisms for achieving regulatory harmonisation across jurisdictions while respecting data sovereignty.
Metrics and methodologies for quantifying AI impact and linking it to economic returns in emerging economies.
Details on how to balance rapid diffusion of AI with the need for robust, trust‑building governance.
Suggested compromises
Adopt a hybrid regulatory approach: establish core, non‑negotiable global standards, then allow nations to tailor rules to specific use‑cases.
Avoid a false binary between regulation and innovation; instead, use governance frameworks to stimulate further innovation.
Implement risk assessment at the individual use‑case level rather than imposing blanket restrictions, enabling targeted safeguards.
Prioritise partnership models that combine foreign technology with local capacity building, ensuring long‑term ownership and control.
Thought Provoking Comments
Rather than focus on abstract regulation, we first identify where AI creates the biggest societal benefits and then build regulations that are specific to those use‑cases. Our regulatory posture is adaptive and evidence‑based because we are already deploying solutions.
Highlights a pragmatic, use‑case‑driven regulatory model that flips the traditional top‑down approach, emphasizing flexibility and learning from deployment.
Set the foundation for Rwanda’s approach, prompting the panel to discuss global standards versus national adaptation and influencing later comments on data sovereignty and the feasibility of a global AI compact.
Speaker: Paula Ingabire (Minister of ICT and Innovation, Rwanda)
Is a global compact on AI actually possible, or should we think in terms of norms that fit within national jurisdictions?
Directly challenges the assumption that a universal agreement is feasible, opening a debate on the tension between global governance and national sovereignty.
Triggered Paula Ingabire’s response about contextualized global standards, steered the conversation toward the balance between shared norms and local adaptation, and set up later discussions on regulatory harmonization.
Speaker: Rudra Chaudhry (Moderator)
The hardest questions in AI right now are not technical; they are human and institutional – making the technology useful to real organizations, building trust, and scaling responsibly.
Shifts focus from technical breakthroughs to societal and organizational challenges, emphasizing that adoption hinges on trust and practical utility.
Reframed the panel’s perspective, leading others (e.g., John Palfrey) to stress human‑centered design and prompting deeper discussion on governance, trust, and the role of civil society.
Speaker: Terah Lyons (Managing Director, JPMorgan Chase)
We need to make AI work for humans and put humans at the center, not treat AI as a magical, separate entity. A stable regulatory regime is essential for that.
Reinforces a human‑centric philosophy and connects ethical considerations with concrete regulatory needs, bridging philanthropy and policy.
Echoed and amplified the human‑centered theme, influencing subsequent dialogue on the role of philanthropy in supporting civil‑society voices and responsible innovation.
Speaker: John Palfrey (President, MacArthur Foundation)
Philanthropy must ensure civil society has a voice; we have already mobilised over a billion dollars for humanity‑focused AI initiatives, but without funding civil society the conversation loses essential perspectives.
Highlights the strategic importance of funding non‑profit actors to balance private‑sector dominance, positioning philanthropy as a catalyst for inclusive AI governance.
Prompted recognition from other panelists of the need for multi‑stakeholder involvement, and set the stage for suggestions about involving deployers from diverse sectors and regions.
Speaker: John Palfrey (President, MacArthur Foundation)
Regulatory harmonisation is crucial so that global operators can have consistent rules across borders; this ties into sovereign AI and the need for a global baseline.
Identifies a concrete obstacle to scaling AI—fragmented regulations—and proposes harmonisation as a solution, linking national sovereignty concerns with practical business needs.
Extended the earlier debate on global compacts by offering a pragmatic pathway, influencing the conversation toward actionable steps for international coordination.
Speaker: Terah Lyons (Managing Director, JPMorgan Chase)
Value cannot be measured only in monetary terms; we look at problem‑solving impact—better health diagnostics, education tools, farmer data services—and build capacity so Rwandans develop and own the solutions.
Broadens the definition of AI benefit beyond ROI, emphasizing social impact, capacity building, and data sovereignty, which adds depth to the adoption discussion.
Deepened the analysis of AI’s societal benefits, reinforced the adaptive regulatory stance, and supported the call for more African participation and South‑South cooperation later in the session.
Speaker: Paula Ingabire (Minister of ICT and Innovation, Rwanda)
We would love to host the next summit in Kigali and ensure more African voices are at the table, so we can balance impact across emerging, middle, and large economies.
Moves the dialogue from abstract policy to concrete next steps, emphasizing geographic inclusivity and South‑South collaboration.
Served as a turning point toward actionable outcomes, prompting agreement from the moderator and setting a forward‑looking agenda for the summit’s continuation.
Speaker: Paula Ingabire (Minister of ICT and Innovation, Rwanda)
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from high‑level aspirations to concrete, human‑centered, and context‑specific strategies. Paula Ingabire’s adaptive‑regulation narrative and her emphasis on value beyond money grounded the debate in Rwanda’s practical experience. Rudra’s probing question about a global AI compact opened space for debating universal standards versus national sovereignty, a theme further explored through Terah Lyons’s call for regulatory harmonisation and John Palfrey’s advocacy for civil‑society funding. The collective shift from technical challenges to institutional, trust, and governance issues reframed the panel’s focus, culminating in a concrete proposal to host future summits in Africa, thereby anchoring the dialogue in actionable, inclusive next steps.

Follow-up Questions
Is a global AI compact feasible and what shared norms should be incorporated into national jurisdictions?
Determines the ability to coordinate international AI governance and ensure consistent standards across countries.
Speaker: Rudra Chaudhry (to Paula Ingabire)
How can adaptive, use‑case‑specific regulatory frameworks be designed and implemented effectively?
Ensures regulations are tailored to actual AI deployments rather than abstract rules, improving relevance and compliance.
Speaker: Paula Ingabire
What are the optimal designs and guardrails for a national data hub to ensure data sovereignty while enabling AI innovation?
Balancing data protection with AI development is critical for trust and effective use of national data assets.
Speaker: Paula Ingabire
What sustainable financial models (OPEX or revenue‑based) can support the long‑term deployment of AI solutions in Rwanda?
Identifies how AI projects can be financially viable beyond initial funding, ensuring continued impact.
Speaker: Rudra Chaudhry (to Paula Ingabire)
How do Rwandan citizens perceive the value of AI‑enabled services in health, education, and agriculture, and how can this be quantified?
Measuring user‑perceived benefits helps justify investments and guide future AI roll‑outs.
Speaker: Rudra Chaudhry (to Paula Ingabire)
What strategies are needed to deploy and scale AI responsibly across borders, including sustainable financing and operational models?
Large institutions require clear pathways for scaling AI while managing risk and financial sustainability.
Speaker: Rudra Chaudhry (to Terah Lyons)
How can regulatory harmonization be achieved globally to provide consistent AI governance for multinational operators?
Uniform rules reduce compliance complexity and enable smoother cross‑border AI deployment.
Speaker: Terah Lyons
What should a global baseline for AI governance look like to give clarity to global operators while respecting sovereign concerns?
A baseline would set minimum standards, facilitating responsible AI use worldwide.
Speaker: Terah Lyons
How can the impact of AI initiatives be quantified and mechanisms created for ongoing exchange of lessons learned among stakeholders?
Quantifying impact enables assessment of effectiveness and informs continuous improvement.
Speaker: Paula Ingabire
How can African and other emerging‑economy voices be more effectively included in global AI governance and deployment discussions?
Ensures that AI policies reflect diverse contexts and that benefits are equitably distributed.
Speaker: Paula Ingabire
What models of philanthropy could involve partnerships with frontier labs to drive AI innovation while informing regulation?
Explores new funding mechanisms that blend research, deployment, and policy development.
Speaker: Rudra Chaudhry (to John Palfrey)
How can future summits ensure broader multi‑stakeholder diversity, especially including deployers from retail, energy, manufacturing, and other real‑economy sectors?
Broad sector representation brings practical insights into AI adoption challenges and solutions.
Speaker: Terah Lyons
What concrete South‑South cooperation mechanisms can accelerate AI adoption and impact among developing nations before the next summit in Switzerland?
Facilitates knowledge sharing and joint projects among countries with similar development trajectories.
Speaker: Paula Ingabire
What methodologies are needed to assess risk at the individual use‑case level rather than applying blanket risk labels?
Granular risk assessment allows for proportionate regulation and better alignment with specific AI applications.
Speaker: Paula Ingabire and Terah Lyons (implied)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session featured Vivek Raghavan, co-founder of Sarvam, a company developing artificial intelligence that understands India’s languages and context, introduced as the next keynote speaker[1-3]. Raghavan asserted that India is capable of training state-of-the-art AI models and delivering them to a billion citizens, which is the core motivation behind Sarvam’s founding[4-7]. He argued that while short-term technical leads such as model size or chip speed are transitory, long-term national sovereignty in AI is essential and can only be achieved by building the technology domestically[11-13]. Drawing on his experience with the Aadhaar digital-identity system and the open-source India Stack, he emphasized that self-created public infrastructure has historically enabled large-scale digital initiatives[14-19]. Raghavan warned that without indigenous AI capabilities India would become a “digital colony” dependent on foreign core technologies, making sovereign development a non-optional imperative[22-27].


He highlighted India’s unique advantage of linguistic diversity – 22 official languages and regional variations every 50 km – which must be captured for AI to truly reflect the nation’s voice[28-34]. Coupled with a massive, cost-conscious economy, this diversity creates a huge demand for affordable, scalable AI services that can reach even the last person in the country[35-40]. Sarvam’s strategy is built around a full-stack platform comprising sovereign models, application layers, and infrastructure designed for Indian scale[42-46]. The company’s models are created from scratch without reliance on external data, yet aim to match world-class performance, beginning with the SARAS speech-to-text system that is native to Indian languages and outperforms global competitors in blind tests[46-65]. Their vision models, despite having only 3 billion parameters, surpass larger international systems in document digitisation, layout understanding and visual grounding, especially for Indian languages[84-90].


Sarvam has also trained large language models, including a compact model with a 32K context window, trained on 16 trillion tokens, that excels in multilingual conversational tasks, and a 105-billion-parameter LLM that rivals open-source and proprietary counterparts on complex reasoning and web-search benchmarks[92-107]. Remarkably, these achievements were produced by a team of just fifteen young engineers, demonstrating the depth of talent available in India and suggesting that even larger projects are feasible with appropriate support[112-119]. The models are already powering real-time voice conversations in eleven languages, supporting NGO, enterprise and government use cases, and are being extended to content digitisation, dubbing and edge devices such as glasses, backed by large-scale compute infrastructure tailored for Indian cost and scale requirements[120-135]. Raghavan concluded that building sovereign AI is not optional but a strategic necessity for India’s future, enabling affordable, high-quality services for all citizens[24-27].


Keypoints


Major discussion points


Sovereignty in AI is essential for India’s future.


Vivek stresses that short-term technical leads (largest model, fastest chip) are fleeting, but long-term national sovereignty over AI is non-negotiable, otherwise India risks becoming a “digital colony.” [10-13][24-27]


India’s unique strengths make sovereign AI feasible.


The country’s linguistic diversity (22 official languages, regional dialects) and massive, cost-conscious market provide both the data and the demand needed to build AI that serves every citizen. [28-34][35-40]


Sarvam is building a full-stack sovereign AI platform.


The platform is organized around three pillars – home-grown models, AI-driven applications, and scalable infrastructure – all designed to be developed and operated entirely in India. [42-46][46-51]


World-class models have been created from scratch.


• Speech models (SARAS) that outperform global competitors in Indian languages. [53-66]


• Vision models (3 billion-parameter) that beat larger international models on document digitization and visual grounding. [84-90]


• Large language models, including a compact 32K-context model trained on 16 trillion tokens and a 105 billion-parameter LLM, delivering superior performance to comparable global models. [92-106]


Real-world applications and infrastructure are already in use.


Sarvam’s models power over a million minutes of daily multilingual voice conversations, support NGOs, enable book digitization and dubbing, and are being optimized for edge devices and specialized hardware to deliver AI at “India scale at India cost.” [120-135]


Overall purpose / goal


The discussion aims to convince the audience that India not only can but must develop its own sovereign AI capabilities. By highlighting Sarvam’s technical achievements and the nation’s strategic advantages, Vivek calls for continued support and collaboration to scale these home-grown solutions for the benefit of all Indians and to position India as a global AI leader.


Tone of the discussion


The tone is consistently confident and rallying, beginning with a bold proclamation (“India can”) and maintaining an enthusiastic, persuasive stance throughout. It shifts briefly to a technical, detail-rich mode when describing models and performance metrics, then returns to an optimistic, forward-looking tone that celebrates youth talent and envisions broader impact. Overall, the tone moves from assertive advocacy to inspirational optimism.


Speakers

Speaker 1


– Role/Title: Event moderator / host (introduces the keynote speaker)


– Area of Expertise: (not specified)


– Source: [S1]


Vivek Raghavan


– Role/Title: Co-founder, Sarvam AI; Keynote speaker


– Area of Expertise: Sovereign AI, large language models, multilingual speech and text technologies for India


– Source: [S4], [S5]


Additional speakers:


(none identified beyond the listed speakers)


Full session report: Comprehensive analysis and detailed insights

The session opened with Speaker 1 introducing the next keynote, Vivek Raghavan, co-founder of Sarvam – a company dedicated to building artificial-intelligence that can speak India’s many languages and understand its unique context[1-3].


His central mantra was simple: “India can.” He explained that this belief underpins Sarvam’s mission to train state-of-the-art models that can serve a billion citizens, demonstrating that indigenous AI development is both feasible and essential[4-7].


Raghavan contrasted fleeting technical races – such as who has the largest model or the fastest chip – with the enduring need for national AI sovereignty. Short-term advantages are transitory, whereas sovereignty determines long-term strategic autonomy; without it, India risks becoming a “digital colony” dependent on foreign core technologies[10-13][24-27].


Drawing on his own experience with the Aadhaar digital-identity programme, he highlighted how India has previously created open-source public infrastructure (the India Stack) from scratch, turning proprietary technologies into shared national assets. This history, he argued, shows that sovereign digital foundations can be built and sustained over decades[14-19].


India’s unique strengths make sovereign AI not only possible but advantageous. The country’s linguistic diversity – 22 official languages and dialectal shifts roughly every 50 km – provides rich, varied data[28-34], while its massive, cost-conscious market creates a huge demand for affordable AI services that can reach even the most remote users[35-40].


Sarvam’s response is a three-layer “full-stack” platform comprising (1) home-grown models, (2) downstream applications, and (3) scalable infrastructure, all designed, built, and operated entirely within India[42-46].


The sovereign-models rule is strict: the models are built from scratch with no reliance on external data or pre-existing models, yet they aim for world-class performance[46-51].


He noted that while many large global AI firms exist, India must develop its own models that can serve the people and be adopted worldwide[31-33].


In the speech domain, Sarvam’s SARAS system is an Indian-language speech-to-text solution trained on extensive, diverse Indian data. It consistently outperforms global competitors in blind tests, its text-to-speech voices are the most preferred across Indian languages, and a dubbing capability preserves speaker modality and handles mixed-language content[53-66][69-71].
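Claims like “most preferred in blind tests” usually sit alongside automatic metrics; for speech-to-text the standard one is word error rate (WER). As a generic illustration only (this is not Sarvam’s evaluation code), a minimal WER implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, single-row dynamic programming.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,          # delete reference word
                                   d[j - 1] + 1,      # insert hypothesis word
                                   prev + (r != h))   # substitute (or match)
    return d[len(hyp)] / len(ref)
```

A perfect transcript scores 0.0; one wrong word in a three-word reference scores about 0.33. Real evaluations – and the blind preference tests mentioned here – add text normalisation and human judgment on top of metrics like this.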


For visual tasks, Sarvam has produced a 3-billion-parameter vision model that excels at document digitisation, layout understanding, visual grounding and reading-order prediction. Despite its modest size, it surpasses larger international models in both English and Indian-language benchmarks[84-90].
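Reading-order prediction, one of the tasks listed above, means sequencing detected text blocks the way a person would read them. A naive geometric baseline that learned layout models improve upon (group boxes into lines by vertical position, then read left to right) can be sketched as follows; this is illustrative only, not Sarvam’s model:

```python
def naive_reading_order(boxes, line_tol=10):
    """Naive reading-order baseline for axis-aligned text boxes (x, y, w, h):
    group boxes whose top edges are within line_tol pixels into one line,
    then emit each line left to right. Learned layout models replace this
    heuristic, which fails on multi-column and rotated layouts."""
    lines = []
    for box in sorted(boxes, key=lambda b: b[1]):   # top-to-bottom
        if lines and abs(lines[-1][0][1] - box[1]) <= line_tol:
            lines[-1].append(box)                   # same text line
        else:
            lines.append([box])                     # start a new line
    return [b for line in lines for b in sorted(line, key=lambda b: b[0])]
```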


Turning to language models, the compact 32K-context model was trained on 16 trillion tokens and was made possible by a grant from the India AI Mission, which provided the necessary GPU resources[92-100][108-110].


Building on this, the company released a 105-billion-parameter large language model – the largest trained from scratch in India – which matches or exceeds peers such as GPT-OSS 120B and Gemini Flash on complex reasoning, web-search and other benchmarks[101-107].
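For scale, dense-transformer training compute is commonly approximated as 6 FLOPs per parameter per token. The talk does not state the token budget for the 105-billion-parameter model, so the sketch below pairs it with the 16-trillion-token figure quoted for the smaller model purely as an illustrative assumption:

```python
def training_flops(params: float, tokens: float) -> float:
    # Rule of thumb for dense transformers: ~6 FLOPs per parameter per
    # token, covering the forward and backward passes.
    return 6.0 * params * tokens

# Assumption for illustration only: 105B parameters paired with a 16T-token
# budget (the talk attributes the 16T figure to the smaller model).
flops = training_flops(105e9, 16e12)  # ≈ 1.0e25 FLOPs
```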


Remarkably, these achievements were realised by a team of just fifteen young engineers, underscoring the depth of India’s talent pool and suggesting that even larger projects are attainable with appropriate support and funding[112-119].


The models are already powering real-world services: more than a million minutes of voice conversation each day in eleven Indian languages, extensive use by NGOs (over a crore – ten million – minutes of calls in a month), and deployments across enterprises and government agencies. Additional applications include book digitisation, translation and video dubbing, illustrating a broad ecosystem of AI-driven content services[124-128][129-131].
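The volume figures translate into rough concurrency: a million minutes of audio per day averages out to about 694 simultaneous voice streams (actual peaks would be several times higher). A quick sanity check, using only the number quoted in the talk:

```python
MINUTES_PER_DAY = 24 * 60                 # 1440
daily_voice_minutes = 1_000_000           # "more than a million minutes" per day
avg_concurrent_streams = daily_voice_minutes / MINUTES_PER_DAY
print(round(avg_concurrent_streams))      # prints 694
```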


To ensure that these capabilities reach every citizen, Sarvam is optimising models for edge deployment – on smartphones, smart glasses and other form factors – while building large-scale compute infrastructure that delivers AI at “India scale at India cost”[132-136].


He concluded that AI sovereignty is not optional; without it India would remain dependent on external technology, whereas a home-grown, cost-effective AI stack can deliver high-quality services to the last citizen, much as UPI transformed payments domestically[24-27][28-30].


Session transcript: Complete transcript of the session
Speaker 1

I move on to our next keynote speaker, who is Mr. Vivek Raghavan, the co-founder of Sarvam, a company building AI that speaks India’s languages and understands India’s context. In a world dominated by models trained on English language data, their work is a powerful demonstration that sovereign AI capability is not just a luxury, it is a necessity. So, ladies and gentlemen, please welcome Mr. Vivek Raghavan, co-founder of Sarvam.

Vivek Raghavan

I come here to say that India can. And I think that’s the message I want to say. India can. And India can train state-of-the-art models, bring AI to a billion Indians, and do it all. And that’s really the message of why we started Sarvam. I want to talk about the long arc. You know, when you look at, you know, today the world is moving so fast. Everybody talks about where is the largest model or who has the fastest chip. These are all transitory technical advantages. In the long run, it’s sovereignty that matters. And unless we build these things ourselves, it’s something that, you know, will be left behind in the race. In the past 15 years of my life, I worked on building Aadhaar.

Which is India’s… digital identity program. Prior to that, many of these technologies were proprietary technologies. And we built this kind of self-created technology that is open source and a public infrastructure that is available to all of us. And that led to the creation of the India Stack. So when you look at it over the course of long periods of time, sovereignty will always trump technical leads that are short term. We have a mandate to build. It is not an option whether we want to build in these technologies. AI is a technology that has impact on every single aspect of human life. And it’s a core technology that a country like India must have.

And it’s a core technology that a country like India must understand from the foundational level. Otherwise, we will become a digital colony which is dependent on other countries for this core technology. That’s something that is, it’s not an option. It is something that we must do. And we have unique advantages. And our unique advantage is actually our diversity. We have so many languages. We have 22 official languages. And in fact, you know, the way people speak in our country changes every 50 kilometers. And that diversity must be captured if we have to understand the voice of the people. And therefore, if we build AI from India, it must acknowledge that diversity and do this. The other thing, of course, is we are a huge economy.

There is demand, and if AI is there to help the citizen to do everything, we can be one of the largest consumers of AI in the world. And that demand is there and then we have to build. We know that we are a cost-conscious country, right? Everything needs to be at the lowest cost. So we need to build efficient AI that actually can be delivered at scale for the people so that the last person in the country can actually have a better experience, right? Today, if you look at UPI, one of the great success stories of the past decade, and if you look, it is for the first time we feel that in India, things can be better than everywhere else in the world.

But AI done the right way can make sure that every service to citizens actually is the best and the cheapest and actually done in the best possible way for the country. That’s the promise of AI, and that’s why we said we need to do this in India. And I think it’s not about, I mean, when you look at AI companies globally, they are massive companies, but in the end we have to show a new model where we can actually build AI which helps everyone. Then we can win for the people, and our model can be adopted in the world, and that is my belief of where we need to go on this thing. So Sarvam has been building India’s full-stack sovereign AI platform. We work with developers, fundamentally, India is a country of developers, we have more developers, and we work with enterprises and we work with governments. We have a full-stack platform which I’ll talk about. In fact, the full-stack platform consists of three different things. One is models, we need models that are built in India. And that is the key thing and that’s what we’ll focus on: sovereignty and models.

And then we are focusing on applications. Applications is AI for everyday tasks, for making things better for people. And finally, we’ll talk about infrastructure and infrastructure at India scale. The first thing I want to talk about are sovereign models. Rule number one is they are built from scratch. They are not dependent on any other model that is there in the world. They are built from scratch. There is no data dependency on anyone else. But at the same time, the focus is these are world-class, state-of-the-art models that we have. So I’m going to talk a little bit about some of our models. The first model is actually the SARAS model, which is actually the speech model which helps recognize speech in Indian languages.

This is native. In fact, if you see, this is actually best in class in Indian languages compared to any other global model. These are extremely small models, but these are models which have been trained with significant amounts of diverse Indian data, which actually leads to better performance on Indian voices.

So let me just play an iconic moment in India, diarized using our model. Oh, sorry. Okay, I don’t think I can make this happen. Okay. The audio is not playing, let’s maybe come back to this. “…and here we have a majestic liftoff of the LVM3 M4 rocket carrying India’s prestigious Chandrayaan-3 spacecraft…” We want to create models which are naturally expressive Indian voices, and they have low-latency streaming and actually production-grade quality.

So, in fact, our speech models are, in blind tests, actually the most preferred voices in Indian languages, compared to all the global competitors such as ElevenLabs and Cartesia. And we have actually the most preferred text-to-speech model in the country. We also have a dubbing capability, which actually preserves the speaker modality and has precise control over duration and supports mixed-language content. In fact, compared to any other model in the world, we are the most preferred as far as dubbing is concerned. I will show a small snippet of what happens here.

We have the remaining 13 bits, in which the 12th bit is called the small a bit. Then the remaining 6 bits are compute bits. These 3 are the destination bits and these are the jump bits. So we’ve also built these vision models. And these vision models are actually very good for document digitization.

They’re very good at language layout understanding, visual grounding, and in fact, finding intelligence by visual components. And finally, reading order predictions. In fact, the vision model that we built is only a 3 billion parameter state space model which beats all other models in the world, not just in Indian languages, but in English as well. So therefore, it shows, and many of these models are many orders of magnitude bigger than our models, and still we are able to get world -class performance from them. Of course, in Indian languages, we are far and away ahead of the models that are there from the global comparison. Now we come talk about some of the LLMs. We have actually, India has started the training of LLMs from scratch.

And this was done through a GPU grant from the India AI Mission, without which it would not have been possible for us to train these kinds of models. In fact, it is an extremely small model which can run on a single GPU. It has a 32K context length and is trained on 16 trillion tokens, with extremely efficient thinking which actually gives better answers with lower… And the focus of this model is real-time conversational applications: it will be able to generate conversations in all the Indian languages in a real-time system. And some of the benchmarks show results compared to global models of similar size.

such as Qwen3-30B or GPT-OSS. It is far superior in terms of various parameters, such as fluency, language and script, usefulness, and conciseness. So the important thing is that, at its size, it is again a global best. And then we come finally to our largest model, the 105-billion-parameter model. This is the largest LLM that has been trained from scratch in the country, and it is on par with or better than most open-source and closed-source models of its class. It can handle various kinds of complex reasoning tasks, as well as web searching and things like that. So it’s actually a fairly advanced model, which again works in all…

all Indian languages. From a benchmark perspective, these models, compared to things like GPT-OSS 120B as well as Gemini Flash, are actually superior in terms of the kinds of outputs they can generate. So this is really an example. While this is quite a small model, just to give you an idea: last year DeepSeek R1 was launched, and that was 671 billion parameters. The numbers that we are getting are actually superior to what DeepSeek R1 had last year. Of course, the state of the art has also improved. But the goal is to show that India can build these things. The most important thing that I want people to understand is…

that, you know, I would love that not just us but many other people come and show that we can actually build world-class models from India, because that is the fact. And these models were built with a team of just 15 young people. It’s really the youth of India that have made this model what it is; I’m just the spokesperson. These young kids have actually made something like this happen. And if these kids can do it, we have so much talent in the country. I’m very positive that, given the right kind of support in the way that we have been given, much bigger things can happen.

Moving beyond our models, we actually want these models to become useful, and so we build applications. I’ll talk very briefly about the kinds of applications that we build. So, in fact, we have the ability to converse, and this is our real-time voice conversation. We do more than a million minutes of voice conversation in 11 Indian languages every day using Sarvam. So these models are actually being used to build things; these models, which have been trained fully in our control, are now being used for conversations across enterprise and government use cases. In fact, in the last month we took about 20 NGOs and actually did a crore (10 million) minutes of calling within a month, to really understand real-time voice conversation and what people are actually saying on the ground.
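For a sense of scale, the campaign arithmetic works out as follows (assuming a 30-day month and an even split across NGOs, both assumptions for illustration):

```python
# One crore = 10 million minutes of calling across ~20 NGOs in a month.
CRORE_MINUTES = 10_000_000
NGOS, DAYS = 20, 30

minutes_per_day = CRORE_MINUTES / DAYS            # ~333,333 minutes/day overall
minutes_per_ngo_per_day = minutes_per_day / NGOS  # ~16,667 minutes/day per NGO
```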

Okay. So we actually have ways by which we can make this available for work tasks and enterprise tasks. And we also have the ability to do this for content: digitizing books, translating books, dubbing videos. These are all studio products that we have. And finally, I want to end with infrastructure. We are doing many interesting things. We are making our models smaller to work on the edge, to work on phones. Many of you may have heard that we have also launched glasses, so that we are able to have these models run on different form factors and capture the intelligence, capture the voice of India, at every point.
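The payoff of shrinking models for the edge can be sketched with weight-memory arithmetic alone (illustrative numbers only; real edge pipelines also use distillation, pruning, and activation quantization, none of which this counts):

```python
# Weight memory for a model at a given precision, ignoring activations,
# KV cache, and runtime overhead.
def weight_gb(params: int, bits_per_weight: int) -> float:
    return params * bits_per_weight / 8 / 1e9

THREE_B = 3_000_000_000        # e.g. a 3-billion-parameter model
fp16 = weight_gb(THREE_B, 16)  # 6.0 GB: too large for most phones
int4 = weight_gb(THREE_B, 4)   # 1.5 GB: plausible on a high-end phone
```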

And finally, for these things to work, you need to have compute at a large scale and the ability to very efficiently deliver all these models to India, at India scale and at India cost. We are, of course…

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Vivek Raghavan is the co‑founder of Sarvam, a company focused on building AI that can speak India’s many languages and understand its unique context.”

The knowledge base records Raghavan delivering the Sarvam AI keynote and saying “why we started Sarvam”, indicating his role as a founder of the company [S4].

Confirmed (high)

“The central mantra of the talk was “India can”, emphasizing that India can train state‑of‑the‑art models and bring AI to a billion Indians.”

Multiple transcript excerpts repeat the phrase “India can” and link it to training state-of-the-art models for a billion citizens [S4] and [S5] and [S18].

Additional Context (medium)

“Raghavan contrasted short‑term technical races (largest model, fastest chip) with the long‑term need for AI sovereignty, warning that without it India could become a “digital colony” dependent on foreign core technologies.”

The knowledge base discusses AI sovereignty as a broader, longer-term concern beyond data, noting India’s focus on technological independence and the risks of short-term versus long-term strategies [S26] and [S27] and [S50].

Additional Context (medium)

“India must develop its own AI models despite the presence of many large global AI firms.”

Other speakers in the knowledge base stress the importance of building sovereign digital foundations to avoid dependence on foreign technology providers [S27] and [S49].

External Sources (55)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — I come here to say that India can. And I think that’s the message I want to say. India can. And India can train state -o…
S7
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Pellerin Matis: If I can add a comment, I think I fully agree that education is very important, and if you want to pro…
S8
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — ## Key Speaker Contributions Alex Moltzau: Thank you so much. And it’s a pleasure to be here today. And really great to…
S9
https://dig.watch/event/india-ai-impact-summit-2026/waves-of-infrastructure-open-systems-open-source-open-cloud — So I do expect that to start happening. That’s why we started working with CDAC and VVDN to some extent. We do see the o…
S10
https://app.faicon.ai/ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — How do you make sure that. there is enough packaging verification and many of that ecosystem getting developed. So all o…
S11
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — The speaker emphasizes that India’s approach to AI sovereignty centers on three key pillars: controlling its data, build…
S12
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S13
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Professor Ganesh argues that India has barely scratched the surface of AI potential and can achieve significant breakthr…
S14
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Mazumdar-Shaw positioned this technological convergence within a broader geopolitical context, arguing that nations comm…
S15
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S16
The Global Power Shift India’s Rise in AI & Semiconductors — Building India’s AI and Semiconductor Ecosystem: The panel discussed India’s positioning in the global AI and semiconduc…
S17
The Global Power Shift India’s Rise in AI & Semiconductors — -Building India’s AI and Semiconductor Ecosystem: The panel discussed India’s positioning in the global AI and semicondu…
S18
https://app.faicon.ai/ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-vivek-raghavan-sarvam-ai — And then we are focusing on applications. Applications is AI for everyday tasks, for making things better for people. An…
S19
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — ROI doesn’t come from creating a very large model. 95% of the work can happen with models which are 20 billion or 50 bil…
S20
From KW to GW Scaling the Infrastructure of the Global AI Economy — The success of this transformation will depend on continued collaboration between global technology providers and local …
S21
From KW to GW Scaling the Infrastructure of the Global AI Economy — The current generation at 130, 140 kilowatt per cabinet, while the next one is 250, 260. and the one down the road is 40…
S22
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S23
Securing Access to the Internet and Protecting Core Internet Resources in Contexts of Conflict and Crises — This shifted the discussion from theoretical policy frameworks to urgent, real-world applications. It forced subsequent …
S24
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — The speaker emphasizes that India’s approach to AI sovereignty centers on three key pillars: controlling its data, build…
S25
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Mazumdar-Shaw positioned this technological convergence within a broader geopolitical context, arguing that nations comm…
S26
Building Indias Digital and Industrial Future with AI — But how should India define data sovereignty without control over standards, decision -making systems, and long -term st…
S27
Designing the AI Factory Scaling Compute to Sovereign AI — Both speakers discuss the importance of technological independence, with Sabharwal noting that Prime Minister Modi frequ…
S28
How Multilingual AI Bridges the Gap to Inclusive Access — Participant from NTU Singapore This comment added geopolitical depth to the discussion while simultaneously complicatin…
S29
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S30
Foster AI accessibility for building inclusive knowledge Societies: a multi-stakeholder reflection on WSIS+20 review — Pablo Medina Jimenez:Thank you very much for your kind words and for the introductory remarks. As I was saying, it’s a p…
S31
Development — The development basket includes the following public policy issues: sustainable development, access, inclusive finance, …
S32
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — The speaker emphasizes that India’s approach to AI sovereignty centers on three key pillars: controlling its data, build…
S33
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Professor Ganesh argues that India has barely scratched the surface of AI potential and can achieve significant breakthr…
S34
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biolog…
S35
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — 8 year old prodigy: Sharing is learning with the rest of the world. One, an AI that is independent. From large global A…
S36
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S37
AI Innovation in India — India’s unique strength lies in its people’s ability to work in unstructured environments and get jobs done regardless o…
S38
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Collaboration and Interoperability as India’s Strategic Advantage – India’s strength lies in collaborative approaches ra…
S39
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S40
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — Sarvam’s strategy is built on a full‑stack approach that separates sovereign models, downstream applications, and the un…
S41
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — Raghavan explained that Sarvam has built a comprehensive three-part platform: models, applications, and infrastructure. …
S42
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-vivek-raghavan-sarvam-ai — And then we are focusing on applications. Applications is AI for everyday tasks, for making things better for people. An…
S43
From KW to GW Scaling the Infrastructure of the Global AI Economy — The success of this transformation will depend on continued collaboration between global technology providers and local …
S44
From KW to GW Scaling the Infrastructure of the Global AI Economy — It’s all, in fact, even the shell and the campus is all purpose -built as an AI factory. So we have to start thinking bo…
S45
World Economic Forum Panel on Quantum Information Science and Technology — -Real-World Applications and Current Limitations: Discussion covered practical applications including quantum sensing fo…
S46
https://app.faicon.ai/ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — So for example, anything and everything that is required we are basically making the entire suite of the… automation l…
S47
Keynote-Rishad Premji — Government initiatives to train 10 million young people in AI, along with industry partnerships with universities, are e…
S48
From Innovation to Impact_ Bringing AI to the Public — Very good. So first I want to say one thing. To make a foundation model, English, we’ll speak English and Hindi, okay? A…
S49
Keynote-Jeet Adani — The speaker contends that inclusion without capability represents weakness, while capability without sovereignty leads t…
S50
Global Risks 2025 / Davos 2025 — Focus on short-term vs long-term risks
S51
Empowering People with Digital Public Infrastructure — 4. Evolving identity verification systems, as illustrated by Parvinder Johar’s personal example of obtaining a birth cer…
S52
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — Prince opened with a historical parallel, comparing today’s AI development to the transformative spread of the printing …
S53
WS #43 States and Digital Sovereignty: Infrastructural Challenges — Min Jiang: Thank you. Thank you colleagues at CGI Brazil for convening the session and for inviting me to join. Can …
S54
Digital democracy and future realities | IGF 2023 WS #476 — Anna Christina:Well, it’s a difficult question, but I actually was looking to a person that just step up because it’s a …
S55
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Pramod Varma:Good afternoon. I hope you can hear me. Yes, we can hear you. Yeah, thank you. Thank you, Aishwarya, for se…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
1 argument · 133 words per minute · 73 words · 32 seconds
Argument 1
Sovereign AI necessity (Speaker 1)
EXPLANATION
Speaker 1 frames sovereign AI capability as essential rather than optional, emphasizing that reliance on foreign‑trained models limits India’s autonomy. The introduction positions Sarvam’s work as a concrete illustration of why India must develop its own AI infrastructure.
EVIDENCE
During the opening remarks, the speaker notes that the world is dominated by models trained on English-language data and declares that sovereign AI capability is “not just a luxury, it is a necessity” [2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Speaker 1’s opening keynote frames sovereign AI capability as essential rather than optional, describing it as a necessity for India’s autonomy [S6].
MAJOR DISCUSSION POINT
Sovereign AI necessity
AGREED WITH
Vivek Raghavan
V
Vivek Raghavan
11 arguments · 139 words per minute · 2407 words · 1033 seconds
Argument 1
Long‑term sovereignty outweighs short‑term technical leads (Vivek Raghavan)
EXPLANATION
Vivek argues that while the industry chases the biggest models and fastest chips, these advantages are fleeting. In contrast, building AI sovereignty ensures lasting strategic independence for India.
EVIDENCE
He points out that discussions about the largest model or fastest chip are “transitory technical advantages” and stresses that “in the long run, it’s sovereignty that matters” [10-13]. He reinforces this by stating that “sovereignty will always trump technical leads that are short term” [19-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Raghavan argues that short-term technical advantages are transitory and that lasting strategic independence comes from AI sovereignty, a view reiterated in his keynote and the accompanying summary [S4][S5].
MAJOR DISCUSSION POINT
Long‑term sovereignty outweighs short‑term technical leads
AGREED WITH
Speaker 1
Argument 2
Linguistic diversity as a core strength (Vivek Raghavan)
EXPLANATION
Vivek highlights India’s linguistic richness as a unique asset for AI development, noting the need for models that reflect the country’s many languages and dialects. This diversity is presented as a competitive advantage for building AI that truly serves Indian users.
EVIDENCE
He enumerates India’s 22 official languages and explains that speech patterns change roughly every 50 km, insisting that AI must capture this diversity to understand the “voice of the people” [28-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He positions India’s 22 official languages and regional speech variations as a competitive advantage for building AI models that capture the “voice of the people,” a point emphasized in the keynote analysis [S5].
MAJOR DISCUSSION POINT
Linguistic diversity as a core strength
Argument 3
Large, cost‑conscious market creates massive demand (Vivek Raghavan)
EXPLANATION
Vivek points out that India’s huge economy and cost‑sensitive consumer base generate a vast demand for affordable AI services. He argues that meeting this demand with low‑cost, scalable solutions can make India one of the world’s biggest AI consumers.
EVIDENCE
He references India’s size as an economy, the existing demand for AI-enabled citizen services, and the country’s reputation for cost-consciousness, stating that AI must be “delivered at scale for the people” at the lowest cost [35-40]. He also cites the success of UPI as evidence that digital services can outperform global benchmarks when built locally [41-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes a $300-$500 million market opportunity and highlights India’s cost-sensitive consumer base as a driver for affordable AI services at scale [S9][S10].
MAJOR DISCUSSION POINT
Large, cost‑conscious market creates massive demand
Argument 4
Integrated stack: sovereign models, applications, infrastructure (Vivek Raghavan)
EXPLANATION
Vivek outlines Sarvam’s three‑layer approach: building sovereign AI models, creating applications that solve everyday problems, and developing the infrastructure needed for Indian‑scale deployment. This full‑stack strategy is presented as the roadmap for achieving AI sovereignty.
EVIDENCE
He describes the full-stack platform consisting of models, applications, and infrastructure, and emphasizes that the focus is on “world-class, state-of-the-art models that we have” built from scratch in India [42-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Raghavan outlines Sarvam’s three-layer full-stack platform-models, applications, and infrastructure-built entirely in-house, as described in the keynote overview [S4][S5].
MAJOR DISCUSSION POINT
Integrated stack: sovereign models, applications, infrastructure
Argument 5
Indigenous SARAS speech model outperforms global competitors (Vivek Raghavan)
EXPLANATION
Vivek claims that Sarvam’s SARAS speech‑to‑text model, trained on diverse Indian data, delivers superior performance for Indian languages compared with international models. He positions it as a flagship example of a sovereign AI component.
EVIDENCE
He introduces the SARAS model as a native speech recognizer, noting that it is “best in class in Indian languages compared to any other global model” and that blind tests show it is the most preferred voice in Indian languages, outperforming competitors such as ElevenLabs and Cartesia [53-65][69-70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The SARAS speech-to-text model is presented as best-in-class for Indian languages, outperforming international systems in blind tests, according to the keynote details [S4][S5].
MAJOR DISCUSSION POINT
Indigenous SARAS speech model outperforms global competitors
Argument 6
Vision model (3 B parameters) surpasses larger international models (Vivek Raghavan)
EXPLANATION
Vivek presents Sarvam’s 3‑billion‑parameter vision model as achieving world‑class performance despite its relatively small size, beating larger foreign models on tasks like document digitization and visual grounding. This demonstrates the efficiency of home‑grown AI.
EVIDENCE
He explains that the vision model, with only 3 B parameters, “beats all other models in the world, not just in Indian languages, but in English as well” and that it outperforms models many orders of magnitude larger [84-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sarvam’s 3-billion-parameter vision model is reported to beat much larger foreign models on tasks such as document digitization and visual grounding [S4].
MAJOR DISCUSSION POINT
Vision model (3 B parameters) surpasses larger international models
Argument 7
Home‑grown LLMs (32K‑context model & 105 B parameter model) achieve world‑class performance (Vivek Raghavan)
EXPLANATION
Vivek details two large language models developed by Sarvam: a compact 32K‑context model trained on 16 trillion tokens, and a 105 billion‑parameter model. Both are claimed to match or exceed the performance of leading global models on fluency, reasoning, and multilingual capabilities.
EVIDENCE
He notes that the 32K-context model is “far superior” to global models of similar size such as Qwen3-30B or GPT-OSS [91-100], and that the 105 B model is “on par with most open source and closed source models of its class,” outperforming GPT-OSS 120B and Gemini Flash on benchmark outputs [101-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Benchmark comparisons show the 32K-context model exceeding peers of similar size and the 105 B model matching top open-source and proprietary models, as highlighted in the keynote summary [S5].
MAJOR DISCUSSION POINT
Home‑grown LLMs (32K‑context model & 105 B parameter model) achieve world‑class performance
Argument 8
15‑person youth team proves India’s talent pool can build world‑class AI (Vivek Raghavan)
EXPLANATION
Vivek emphasizes that Sarvam’s breakthroughs were achieved by a small team of just 15 young engineers, illustrating the depth of technical talent available in India. He suggests that with appropriate support, even larger achievements are possible.
EVIDENCE
He states that the models “were built with a team of just 15 young people” and credits the youth of India for making the models possible, expressing optimism about future scaling with the right support [112-119].
MAJOR DISCUSSION POINT
15‑person youth team proves India’s talent pool can build world‑class AI
Argument 9
Real‑time multilingual voice conversations handling millions of minutes daily (Vivek Raghavan)
EXPLANATION
Vivek reports that Sarvam’s platform powers over a million minutes of real‑time voice conversations each day across 11 Indian languages, demonstrating large‑scale, practical deployment of sovereign AI. He also mentions extensive usage by NGOs for ground‑level data collection.
EVIDENCE
He cites that “we do more than a million minutes of voice conversation in 11 Indian languages every day” and that “about 20 NGOs… did a crore minutes of calling within a month” to capture real-time voice data [123-127].
MAJOR DISCUSSION POINT
Real‑time multilingual voice conversations handling millions of minutes daily
Argument 10
Content services: digitization, translation, dubbing for books and videos (Vivek Raghavan)
EXPLANATION
Vivek outlines additional Sarvam offerings that use AI to digitize books, translate content, and dub videos, thereby expanding access to information in multiple Indian languages. These services are positioned as part of the broader application layer of the full‑stack strategy.
EVIDENCE
He mentions capabilities for “digitizing books, translating books, dubbing videos” and refers to these as “studio products” that the company provides [128-131].
MAJOR DISCUSSION POINT
Content services: digitization, translation, dubbing for books and videos
Argument 11
Edge‑optimized models, wearable devices, and large‑scale compute delivered at Indian cost (Vivek Raghavan)
EXPLANATION
Vivek describes efforts to shrink models for edge deployment on phones and wearables, including smart glasses, and to build large‑scale compute infrastructure that can be delivered affordably across India. This infrastructure focus is presented as essential for scaling sovereign AI.
EVIDENCE
He notes work on making models “smaller to work on the edge, to work on phones,” the launch of “glasses” to run models on different form factors, and the need for “compute at a large scale… delivered at India scale at India cost” [133-135].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Raghavan describes efforts to shrink models for edge deployment, launch of smart-glass devices, and building large-scale compute infrastructure affordable for Indian deployment [S4].
MAJOR DISCUSSION POINT
Edge‑optimized models, wearable devices, and large‑scale compute delivered at Indian cost
Agreements
Agreement Points
Sovereign AI is essential for India’s autonomy and long‑term strategic advantage
Speakers: Speaker 1, Vivek Raghavan
Sovereign AI necessity (Speaker 1) Long‑term sovereignty outweighs short‑term technical leads (Vivek Raghavan)
Both speakers stress that relying on foreign-trained models is a risk and that building AI capabilities domestically is a strategic necessity. Speaker 1 frames sovereign AI as a “necessity” in a world dominated by English-language models [2], while Vivek emphasizes that “in the long run, it’s sovereignty that matters” and that “sovereignty will always trump technical leads that are short term” [10-13][19-20].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s AI sovereignty strategy-controlling data, building independent infrastructure, and developing domestic talent-is presented as vital for economic growth and strategic autonomy, echoing Prime Minister Modi’s emphasis on avoiding dependence on other countries for AI capabilities and situating AI within broader geopolitical competition for future dominance [S24][S25][S26][S27].
Similar Viewpoints
Both see AI sovereignty as a core, non‑optional capability for India, arguing that short‑term advantages such as the biggest model or fastest chip are fleeting, whereas sovereign AI secures lasting independence and avoids digital colonisation [2][11-13][19-20].
Speakers: Speaker 1, Vivek Raghavan
Sovereign AI necessity (Speaker 1) Long‑term sovereignty outweighs short‑term technical leads (Vivek Raghavan)
Unexpected Consensus
Linking AI sovereignty to inclusive development and multilingual access
Speakers: Speaker 1, Vivek Raghavan
Sovereign AI necessity (Speaker 1) Linguistic diversity as a core strength (Vivek Raghavan)
While Speaker 1’s remarks focus on the strategic necessity of sovereign AI, they do not explicitly mention linguistic inclusion. Vivek, however, ties sovereignty to India’s linguistic diversity, arguing that AI must capture 22 official languages and regional dialects to truly serve the people [28-34]. The convergence of a strategic sovereignty narrative with a multilingual inclusion agenda is an unexpected alignment, linking national security concerns with the “Closing all digital divides” theme.
POLICY CONTEXT (KNOWLEDGE BASE)
The connection between AI sovereignty and inclusive development is highlighted through multilingual AI initiatives that bridge access gaps, and through policy calls to address infrastructure, dataset diversity, and skill development to ensure equitable AI benefits, reflecting multi-stakeholder perspectives on building inclusive knowledge societies [S28][S29][S30].
Overall Assessment

The transcript shows a clear consensus between the opening speaker and the keynote that AI sovereignty is a strategic imperative for India, outweighing short‑term technical races. Both also implicitly connect this sovereignty to broader development goals, especially multilingual inclusion. The agreement is strong and reinforces the need for a domestically built, cost‑effective AI stack.

High consensus on the necessity of sovereign AI, with moderate consensus extending to its role in inclusive, multilingual development. This unified stance strengthens policy arguments for investing in home‑grown AI infrastructure and capacity building.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript shows strong alignment between the opening remarks and the keynote. Both speakers advocate for AI sovereignty and highlight India’s linguistic diversity, cost‑conscious market, and talent pool. The only variation is in emphasis: Speaker 1 presents sovereignty as a strategic necessity, whereas Vivek outlines a concrete, three‑layer implementation strategy. No substantive conflict or opposing viewpoints emerge.

Minimal – the discussion is largely consensual, indicating a unified stance on the importance of sovereign AI and its implementation in India. This cohesion suggests that policy and industry stakeholders are likely to find common ground when shaping AI strategies, reducing barriers to coordinated action.

Partial Agreements
Both speakers share the overarching goal of achieving AI sovereignty for India. Speaker 1 frames this as an essential, non‑optional need given the dominance of English‑trained models [2], while Vivek expands on how to realise it—emphasising that short‑term technical advantages are transitory and that a full‑stack approach (building models from scratch, creating applications, and deploying scalable infrastructure) is required to secure long‑term sovereignty [10-13][19-20][42-46]. The difference lies in the level of detail and the proposed pathway rather than the end goal.
Speakers: Speaker 1, Vivek Raghavan
Sovereign AI necessity (Speaker 1); long‑term sovereignty outweighs short‑term technical leads (Vivek Raghavan); integrated stack: sovereign models, applications, infrastructure (Vivek Raghavan)
Takeaways
Key takeaways
Sovereign AI is a strategic necessity for India; long‑term sovereignty outweighs short‑term technical advantages.
India’s linguistic diversity and large, cost‑conscious market provide unique advantages and massive demand for AI.
Sarvam follows a full‑stack AI strategy that integrates sovereign models, applications, and infrastructure tailored for Indian needs.
Indigenous models, including the SARAS speech model, a 3B vision model, a 32K‑context LLM, and a 105B‑parameter LLM, achieve world‑class performance and often surpass larger global counterparts.
A small, youthful team (≈15 people) demonstrates the depth of talent available in India to build world‑class AI.
Real‑time multilingual voice‑conversation services handle millions of minutes daily; additional content services cover digitization, translation, and dubbing of books and videos.
Infrastructure efforts focus on edge‑optimized models, wearable devices, and large‑scale compute delivered at Indian cost and scale.
Resolutions and action items
None identified
Unresolved issues
None identified
Suggested compromises
None identified
Thought Provoking Comments
Sovereignty will always trump technical leads that are short term. Unless we build these things ourselves, we will become a digital colony dependent on other countries for this core technology.
Frames AI development as a matter of national security and independence rather than just a competitive tech race, shifting the conversation from performance metrics to strategic imperatives.
Sets the thematic foundation for the talk, prompting the audience to view subsequent technical details through the lens of sovereign capability and influencing later emphasis on building models from scratch.
Speaker: Vivek Raghavan
Our unique advantage is actually our diversity – 22 official languages and regional dialects that change every 50 kilometres. AI built in India must capture that diversity to truly understand the voice of the people.
Highlights linguistic diversity as a strategic asset, introducing a novel angle on why indigenous AI development is essential for relevance and inclusivity.
Leads to the introduction of the SARAS speech model and the discussion of native language performance, steering the conversation toward concrete examples of leveraging diversity.
Speaker: Vivek Raghavan
We need to build efficient AI that can be delivered at scale for the people so that the last person in the country can actually have a better experience.
Connects cost‑consciousness with social impact, emphasizing that scalability and affordability are as critical as model size or accuracy.
Shifts the tone from showcasing high‑end models to addressing practical deployment challenges, paving the way for later remarks on edge devices, phones, and glasses.
Speaker: Vivek Raghavan
Rule number one is they are built from scratch. They are not dependent on any other model that is there in the world. But at the same time, the focus is these are world‑class, state‑of‑the‑art models that we have.
Establishes a clear principle of full sovereignty while asserting competitive quality, challenging the assumption that only large, foreign‑trained models can be world‑class.
Introduces the concept of ‘sovereign models’, leading to detailed descriptions of SARAS, vision models, and LLMs, and reinforcing the earlier sovereignty narrative.
Speaker: Vivek Raghavan
Our 105‑billion‑parameter LLM was built by a team of just 15 young people, and it matches or exceeds the performance of many open‑source and closed‑source models of its class.
Demonstrates that world‑leading AI can be achieved with limited resources and talent, challenging the belief that massive teams and budgets are prerequisites for large models.
Inspires confidence in India’s talent pool, transitions the discussion toward the role of youth and talent, and supports the call for more support and investment.
Speaker: Vivek Raghavan
We do more than a million minutes of voice conversation in 11 Indian languages every day using Sarvam, and we have already helped NGOs log a crore minutes of calls in a month.
Provides concrete usage metrics that illustrate real‑world impact, moving the conversation from theory to measurable social benefit.
Validates the earlier claims about applicability, prompting the audience to consider scalability and societal outcomes, and sets up the segue into infrastructure needs.
Speaker: Vivek Raghavan
We are making our models smaller to work on the edge, on phones, and even on glasses, so that we can capture the intelligence and the voice of India at every point.
Links model efficiency with ubiquitous accessibility, introducing the forward‑looking vision of pervasive AI deployment across diverse hardware.
Concludes the talk with a forward‑looking infrastructure theme, reinforcing earlier points about cost‑consciousness and broad reach, and leaves the audience with a tangible future direction.
Speaker: Vivek Raghavan
Overall Assessment

The discussion was driven by a series of strategically placed comments that reframed AI from a purely technical pursuit to a matter of national sovereignty, inclusivity, and practical impact. Vivek Raghavan’s initial assertion about the primacy of sovereignty set the agenda, after which he introduced India’s linguistic diversity as a unique advantage, linking it to concrete model development (SARAS, vision, LLMs). By emphasizing efficiency, youth talent, and real‑world usage statistics, he shifted the conversation from abstract capability to tangible societal benefit and scalability. These pivotal remarks guided the flow from high‑level policy concerns to detailed technical achievements and finally to deployment infrastructure, shaping a cohesive narrative that underscored the necessity and feasibility of building sovereign AI in India.

Follow-up Questions
What strategies and resources are needed to build sovereign AI models in India from scratch without relying on external data or models?
Ensuring AI sovereignty is a core theme; identifying concrete approaches is essential for independent development.
Speaker: Vivek Raghavan
How can AI models be made low‑cost and efficient enough to reach the last person in India, especially given the country’s cost‑conscious nature?
Scalable, affordable AI is crucial for nationwide impact and to avoid digital exclusion.
Speaker: Vivek Raghavan
What methods can capture India’s linguistic diversity (22 official languages and frequent dialect changes) effectively in speech and language models?
Accurate representation of diverse voices is necessary for truly inclusive AI services.
Speaker: Vivek Raghavan
What benchmarks and evaluation metrics should be used to compare Indian language speech, dubbing, and vision models against global competitors?
Objective measurement is needed to validate claims of world‑class performance.
Speaker: Vivek Raghavan
How can AI models be optimized to run on edge devices such as smartphones and smart glasses while maintaining performance?
Edge deployment expands accessibility and reduces reliance on centralized infrastructure.
Speaker: Vivek Raghavan
What large‑scale compute infrastructure is required in India to train and serve AI models at Indian cost and scale?
Sustainable compute resources are a prerequisite for ongoing AI development and deployment.
Speaker: Vivek Raghavan
What support mechanisms (funding, policy, talent pipelines) are needed for other Indian teams to build world‑class sovereign AI models?
Encouraging broader participation will amplify talent and accelerate AI capabilities across the country.
Speaker: Vivek Raghavan
How can the impact of real‑time voice conversation applications be measured across enterprises, government, and NGOs?
Metrics are required to assess effectiveness, adoption, and societal benefit of AI services.
Speaker: Vivek Raghavan
What are the privacy, security, and governance considerations when developing and deploying sovereign AI systems in India?
Ensuring data protection and ethical use is vital for public trust and regulatory compliance.
Speaker: Vivek Raghavan
How can AI be leveraged for large‑scale content digitization, translation, and dubbing of Indian books and media?
These applications can preserve cultural heritage and increase accessibility, but need robust pipelines.
Speaker: Vivek Raghavan
What policies and funding models (e.g., extensions of the India AI Mission grant) are needed to sustain the development of larger and more advanced AI models?
Continued financial support is critical for scaling up model size and capability.
Speaker: Vivek Raghavan
How can talent development programs be expanded to nurture more young engineers capable of building advanced AI models?
Scaling the talent pool will enable larger, more ambitious AI projects in the future.
Speaker: Vivek Raghavan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI


Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel discussed how heterogeneous computing, from devices to edge clouds to data centers, is essential for delivering reliable AI services in India, ensuring the user experience remains invariant to network quality [6-13]. Durga highlighted voice as a natural interface supporting 14 languages and argued that inference must run locally on devices when connectivity is poor, noting that smartphones can now run 10-billion-parameter multimodal models [1-3][12-13]. Rizvi emphasized the environmental impact of AI, pointing out India’s 300 Gen-AI startups and the push to develop sovereign large language models to reduce energy and data dependence [14][15-18].


Arun Shetty identified three major impediments to AI adoption: insufficient power and compute infrastructure, networking bottlenecks, and a lack of high-quality data, and called for fit-for-purpose, secure AI factories at the edge and in data centers [43-55]. He also warned that security and safety are critical because models can hallucinate or be poisoned, requiring visibility across the stack and safeguards against malicious code [46-55]. Dr. Kamakoti reinforced the need for sovereign models and a formal trust framework, arguing that trust is neither reflexive, symmetric nor transitive and must be defined mathematically for heterogeneous systems [46-63].


Gokul described practical edge constraints (memory, connectivity, I/O, thermal limits and power) and stressed the importance of low PUE, air-cooled racks and hybrid energy solutions to enable AI in remote regions [69-77]. He noted that India must leverage its abundant land, water and power resources while adopting liquid cooling for high-density racks and off-grid hybrid power to achieve cost-effective deployment [75-78]. Durga concluded that a hybrid AI approach, combining on-device inference, edge-cloud air-cooled carts, and centralized data centers, can distribute compute across the network without relying solely on liquid cooling [89-103].


The Minister reiterated that policy must ensure the necessary power, water and land infrastructure, framing AI progress as a means to achieve welfare and happiness for all citizens [139-145]. Across the discussion, participants agreed that collaborative effort among technologists, policymakers and ecosystem partners is required to build secure, energy-efficient, and scalable AI infrastructure [44-55][89-103][139-145]. The panel also highlighted the role of edge inferencing in verticals such as education and SMEs, where low-power translation and transcription services can improve accessibility [70-72]. Finally, they called for continued development of policies and standards that address security, data sovereignty, and heterogeneous compute to sustain India’s AI growth [55-63].


Keypoints


Major discussion points


Heterogeneous compute & edge inference are essential for resilient AI experiences.


Durga emphasized the need to run inference locally on devices to keep AI usable regardless of network quality, describing “heterogeneous computers, disaggregation of compute within the network” and the ability to run large models on smartphones or glasses [6-13]. She later outlined a “holistic approach of devices, edge cloud, plus data center” and called Qualcomm’s vision “hybrid AI” [90-103].


Infrastructure constraints (power, compute, networking, and cooling) must be addressed with fit-for-purpose solutions.


Arun Shetty listed the three impediments to AI adoption: power, compute, and networking, noting the shift toward more edge inferencing [43-45]. Gokul added that India faces limits in land, water, and especially power, highlighting the importance of PUE, air-cooled carts, liquid cooling, and hybrid energy to sustain growing demand [72-77].


Security, safety, and trust are critical, requiring sovereign models and robust guardrails.


Shetty warned about model hallucinations, toxicity, and malicious manipulation, stressing visibility across the stack [44-48]. Dr. Kamakoti stressed the need for sovereign models to prevent adversarial attacks and described trust as a non-reflexive, non-symmetric, non-transitive, context-dependent relation that must be mathematically defined [51-63]. Shetty later detailed practical steps such as asset discovery, scanning for vulnerabilities, and applying NIST/MITRE/OWASP controls [104-132].


Environmental and energy-efficiency concerns drive the push for edge AI and hybrid energy solutions.


Rizvi highlighted the “strong environmental aspect” and the finite nature of energy in AI deployments [14-15]. Gokul reinforced this by discussing the power usage effectiveness (PUE) metric, cooling strategies, and the need for off-grid or hybrid renewable solutions to reach underserved regions [72-78].


Policy coordination and ecosystem collaboration are needed to scale AI responsibly.


Rizvi called for policy input on critical infrastructure and public systems [64-65]. Dr. Kamakoti noted that “this is not a problem you can solve alone” and stressed partnership with ecosystem players [53-55]. The Honorable Minister concluded by urging technocrats and policymakers to align on power, water, land, and the broader welfare agenda for AI [139-145].


Overall purpose / goal


The panel aimed to map the technical, infrastructural, security, environmental, and policy challenges of scaling generative AI in India and to outline a coordinated roadmap, centered on heterogeneous compute, edge inference, and trustworthy, energy-efficient systems, that enables widespread, secure AI adoption across industries and public services.


Tone of the discussion


The conversation began with a forward-looking, technical optimism about edge AI and heterogeneous compute. As speakers progressed, the tone shifted to a more cautionary focus on security, safety, and resource constraints, underscored by concerns about energy and trust. By the closing remarks, the tone became collaborative and hopeful, emphasizing partnership between industry and government and a shared commitment to “welfare for all” and responsible AI deployment.


Speakers

Rizvi – role: Panel moderator (Kazim Rizvi); expertise: AI adoption, heterogeneous compute, environmental impact of AI.


Dr. Kamakoti – title: Professor V. Kamakoti; role: Academic expert on AI security and policy; expertise: sovereign AI models, cybersecurity, trust frameworks for AI. [S3][S4]


Arun Shetty – role: Cisco representative (senior leader on AI infrastructure and security); expertise: AI infrastructure constraints, edge inferencing, security and safety of AI models. [S5]


Durga – title: Durga Malladi; role: Qualcomm executive (AI hardware and hybrid AI champion); expertise: distributed compute, edge-cloud, hybrid AI systems, air-cooled AI hardware. [S8][S9]


Honorable Minister – role: Government minister (e.g., Minister of State for Personnel / Electronics & IT); expertise: national AI policy, power, water, land resources, welfare-focused AI agenda. [S10][S11][S12]


Gokul – title: Gokul Subramaniam; role: AI/edge-computing specialist (likely Qualcomm); expertise: edge inferencing, power-efficient AI, vertical-specific models, infrastructure cooling strategies. [S13]


Additional speakers:


Sarah – role: Intel representative (mentioned for handing over gifts); expertise: not specified.


Mr. Vichetti – role: Panel participant (mentioned by the Honorable Minister); expertise: not specified.


Padmasree Awadi – role: Distinguished guest (referred to as “eminent Padmasree Awadi”; likely a mis-transcription of “Padma Shri awardee”); expertise: not specified.


Dr. Kamothi – appears to be a typographical variant of Dr. Kamakoti; already captured above.


Full session report
Comprehensive analysis and detailed insights

The panel convened to map the technical, infrastructural, security and policy challenges of scaling generative AI in India and to outline a coordinated roadmap centred on heterogeneous compute, energy-efficient hardware and trustworthy models. The discussion was led by Durga (Qualcomm), Rizvi (moderator), Arun Shetty (Cisco), Dr Kamakoti (academic) and Gokul (industry), with closing remarks from the Honorable Minister.


Heterogeneous compute and edge inference – Durga opened by stressing that voice-first interfaces in 14 Indian languages must remain usable even when network connectivity fluctuates, which requires the ability to run inference directly on devices rather than relying solely on cloud services [6-13]. She noted that today a smartphone can host a 10-billion-parameter multimodal model and smart-glasses can run sub-billion-parameter models, allowing inference to be performed locally whenever connectivity permits [12-13]. Building on this, she described a “holistic approach” that distributes workloads across on-device inference, edge-cloud resources (including air-cooled server carts) and central data-centres, a strategy Qualcomm brands as “Hybrid AI” and argues is essential for keeping the user experience invariant to network quality [90-103].
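The tiered on-device/edge/data-centre routing Durga describes can be sketched as a simple policy. This is an illustrative sketch only: the ~10-billion-parameter-on-smartphone and sub-1-billion-on-glasses figures come from the talk, while the tier names, the edge-cloud capacity, and the `route` helper are invented for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    max_params_b: float  # largest model (billions of parameters) the tier can host

# Capacities for glasses (~1B) and smartphone (~10B) follow the talk;
# the edge-cloud figure is a made-up placeholder.
TIERS = [
    Tier("glasses", 1.0),
    Tier("smartphone", 10.0),
    Tier("edge-cloud", 70.0),
    Tier("data-center", float("inf")),
]

NETWORK_TIERS = {"edge-cloud", "data-center"}

def route(model_params_b: float, connectivity_ok: bool) -> str:
    """Pick the smallest tier that fits the model; skip network-dependent
    tiers when connectivity is unavailable, so the experience stays
    invariant to network quality whenever an on-device tier suffices."""
    for tier in TIERS:
        if model_params_b <= tier.max_params_b:
            if tier.name in NETWORK_TIERS and not connectivity_ok:
                continue  # cannot reach this tier offline
            return tier.name
    return "unavailable"
```

A small model routes to glasses or phone even offline, while a 30B model needs the edge cloud and is simply unavailable without a network, which is the trade-off the hybrid-AI argument is about.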


Infrastructure, power and cooling – Arun Shetty identified three core impediments to AI adoption: insufficient power, limited compute capacity and networking bottlenecks, and argued that future AI workloads will shift increasingly toward edge inference to alleviate data-centre pressure [43-45]. Gokul expanded on the power-related side of the equation, explaining that India’s growth is bounded by land, water and especially electricity, and that low-PUE designs (Power Usage Effectiveness approaches 1.0 as facility overhead shrinks), air-cooled racks (effective up to ~25 kW) and, for higher densities, liquid-cooling solutions are required to sustain large-parameter models [69-77][75-76]. Durga reinforced the feasibility of air-cooled hardware, stating that many edge-cloud workloads (including large models) can be run on air-cooled carts, while Gokul cautioned that air cooling is efficient only up to about 25 kW and that higher-density racks (>100 kW) will require liquid cooling [93-97].
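The PUE metric and the cooling thresholds discussed here can be made concrete with a short sketch. PUE is total facility power divided by IT equipment power, so values closer to 1.0 mean less overhead; the ~25 kW air-cooling limit and >100 kW liquid-cooling figure are from the discussion, while the `cooling_for_rack` helper and its middle "liquid-assisted" band are assumptions added for illustration.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; lower values mean less cooling and
    distribution overhead per unit of useful compute."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

def cooling_for_rack(rack_kw: float) -> str:
    """Choose a cooling approach from the thresholds mentioned in the
    session: air cooling up to ~25 kW per rack, liquid cooling for very
    dense racks (>100 kW). The middle band is a judgment call; defaulting
    it to liquid-assisted cooling is an assumption, not from the panel."""
    if rack_kw <= 25:
        return "air"
    if rack_kw <= 100:
        return "liquid-assisted"
    return "liquid"
```

For example, a site drawing 1.3 MW overall to power 1.0 MW of IT load has a PUE of 1.3, i.e. 30% overhead on top of the compute itself.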


Security, safety and trust – Shetty warned that AI models can hallucinate, generate toxic output or be poisoned, and highlighted three security concerns: hallucination/toxicity, model poisoning and “shadow-AI” asset discovery, calling for visibility across the stack and the application of NIST, MITRE and OWASP controls [44-48][112-119][124-129]. Dr Kamakoti argued that sovereign, tamper-proof models are essential to prevent adversarial attacks and that trust must be formalised mathematically – it is neither reflexive, symmetric nor transitive, but context-dependent and temporal, requiring a new “mathematics of trusted AI” [51-63]. He emphasized that these safeguards are especially critical for education-sector applications, where poisoning of training data could compromise models used for teaching [52-54]. Rizvi linked these technical safeguards to policy, urging the development of trust frameworks and regulatory guidance to embed security into AI deployments [45][64-66].
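Dr Kamakoti’s claim that trust is neither reflexive, symmetric, nor transitive can be illustrated with a toy relation. The components and trust pairs below are invented purely to show how each property fails; they are not from the session.

```python
# A toy trust relation among components of a heterogeneous AI stack
# (hypothetical names, for illustration only).
trusts = {
    ("app", "edge_model"),
    ("edge_model", "cloud_model"),
    # ("app", "cloud_model") deliberately absent -> not transitive
    # ("edge_model", "app") absent            -> not symmetric
    # no ("x", "x") pairs                     -> not reflexive
}

def is_reflexive(rel, domain):
    # Every element would need to trust itself.
    return all((x, x) in rel for x in domain)

def is_symmetric(rel):
    # Every trust edge would need a reverse edge.
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel):
    # a trusts b and b trusts d would need to imply a trusts d.
    return all((a, d) in rel
               for (a, b) in rel
               for (c, d) in rel
               if b == c)

domain = {"app", "edge_model", "cloud_model"}
```

Running the three checks on this relation shows all three properties failing, which is the formal point behind the call for a context-dependent "mathematics of trusted AI".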


Data gap and sovereign models – Arun Shetty described a “data gap” in which high-quality, accessible datasets are the fuel for AI; without them, building domain-specific “machine-GPTs” is impossible [46-55]. Dr Kamakoti echoed the need for sovereign models, noting that control over the data pipeline is a prerequisite for trustworthy AI [50-51]. Rizvi highlighted India’s vibrant ecosystem of roughly 300 Gen-AI startups that are building on large-language models, and stressed the national push to develop sovereign LLMs to reduce dependence on external data and energy resources [14-18].


Policy, governance and welfare – Rizvi repeatedly called for policy input on critical infrastructure and public systems, asking how heterogeneous compute can bolster national resilience of critical/public systems; Dr Kamakoti responded that security, sovereign models and coordinated policy are essential to protect these systems [64-66]. The Honorable Minister framed AI progress as a matter of providing the essential resources of power, water and land, positioning AI as a tool for “welfare for all” and national happiness [139-145]. Both the moderator and the Minister agreed that coordinated government-industry action is required to create the enabling environment for AI while safeguarding societal goals.


Moderator’s two-year planning note – Rizvi emphasized that AI planning should adopt a two-year horizon and asked Durga and Arun Shetty to outline the expected enterprise outcomes in terms of scale, cost-efficiency and energy-efficiency [45].


Areas of agreement – All speakers concurred that AI workloads must be spread across a continuum of devices, edge inference and data-centres to keep latency low and experience stable [89-103][43-44][104-108][46-48]; that power, compute and networking are the dominant bottlenecks and that energy-efficient hardware (air-cooled carts, low-PUE designs) and hybrid energy sources are vital [43-45][73-77][93-97]; and that robust security, sovereign models and formal trust frameworks are indispensable for safe AI adoption [112-119][55-62][45][64-66].


Points of disagreement – Durga argued that many edge-cloud workloads (including large models) can be run on air-cooled carts without liquid cooling, whereas Gokul cautioned that air cooling is efficient only up to about 25 kW and that higher-density racks (>100 kW) will require liquid cooling [93-97][75-76]. A second tension emerged around the optimal locus of inference: Durga advocated on-device execution whenever possible to guarantee network-invariant experience [12-13], while Shetty emphasized fit-for-purpose edge inference as the primary means of reducing data-centre load, without insisting on pervasive on-device processing [43-44][104-108]. Finally, the panel differed on the pathway to secure AI: Dr Kamakoti promoted a theoretical trust model and sovereign architectures, Shetty stressed operational guardrails and asset discovery, and Rizvi called for policy-driven frameworks, reflecting complementary but distinct emphases [55-62][112-129][45][64-66].


Conclusions and way forward – The panel agreed on a set of immediate actions: Cisco will partner with ecosystem players to build a secure “AI factory” that supports both edge and data-centre deployments; stakeholders will pursue fit-for-purpose heterogeneous compute that balances on-device, edge-cloud and central compute; sovereign LLMs and verification frameworks will be accelerated; asset-discovery and scanning tools will be deployed to mitigate shadow AI; and hybrid cooling and energy solutions will be explored to improve PUE while keeping costs low. Unresolved issues include defining national timelines for heterogeneous compute roll-out, standardising trust certifications, establishing concrete policy instruments for power, water and land provisioning, and creating mechanisms to close the data gap across public and private sectors. The discussion closed on an optimistic note, with the Minister urging that these coordinated technical and policy efforts should ultimately deliver AI-driven welfare and happiness for all citizens.


All participants agreed that a coordinated, multi-layered approach, combining on-device, edge, and data-center compute, energy-efficient hardware, sovereign data, and robust trust frameworks, is essential for India’s responsible AI rollout.


Session transcript
Complete transcript of the session
Durga

with them. 14 languages. Voice is the most natural user interface to devices around you. So the idea is not to actually keep typing and texting, but it’s about the usage of voice, but in native languages, which actually work very nicely. And that means that you have to make sure that the use cases are built on top of it. So that’s what our focus is from a processor standpoint. One final note, and given that I have maybe just one minute, another aspect of heterogeneous computers, disaggregation of compute within the network itself. What I mean by that is, at some point in time, you might have extremely good connectivity to the network. And at some other point in time, you might have zero connectivity to the network.

And the question to ask is, do you want your AI user experience to be invariant to the quality of the communications that you have at that point in time? Or do you want it to depend on it? Obviously, you want it to be invariant. That means you must have the ability to run inference directly on devices. Not that you want to do it all the time, but when you can, why not? Today we can run up to a 10 billion parameter model, a multimodal model, state of the art, on a smartphone, and a sub-1 billion parameter model in your glasses, without necessarily charging a device the whole day; it’s once every 24 hours. So we’ve come a long way in that. Which means: use the data centers, use the edge cloud as and when necessary, they have a role to play. At the same time, make sure that we also build for devices where the inference actually occurs and users directly perceive it. That’s where the data originates, so it’s important to think about it that way.

Rizvi

Yeah, there’s also a very strong environmental aspect to this, which often goes unnoticed and undiscussed, but that element is also very important in terms of efficiently managing the energy requirements, because energy, as we also know, is finite. One thing that struck me in what you spoke about was inferencing. A lot of what’s happening in India is also around inferencing models, right? So, I mean, in terms of the Gen AI story, we have almost 300 Gen AI startups which are building on top of the large language models.

And India is definitely leading the way in terms of the application layer; there’s no doubt about that. Now, of course, with Sarvam and others, we are also building sovereign large language models, right? So, as Minister Vaishnav has spoken about, every piece of the puzzle, we are there in terms of fitting that puzzle together. I’d like to come to Mr. Arun Shetty, sir, who is with Cisco. We just want to take it further from where Durga sir left off, in terms of talking about enterprise adoption at scale. And with Cisco, what are the challenges and bottlenecks which you see in terms of compute availability and connectivity, which Cisco is trying to address, and which you see generally?

And I think that’s a really important thing to talk about.

Arun Shetty

Yeah, so as you know, we connect and protect the… This should be working, right? Yeah, yeah, yeah. As you know, we connect and protect even in the AI era, right? We started in the internet, we came into the cloud, and we are in this era. First of all, thank you very much for having me, and it’s indeed a pleasure to be representing this esteemed panel. So what I’ll do is summarize based on what others have spoken, and I think those are real problems. There are clearly three impediments to AI adoption. The first one is clearly infrastructure constraints, and we all spoke about it.

The first one is the power. Power is a challenge, will be a challenge; I think, you see, the expectation is it will be 63 gigawatts of power in a couple of years, what they require. And then the compute is a problem; we did recognize that compute is becoming a problem. And then Kamakoti sir did tell, Cisco is in networking, what are you doing in networking? And networking will be a problem, actually, and we need to see how we address it. And clearly it has to be fit-for-purpose solutions, because you not only do huge data centers; I think what we see is, in a couple of years, you will see there is more inferencing happening at the edge. That’s how the world will move, and that’s why solutions have to be fit for purpose, for sure.

The second bigger challenge we have is the security and the safety aspect. That is something we need to pay a lot of attention to, because, as the adage says, what you can’t see, you can’t trust, right? You can’t trust something you can’t see. So you need to have visibility across the stack, and you also need to see whether the models we are using are the right models for us, or whether there is anything malicious in the models themselves, vulnerabilities in the model. So the security and safety aspect becomes very, very important, because the models hallucinate, and you can inject toxicity into the model. Those are the challenges we need to address. And I think it is very, very important to build our own models. If you look at the models, all of them were built using public data, which was text, voice, and video data. However, the enterprises and the government have the best data sets, so why can’t we use those data sets?

So the third impediment we have today is the data gap, and the data gap is essentially: I need high-quality, accessible, and manageable data, and we can build GPTs using that, what we can call a machine GPT. Use that for training, use that for inferencing, and we get a lot of quality use of AI. Without data, which is the fuel for AI today, you can’t really move forward on AI. I think these are the typical three problems.

The ways we are looking at addressing this: clearly, one is, I will not be able to build a huge data center for a specific use case, so take a use case and then see how fast I can give that infrastructure, a comprehensive secure AI factory or a secure infrastructure, whether it is in the data center or at the edge, so that people can focus on building the use cases or the applications on top of it. The second thing is the safety and the security aspect of it, and how we can do the defense mechanism. And the third one is the data. So these are the three problems Cisco is trying to address, along with the ecosystem partners, of course, because this is not a problem you can solve alone. Thank you.

Rizvi

Yeah, I think — I don't know if my mic is okay. It's okay, yeah. I'll take off from the security point you have spoken about, and I'll come back to Dr. Kamakoti. On the clock it shows seven, but on my watch it shows 15, so I'll go by my watch. Dr. Kamakoti, I would like to focus on critical infrastructure and public systems here. As you know, with the advent of AI, we are going to use it across these sectors as well. How important do you see heterogeneous compute as being in terms of contributing to national resilience, to safeguarding, and to ensuring that our critical infrastructure and public systems are secure?

Dr. Kamakoti

So today, the type of things we need to do for each of these actions — the type of inferencing, the type of response time we need — is, as Shetty mentioned, going to be different. I hope all of you have seen Yes, Prime Minister; they always say "need to know," right? Now, if I am going to build a model that has absorbed the entire data, then whoever uses that model — do they need to know that data? That is a very important question. That is where the entire aspect of cybersecurity comes in, and that is why we are all saying we need to have sovereign models.

As he rightly pointed out, we can have adversarial AI; someone can poison the whole thing and make it tell things that should not be told, or need not be told. This is something we need to look at very carefully from a security point of view, where I do inferencing and my training data set goes for a toss. That is number one. For education at least — as the director of one of the premier institutions in the country, my worry is this: just as we have a censor board for movies, we should build models for education into which only certain details are fed. A model is like a child: whatever you teach it, it will tell you back, probably a little more generative on top of that. That is number one. Number two, coming back to Cisco itself: you do deep packet inspection, and today you do it with signatures. The whole story is changing dynamically; malware can change its signature, and that is going to be the biggest challenge. What sort of inferencing will they do? They will have to bring a different architecture, and that will be a heterogeneous architecture. So ultimately, as you see, there is the trust component. I always repeat this, and I will finish with this in my one minute.

So, trust. You know, friends, if you want to define A as equivalent to B, that is a definition, right? If you want to define A, you have to come up with a B that is equivalent to A. In discrete mathematics, an equivalence relation should satisfy three properties: reflexive, symmetric, transitive. Trust is not reflexive: I don't trust myself sometimes. Trust is not symmetric: I trust Sarah; Sarah may not trust me. Trust is not transitive: I trust Gokul, Gokul trusts you, but I may not trust you. In addition, trust is context dependent: I trust you on something, I don't trust you on something else. And it is temporal: in the morning I trust you, in the evening I don't.
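Dr. Kamakoti's three counterexamples can be checked mechanically. A minimal sketch — the names and trust pairs below are illustrative, not taken from the panel — models trust as a set of (truster, trustee) pairs and tests the three equivalence-relation properties:

```python
# Illustrative trust relation: Kamakoti trusts Sarah and Gokul; Gokul trusts Rizvi.
trust = {("Kamakoti", "Sarah"), ("Kamakoti", "Gokul"), ("Gokul", "Rizvi")}
people = {p for pair in trust for p in pair}

def is_reflexive(rel, domain):
    # Every x must trust itself.
    return all((x, x) in rel for x in domain)

def is_symmetric(rel):
    # If a trusts b, b must trust a.
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel):
    # If a trusts b and b trusts c, a must trust c.
    return all((a, d) in rel
               for (a, b) in rel for (c, d) in rel if b == c)

print(is_reflexive(trust, people))  # False: nobody here trusts themselves
print(is_symmetric(trust))          # False: Sarah does not trust Kamakoti back
print(is_transitive(trust))         # False: Kamakoti→Gokul→Rizvi, no Kamakoti→Rizvi
```

All three checks fail for this relation, which is exactly his point: trust does not behave like mathematical equivalence, so a different formalism is needed.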

So the main thing is, we have to build that mathematics of trusted AI. If you go to some of these search engines and type "define trust," you get a million hits for it. That is going to be the most important part. Specifically on heterogeneous compute, we will have certain different types of security issues — something which A can cause, something which originates because of A — and that is where all of us, edge, connectivity and server, all three players, have to work together. We will teach, and he will put the policy in place.

Rizvi

But both of you are playing an equally important role in terms of policy. Dr. Kamakoti, you are also a very influential and important figure in India's AI policies, and of course there is a lot to learn from you. Gokul, very quickly I would like to come to you: in terms of practical deployment models, what examples have you seen which demonstrate that we are moving towards heterogeneous compute, and what needs to be done to get there?

Gokul

So I started off with workload, and I will go back to the same thing. One of the things we are looking at — and it is critical — is to see which vertical really needs what kind of domain-specific models, and then to apply that as much as possible through edge inferencing, containing the walls that prevent AI from working efficiently: primarily memory, connectivity, I/O, thermals and power. From an edge-inferencing standpoint, there are quite a few things being done — be it in the education segment, where you want more translation, data availability and transcription — so that knowledge is imparted with the right data, at the lowest power that is meaningful for the student.

And more importantly, when we talk security, it is not only about protecting data and models — we keep talking about data and models — it is about protecting the user, which is even more fundamental, and about how you can ensure that happens. The second thing is applying it to other verticals, be it small and medium business; I think there is a great opportunity there, where edge inferencing and putting compute in with the right kind of power can translate into businesses actually using AI more effectively.

The last aspect I want to touch upon is power. As we go from one gigawatt to nine or ten gigawatts in the next five years in this country, we have to realize that India is constrained by three physical things we cannot run away from: land, water and power. These are very important aspects that will drive how we set up our infrastructure. Of the total energy that comes into a data center, roughly forty percent goes into cooling, forty percent into compute and twenty percent into connectivity. And there is this famous metric we use, the PUE — power usage effectiveness.

It has to be as close to one as possible: all the power you supply should go to the most important thing, which is the compute, not to cooling and the rest. There are a lot of technologies being explored around how much you can air-cool per rack; that was fine up to about 25 kilowatts, but as you start to get to 100, you have to use liquid cooling, and then we have to work out how to set that infrastructure up. For a country like India, it is absolutely important to look at hybrid energy solutions, because pure renewables alone may not be able to address it. You will have to have something that is stable, and be able to do something off-grid, so that you reduce the dependency on the data centers and push as much as possible to the edge — because edge is all about reach.
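The PUE arithmetic above can be sketched directly. This assumes the 40/40/20 split Gokul quotes and treats compute plus connectivity as the IT load — a simplification; strictly, PUE divides total facility power by IT equipment power:

```python
def pue(total_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.
    The ideal is 1.0, i.e. all power reaches the IT equipment."""
    return total_kw / it_kw

# Illustrative 100 kW facility with the 40/40/20 split quoted above.
total = 100.0
cooling, compute, connectivity = 40.0, 40.0, 20.0
it_load = compute + connectivity      # simplifying assumption: IT = compute + networking
print(round(pue(total, it_load), 2))  # → 1.67
```

Under that split, 40% of every watt is lost to cooling, giving a PUE near 1.67 — well above the ideal of 1.0, which is why cooling efficiency dominates the infrastructure discussion.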

How can I take it to places across the country where there is no access to connectivity? It is about how we can leapfrog — leapfrog with verticals that have not used technology as much. We have always leapfrogged in India, and this is a great moment for us. That, and total cost of ownership: those are the big areas.

Rizvi

Thank you, Gokul. As we approach the end of the panel, I would like to go to Durga and Dr. Shetty for closing remarks and the way forward. To both of you, I will pose this question in terms of the next two to four years, because in the AI age we don't think too far ahead; we can't do five-year or ten-year planning, and I think two-year planning is sufficient. What enterprise outcomes are you both looking at? Maybe we can start with Durga: in terms of defining India's access to compute, access to infrastructure and capacity, and also building in scale, cost efficiency and energy efficiency.

Durga

So I'll keep it brief. What I am looking forward to — with all the conversations here, and in other parts of the world where the problems are somewhat similar — is the ability to distribute compute across the entire network. So think of a combination: inference that runs on devices to the largest extent possible; then edge cloud and on-prem servers, where a lot of the localized processing can be done. And these can be done with air-cooled carts, by the way — the point made earlier is absolutely relevant. You don't necessarily need liquid cooling all the time. You can use air-cooled carts and air-cooled servers and run models of up to 100 to 300 billion parameters, which are getting pretty sophisticated.

That's the edge cloud. As you go deeper from there, you have the data centers. This mitigates the overall requirements of what you need in a data center: instead of concentrating the entire compute in one single location and building for just that alone, a holistic approach of devices plus edge cloud plus data center is probably what we are looking forward to. At Qualcomm, we call it hybrid AI. It's not just a marketing slogan; it is something we truly believe in. Thank you.
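As a rough back-of-the-envelope check on why 100-300 billion parameter models are plausible outside liquid-cooled data centers: weight memory scales linearly with parameter count and numeric precision, and quantization shrinks it proportionally. The function below is an illustrative estimate only — it ignores activations, KV cache and serving overhead:

```python
def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Rough weight-memory estimate: parameter count x bytes per parameter.
    Ignores activations, KV cache and runtime overhead."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# Weight footprints at common precisions (fp16 = 2 bytes, int8 = 1, int4 = 0.5).
for params in (100, 300):
    for precision, nbytes in (("fp16", 2), ("int8", 1), ("int4", 0.5)):
        print(f"{params}B @ {precision}: ~{model_memory_gb(params, nbytes):.0f} GB")
```

At int4, a 300-billion-parameter model needs roughly 150 GB for weights — within reach of a multi-accelerator air-cooled server, which is consistent with Durga's claim that liquid cooling is not always required at the edge cloud.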

Arun Shetty

Since the infrastructure part has been addressed here, let me talk a little more about the safety and security aspects. One of the things we need to understand about these modern models is that they are very intricate and very complex. They are also non-deterministic: given the same input, the output will not necessarily be the same, unlike a standard application. So what should one be doing? There are two aspects — safety and security — and I will just touch upon why it is important to distinguish them. Safety is about cases where we want the model to work in a certain way, but it does not work that way.

That is the first part; that is where toxicity, hallucination and all those challenges come in. The second part is security, where a bad actor from outside can change the behavior of the model. We need to be careful about both. So what should one be doing? For example, I think Kamakoti sir also said that the users themselves need to be secure, right? It is essential that organizations, or the country, build that. Which means that if I am accessing ChatGPT and sending some confidential information, the system should stop me. When I am accessing a third-party application, the system should be smart enough to stop me and say: you cannot share that information; it is not allowed for you to share it.

That is something which is already happening in organizations today. The second part is the first-party application: I am building an application, and I am using a model. Now the organization should be able to scan what all its AI assets are, because one of the biggest challenges for enterprises is shadow AI applications — they don't know what people are doing. So I need to clearly know what all my assets are. That is number one: I detect, or discover, all my assets. Next, I should scan them and ensure that the models and applications I am using are not vulnerable. If they are vulnerable, then I need to put guardrails around them or fix those problems.

And similarly, organizations like NIST, MITRE and OWASP are already telling us that there are a lot of risks associated with this, and we need to ensure that we stop them. That is Cisco's focus: to see how we can use AI to defend against all this malice and the vulnerabilities we see. Thank you so much.

Rizvi

I think with this, we will close the panel, but I would like to invite the Honorable Minister once again for his very quick closing remarks. You have heard us over the last one hour, and you have kept us highly motivated to build on this. What are your thoughts? We would love to hear your closing address.

Honorable Minister

Thank you, Rizvi. In fact, it is a great pleasure to be here with the eminent Padma Shri awardee Professor Kamakoti, and with Gokul, Durga Prasad and Mr. Shetty, sharing their truly professional experience — and to hear how, as a policymaker, we should view things, especially in terms of power, electricity, water and land, and how we should be well equipped to provide all these things that the eminent panelists here would be thinking of putting in place. The primary challenge they have posed before me is to try to provide all of this; we are here to provide the rest. And in fact, thanks once again for a very apt introduction and a very apt dialogue here.

Ultimately, all of us — me as a policymaker, and all of you technocrats and innovators — have to keep in mind that the basic agenda for this AI Impact Summit is welfare for all, happiness for all. Thank you for inviting me. Thank you so much.

Rizvi

With this, we will have to close the panel. I'd like to thank all our panelists, and also invite our colleague Sarah from Intel to hand over the gifts. But first we'll have a group photo. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (18)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“The panel was organized by Durga (Qualcomm), Rizvi (moderator), Arun Shetty (Cisco), Dr Kamakoti (academic) and Gokul (industry), with closing remarks from the Honorable Minister.”

The knowledge base confirms the existence of a panel on heterogeneous computing in India featuring Durga Malladi from Qualcomm and moderated by Kazim Rizvi, and that it gathered industry, academia and government experts [S4].

Confirmed (high)

“A smartphone can host a 10‑billion‑parameter multimodal model and smart‑glasses can run sub‑billion‑parameter models, allowing inference to be performed locally.”

Durga’s statement is corroborated by the source noting that premium smartphones, AR glasses and PCs are capable of running models with billions of parameters locally, eliminating the need for constant cloud access [S9].

Confirmed (high)

“Arun Shetty identified three core impediments to AI adoption: insufficient power, limited compute capacity and networking bottlenecks, and argued that future AI workloads will shift increasingly toward edge inference to alleviate data‑centre pressure.”

Cisco’s commentary in the knowledge base describes the same three infrastructure constraints (power, compute and network bandwidth) and highlights the need to move workloads toward the edge to reduce data-centre load [S59] and [S47].

Additional Context (medium)

“Qualcomm’s “Hybrid AI” strategy distributes workloads across on‑device inference, edge‑cloud resources (including air‑cooled server carts) and central data‑centres, and is essential for keeping the user experience invariant to network quality.”

The knowledge base discusses Qualcomm’s focus on a distributed AI fabric that spans edge to cloud, but does not use the specific “Hybrid AI” branding; it provides broader context on the edge-to-cloud approach [S14] and the heterogeneous compute paradigm [S1].

Additional Context (medium)

“India’s AI growth is bounded by land, water and especially electricity, requiring high Power‑Usage‑Efficiency designs, air‑cooled racks (effective up to ~25 kW) and liquid‑cooling for higher densities.”

The knowledge base highlights massive upcoming energy demands for AI in India and the need for advanced cooling solutions, though it does not specify the 25 kW air-cooling limit; it adds quantitative expectations of a 4× increase in energy use over the next decade [S57] and [S70].

Additional Context (medium)

“Dr Kamakoti argued that trust in AI must be formalised mathematically – it is neither reflexive, symmetric nor transitive, but context‑dependent and temporal, requiring a new “mathematics of trusted AI”.”

The source on heterogeneous compute mentions a theoretical foundation for understanding trust and security in AI systems, supporting the notion that new formal models are being explored, though it does not detail the specific properties cited [S1].

External Sources (74)
S1
S2
ASEF OUTLOOK REPORT 2016/2017 — 64 Rizvi and Lingard, 2010
S3
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — – Prof. V. Kamakoti- Arun Shetty
S4
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Speakers:Arun Shetty, Prof. V. Kamakoti Speakers:Arun Shetty, Prof. V. Kamakoti, Gokul Subramaniam Speakers:Arun Shett…
S5
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — – Arun Shetty- Gokul Subramaniam – Durga Malladi- Arun Shetty – Gokul Subramaniam- Arun Shetty
S6
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Speakers:Arun Shetty, Gokul Subramaniam Speakers:Arun Shetty, Gokul Subramaniam, Kazim Rizvi, Sridhar Babu Speakers:Ar…
S7
Subrata K. Mitra Jivanta Schottli Markus Pauli — In this chapter, we look at major foreign policy events during Indira Gandhi’s terms as prime minister (1966-…
S8
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — – Durga Malladi- Gokul Subramaniam
S9
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — – Praveer Kochhar- Durga Maladi – Ritukar Vijay- Durga Maladi
S10
Building the Workforce_ AI for Viksit Bharat 2047 — -Dr. Jitendra Singh- Role/Title: Honorable Minister, Minister of State for Personnel, Minister of State for Personal Gri…
S11
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S12
Announcement of New Delhi Frontier AI Commitments — -Shri Ashwini Vaishnaw: Role/Title: Honorable Minister for Electronics and Information Technology, Area of expertise: El…
S13
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — – Arun Shetty- Gokul Subramaniam – Gokul Subramaniam- Arun Shetty
S14
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Evidence:Cloud is powerful for training foundational models, on-premises servers with air-cooled cards can run 100-300 b…
S15
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Good evening, distinguished guests. Welcome to the session on powering AI. As AI scales at speed, so does its infrastruc…
S16
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Sharma identifies compute resources and research talent as the main barriers, suggesting regulatory issues are less sign…
S17
Agents of Change AI for Government Services &amp; Climate Resilience — “…they can hallucinate it can have bias, it can have toxicity, avoid all of that and they are unpredictable ultimately…
S18
WS #279 AI: Guardian for Critical Infrastructure in Developing World — Daniel Lohrman: Yeah, but I cannot, the video is not started, so I don’t know if you can see me, but I can certainly s…
S19
Emerging Shadows: Unmasking Cyber Threats of Generative AI — Data poisoning and technology evolution have emerged as significant concerns in the field of cybersecurity. Data poisoni…
S20
Internet Governance Forum 2024 — However, the session also discussed the inherent risks associated with AI systems. Daniel Lohrmann emphasized concerns s…
S21
Designing Indias Digital Future AI at the Core 6G at the Edge — Power consumption concerns are driving data centers toward edge deployment Roy emphasizes that infrastructure challenge…
S22
Efforts to improve energy efficiency in high-performance computing for a Sustainable Future — The demand for high-performance computing (HPC) has surged due to technological advancements like machine learning, geno…
S23
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Poort raised concerns about potential political pushback: “Strong pushback against AI regulation may affect AI policy re…
S24
Agentic AI in Focus Opportunities Risks and Governance — So understanding that risk picture is going to be critically important. And last, I think that really pivots into one of…
S25
Panel Discussion Data Sovereignty India AI Impact Summit — Arguments:Government must establish sovereign guardrails and provide long-term policy stability for private investment T…
S26
Supply Chain Fortification: Safeguarding the Cyber Resilience of the Global Supply Chain — Business models that leverage emerging technologies, like robotics and drones for packaging and delivery, have the poten…
S27
Designing Indias Digital Future AI at the Core 6G at the Edge — Summary:Roy emphasizes that infrastructure challenges, particularly power consumption and site requirements, are the mai…
S28
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S29
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues crea…
S30
Setting the Rules_ Global AI Standards for Growth and Governance — Esther Tetruashvily responded by describing OpenAI’s efforts to evaluate model performance across various languages and …
S31
Safe and Responsible AI at Scale Practical Pathways — But if the resource, you know, the use is commercial, then, of course, there is a system. There is a policy for it. And …
S32
AI and Data Driving India’s Energy Transformation for Climate Solutions — Coordination and Collaboration at Scale: Multiple speakers highlighted the critical need for coordination across stakeho…
S33
AI for agriculture Scaling Intelegence for food and climate resiliance — The policy adopts a government‑led, ecosystem‑driven approach to foster AI solutions for agriculture across Maharashtra….
S34
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — “You can do air -cooled carts and then just use air -cooled servers and running up to 100 to 300 billion parameter model…
S35
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Use air-cooled servers for edge cloud deployments (100-300 billion parameter models) while reserving liquid cooling for …
S36
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S37
The AI Governance Alliance of the World Economic Forum unveiled the Presidio AI Framework — The AI Governance Alliance of the World Economic Forum (WEF) unveiled the ‘Presidio AI Framework’ as part of its AI Gove…
S38
How nonprofits are using AI-based innovations to scale their impact — The practical implementation of these principles included the development of guardrails and evaluation frameworks. Parti…
S39
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Explanation:Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes buildi…
S40
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Good evening, distinguished guests. Welcome to the session on powering AI. As AI scales at speed, so does its infrastruc…
S41
Building Sovereign and Responsible AI Beyond Proof of Concepts — So this example is with a public health company. They’re using AI to read different x -rays and radiology scans. And the…
S42
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S43
Designing Indias Digital Future AI at the Core 6G at the Edge — The convergence of AI and 6G will create a distributed computing fabric that extends far beyond traditional network boun…
S44
Artificial intelligence (AI) – UN Security Council — Moreover, these companies often collaborate with academic institutions, governments, and other stakeholders to create a …
S45
AI for Social Empowerment_ Driving Change and Inclusion — Arguments:Urgent need for comprehensive policy responses including competition policy, tax policy, labor law reforms, an…
S46
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — The conversation highlighted government policy’s critical importance in creating enabling environments through legal ref…
S47
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Infrastructure LimitationsThe first constraint involves infrastructure limitations, which Patel described as “oxygen for…
S48
Climate change and Technology implementation | IGF 2023 WS #570 — Furthermore, the worldwide Cloud infrastructure for apps adds to the energy demands. Cloud servers, responsible for host…
S49
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — “I’m asking a suggestion from you, so like what model should, like someone who’s creating such solution for voice and tr…
S50
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — Evidence:Let’s say we are all using agents and you’re going to pick the agents that you like and the agents to be useful…
S51
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Artificial intelligence | Information and communication technologies for development Durga argues that AI applications …
S52
AI for Good Technology That Empowers People — This comment introduces the critical concept that edge AI isn’t just about convenience or efficiency—it’s about safety a…
S53
AI for Good Technology That Empowers People — Lall argues that the convergence of communication, compute, and control at the edge is crucial for applications like hap…
S54
AI for Good Technology That Empowers People — And that’s how we got to where we are today. to people who didn’t even have internet in some part of Sri Lanka. And that…
S55
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Summary:There is unanimous agreement that power and energy constraints represent fundamental challenges that must be add…
S56
WS #305 Financing Self Sustaining Community Connectivity Solutions — Power management and renewable energy access are critical infrastructure needs that must be addressed alongside connecti…
S57
Shaping the Future AI Strategies for Jobs and Economic Development — Several speakers addressed the unique needs of emerging economies, particularly the 70 million MSMEs in India that emplo…
S58
Driving Indias AI Future Growth Innovation and Impact — In the Indian context, as the audience is aware, we had a lot of catching up to do. And it’s fair to say that a lot of w…
S59
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Patel highlights a lack of trust in AI systems due to safety and security concerns. He stresses the need for runtime gua…
S60
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Personal sovereign edge models are needed to ensure privacy and security without cloud dependency
S61
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S62
Is AI the key to nuclear renaissance? — In the global race for AI dominance, tech giants spare no effort in securing the necessary energy resources. However, th…
S63
White House eyes clean energy for AI expansion — A new task force has beenlaunchedby the White House to address the growing demands of AI infrastructure. Led by the Nati…
S64
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S65
Safe and Responsible AI at Scale Practical Pathways — And I’ll also work on it. So what are we talking about? We are talking about data generated from different sources, be i…
S66
Setting the Rules_ Global AI Standards for Growth and Governance — Esther Tetruashvily responded by describing OpenAI’s efforts to evaluate model performance across various languages and …
S67
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S68
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — And that’s clearly something we try to do. And, of course, in addition, we need absolutely to have computer facility at …
S69
Indias Roadmap to an AGI-Enabled Future — This discussion focused on India’s path to building an AGI-enabling ecosystem, examining the critical pillars of energy,…
S70
Shaping the Future AI Strategies for Jobs and Economic Development — Infrastructure and Energy Challenges: Significant discussion around the massive infrastructure requirements for AI deplo…
S71
The Global Power Shift India’s Rise in AI & Semiconductors — Thank you. Thank you. across CPUs, GPUs, SoCs, and AI engines that power cutting -edge compute systems worldwide. She br…
S72
Lift-off for Tech Interdependence? / DAVOS 2025 — Examples of running models with billions of parameters on phones, PCs, and cars.
S73
https://app.faicon.ai/ai-impact-summit-2026/heterogeneous-compute-for-democratizing-access-to-ai — So I’ll keep it brief. I think what I’m looking forward to with all the conversations here and in other parts of the wor…
S74
https://app.faicon.ai/ai-impact-summit-2026/from-human-potential-to-global-impact_-qualcomms-ai-for-all-workshop — All right. How am I doing on time? Maybe I have 10 minutes. So let me talk a little bit on data center. I don’t see the …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Durga
2 arguments · 197 words per minute · 538 words · 163 seconds
Argument 1
Hybrid AI deployment across devices, edge‑cloud and data‑center (Durga)
EXPLANATION
Durga emphasizes that AI workloads should be distributed across the entire network, from on‑device inference to edge‑cloud resources and large data‑center clusters. This hybrid approach aims to keep the user experience consistent while leveraging the most appropriate compute tier for each task.
EVIDENCE
He notes that inference must be possible directly on devices to remain invariant to network quality, and that today we can run large multimodal models on smartphones and sub-billion-parameter models on glasses, using data-centers and edge-cloud as needed [12-13]. Later he describes a holistic strategy that combines device inference, edge-cloud, on-prem servers, and data-centers, mentioning air-cooled carts and the ability to run 100-300 billion-parameter models at the edge [89-103].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The hybrid AI concept of distributing compute across devices, edge-cloud and data-centers is described in multiple sources, which highlight a distributed approach to reduce data-center load and improve flexibility [S4] [S14] [S1].
MAJOR DISCUSSION POINT
Hybrid AI deployment across the compute continuum
DISAGREED WITH
Arun Shetty
Argument 2
Air‑cooled server carts enabling large‑parameter models without liquid cooling (Durga)
EXPLANATION
Durga points out that air‑cooled server carts can support very large AI models, eliminating the need for more complex liquid‑cooling solutions in many scenarios. This reduces infrastructure complexity and cost while still delivering high‑performance inference.
EVIDENCE
He specifically mentions that air-cooled carts and servers can run up to 100-300 billion-parameter models without requiring liquid cooling, describing this as part of the edge-cloud offering [93-97].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Air-cooled carts and servers capable of running 100-300 billion-parameter models without liquid cooling are explicitly mentioned as feasible in the literature [S1] [S4].
MAJOR DISCUSSION POINT
Air‑cooled hardware for large AI models
AGREED WITH
Arun Shetty, Gokul, Rizvi
DISAGREED WITH
Gokul
Arun Shetty
3 arguments · 179 words per minute · 1219 words · 407 seconds
Argument 1
Fit‑for‑purpose edge inference to reduce data‑center load (Arun Shetty)
EXPLANATION
Arun stresses that edge inference should be tailored to specific use‑cases, reducing reliance on centralized data‑centers. By deploying compute where it is needed, latency improves and overall data‑center demand drops.
EVIDENCE
He lists the three impediments (power, compute, networking) and calls for fit-for-purpose solutions that enable more inference at the edge rather than in massive data-centers [43-44]. He further explains that edge inferencing is essential for the future, describing it as a key shift in how AI will be deployed [104-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shetty’s advocacy for fit-for-purpose edge inference to offload data-centers is documented as a key shift toward edge processing [S4].
MAJOR DISCUSSION POINT
Purpose‑built edge inference
AGREED WITH
Durga, Dr. Kamakoti, Rizvi
DISAGREED WITH
Durga
Argument 2
Power, compute and networking as the three main impediments to AI adoption (Arun Shetty)
EXPLANATION
Arun identifies power availability, compute capacity, and networking bandwidth as the primary barriers to scaling AI across enterprises. Overcoming these constraints is essential for broader AI uptake.
EVIDENCE
He explicitly enumerates power, compute, and networking as the three major challenges, noting that all panelists have highlighted them [43-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Three critical bottlenecks (power, compute and networking) are identified as the primary barriers to AI scaling in several analyses [S1] [S15].
MAJOR DISCUSSION POINT
Three core AI adoption barriers
AGREED WITH
Gokul, Durga, Rizvi
Argument 3
Security and safety guardrails: hallucinations, toxicity, shadow AI, and asset discovery (Arun Shetty)
EXPLANATION
Arun outlines the need for robust safety and security measures, including mitigation of model hallucinations, toxic outputs, and hidden (shadow) AI applications. He also stresses the importance of discovering and scanning AI assets to prevent vulnerabilities.
EVIDENCE
He discusses toxicity, hallucination, and the necessity of visibility across the stack, as well as the risk of malicious model manipulation [112-119]. He then describes the need to detect, inventory, and scan all AI assets for vulnerabilities, proposing guardrails and remediation steps [124-129].
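Arun's detect-and-inventory step amounts to diffing the AI endpoints actually observed on the network against an approved registry. A minimal sketch, with hypothetical endpoint names of our own:

```python
def find_shadow_ai(discovered: set[str], registered: set[str]) -> set[str]:
    """Return AI endpoints observed in the environment but missing from
    the approved inventory -- candidates for 'shadow AI' review and
    vulnerability scanning."""
    return discovered - registered

# Hypothetical scan results: one unregistered agent surfaces.
registered = {"chat-gateway", "doc-summarizer"}
discovered = {"chat-gateway", "doc-summarizer", "unvetted-agent"}
print(sorted(find_shadow_ai(discovered, registered)))  # ['unvetted-agent']
```

In practice the discovery set would come from network and asset scans, and each flagged endpoint would feed into the guardrail and remediation steps he describes.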
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safety concerns such as hallucinations, toxicity, and the need for guardrails are discussed alongside security risks of model manipulation [S4] [S17].
MAJOR DISCUSSION POINT
AI safety, security, and asset management
AGREED WITH
Dr. Kamakoti, Rizvi
DISAGREED WITH
Dr. Kamakoti, Rizvi
Dr. Kamakoti
3 arguments · 170 words per minute · 611 words · 215 seconds
Argument 1
Need for heterogeneous architecture to address security and inference latency (Dr. Kamakoti)
EXPLANATION
Dr. Kamakoti argues that varying inference workloads and response‑time requirements demand a heterogeneous compute architecture. Such diversity also supports evolving security needs and reduces latency across different deployment contexts.
EVIDENCE
He notes that different inferencing types and response times will be required, implying heterogeneous architectures [46-48]. He adds that security concerns and dynamic malware signatures necessitate new heterogeneous designs [53-55] and later mentions distinct security issues arising from heterogeneous edge-cloud-server setups [63-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Heterogeneous compute architectures are recommended to meet diverse inference latency and security requirements across edge, cloud and server tiers [S4] [S1].
MAJOR DISCUSSION POINT
Heterogeneous compute for security and latency
AGREED WITH
Durga, Arun Shetty, Rizvi
DISAGREED WITH
Arun Shetty, Rizvi
Argument 2
Trust as a non‑reflexive, non‑symmetric, context‑dependent relation; need for sovereign, tamper‑proof models (Dr. Kamakoti)
EXPLANATION
Dr. Kamakoti defines trust as lacking reflexivity, symmetry, and transitivity, emphasizing its contextual and temporal nature. He links this to the necessity for sovereign, tamper‑proof AI models that can be trusted in critical applications.
EVIDENCE
He provides a detailed breakdown of trust properties (non-reflexive, non-symmetric, non-transitive, context-dependent, and temporal) and argues for a mathematical definition of trusted systems [55-62]. He also references the need for sovereign models to prevent adversarial poisoning and malicious behavior [52-54].
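Kamakoti's relational properties can be checked mechanically on any finite trust relation. The three-party example relation below is ours, not from the talk; it shows a trust relation that is indeed none of reflexive, symmetric, or transitive:

```python
def is_reflexive(rel, domain):
    # Every party would have to trust itself.
    return all((a, a) in rel for a in domain)

def is_symmetric(rel):
    # A trusting B would have to imply B trusting A.
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel):
    # A trusting B and B trusting C would have to imply A trusting C.
    return all((a, c) in rel
               for (a, b) in rel for (b2, c) in rel if b == b2)

# Toy relation: A trusts B and B trusts C -- but not vice versa, with
# no self-trust recorded, and A does not automatically trust C.
trust = {("A", "B"), ("B", "C")}
domain = {"A", "B", "C"}
print(is_reflexive(trust, domain))  # False
print(is_symmetric(trust))          # False
print(is_transitive(trust))         # False
```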
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The non-reflexive, non-symmetric, context-dependent nature of trust and the call for sovereign, tamper-proof AI models are outlined in the trust framework discussion [S1].
MAJOR DISCUSSION POINT
Nature of trust and sovereign AI models
AGREED WITH
Arun Shetty, Rizvi
Argument 3
Control of training data, risks of poisoning and adversarial attacks in education and other domains (Dr. Kamakoti)
EXPLANATION
Dr. Kamakoti highlights the importance of controlling training data to prevent poisoning attacks, especially in sensitive sectors like education. He warns that compromised data can lead to harmful model outputs and calls for safeguards.
EVIDENCE
He raises the question of who should know the data used to train models, linking it to cybersecurity and the risk of adversarial AI, and cites the need for sovereign models to avoid poisoning [49-53]. He then discusses specific concerns for education, noting that models could be fed inappropriate content and generate harmful responses [66-68].
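One basic safeguard in the spirit of Kamakoti's data-control argument is a checksum manifest over training files, so that post-hoc tampering is detectable before retraining. The file names below are hypothetical, and this sketches only integrity checking, not poisoning detection in general:

```python
import hashlib

def manifest(files: dict[str, bytes]) -> dict[str, str]:
    """Record a SHA-256 digest for each training file."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in files.items()}

def tampered(files: dict[str, bytes], recorded: dict[str, str]) -> list[str]:
    """Return files whose current digest no longer matches the manifest."""
    current = manifest(files)
    return [name for name, digest in recorded.items()
            if current.get(name) != digest]

# Hypothetical corpus: record digests, then detect a modified file.
corpus = {"lessons.txt": b"safe content"}
recorded = manifest(corpus)
corpus["lessons.txt"] = b"poisoned content"
print(tampered(corpus, recorded))  # ['lessons.txt']
```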
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data-poisoning and adversarial attack risks are highlighted as major threats to model integrity, especially in sensitive sectors like education [S19] [S20].
MAJOR DISCUSSION POINT
Data integrity and poisoning risks
Gokul
2 arguments · 186 words per minute · 572 words · 183 seconds
Argument 1
Edge constraints (memory, I/O, thermal, power) and strategies for low‑power deployment (Gokul)
EXPLANATION
Gokul outlines the primary technical constraints at the edge—limited memory, I/O bandwidth, thermal dissipation, and power availability. He proposes low‑power strategies and efficient hardware to enable effective edge inferencing.
EVIDENCE
He enumerates memory, connectivity, I/O, thermal, and power as the main bottlenecks for edge inferencing and stresses the need for low-power solutions that still deliver meaningful AI for students and other users [69-73].
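Gokul's memory constraint can be made concrete with a back-of-envelope sizing check. The quantization widths, overhead factor, and device RAM figures below are illustrative assumptions of ours, not numbers from the panel:

```python
def model_memory_gb(params_billions: float, bytes_per_weight: float,
                    overhead_factor: float = 1.2) -> float:
    """Rough weight-memory estimate: parameters times bytes per weight,
    plus an illustrative ~20% allowance for activations and runtime
    overhead."""
    return params_billions * bytes_per_weight * overhead_factor

def fits_on_device(params_billions: float, bytes_per_weight: float,
                   device_ram_gb: float) -> bool:
    """Check whether the estimated footprint fits in device memory."""
    return model_memory_gb(params_billions, bytes_per_weight) <= device_ram_gb

# A 0.5B model at 4-bit quantization (0.5 bytes/weight) needs ~0.3 GB,
# glasses-class; a 7B model at 4-bit needs ~4.2 GB, smartphone-class.
print(round(model_memory_gb(0.5, 0.5), 2))  # 0.3
print(fits_on_device(7, 0.5, 12))           # True
```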
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Edge hardware limitations (memory, I/O, thermal dissipation and power) are identified as key constraints driving low-power design strategies [S21] [S4].
MAJOR DISCUSSION POINT
Edge hardware constraints and low‑power solutions
Argument 2
Power‑usage‑efficiency (PUE), air vs. liquid cooling, and hybrid energy solutions (Gokul)
EXPLANATION
Gokul discusses the importance of optimizing power usage effectiveness (PUE) in data‑centers, comparing air‑cooled and liquid‑cooled designs, and advocating for hybrid energy sources to meet India’s growing demand. These measures aim to reduce overall energy consumption and improve sustainability.
EVIDENCE
He explains that PUE should approach 1, describes the split of power between cooling and compute, contrasts air-cooled racks (up to ~25 kW) with liquid cooling for higher loads, and calls for hybrid renewable/off-grid energy solutions for Indian infrastructure [73-77].
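Gokul's PUE arithmetic and cooling threshold can be sketched directly. The function names are ours; the ~25 kW air-cooling ceiling is the figure he cites, and an ideal facility has a PUE approaching 1:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT
    equipment power. Values near 1.0 mean almost no power is spent on
    cooling and distribution overhead."""
    return total_facility_kw / it_equipment_kw

def cooling_strategy(rack_load_kw: float,
                     air_cooling_limit_kw: float = 25.0) -> str:
    """Pick a cooling approach from rack load: air cooling up to the
    ~25 kW ceiling Gokul mentions, liquid cooling for denser racks."""
    return "air" if rack_load_kw <= air_cooling_limit_kw else "liquid"

# A facility drawing 1.5 MW to power 1.0 MW of IT load has PUE 1.5:
print(pue(1500, 1000))       # 1.5
print(cooling_strategy(20))  # air
print(cooling_strategy(90))  # liquid
```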
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Optimising PUE, comparing air-cooled versus liquid-cooled racks, and adopting hybrid renewable energy sources are discussed as ways to improve AI infrastructure sustainability [S15] [S22].
MAJOR DISCUSSION POINT
Efficient cooling and hybrid energy for AI infrastructure
AGREED WITH
Arun Shetty, Durga, Rizvi
DISAGREED WITH
Durga
Rizvi
4 arguments · 183 words per minute · 839 words · 275 seconds
Argument 1
Environmental sustainability and finite energy considerations (Rizvi)
EXPLANATION
Rizvi stresses that AI development must account for environmental impact, noting that energy resources are finite and must be managed efficiently. He links sustainability to the broader challenge of meeting AI’s energy demands.
EVIDENCE
He remarks on the strong, often unnoticed environmental aspect of AI, emphasizing that energy is finite and must be efficiently managed in the context of AI inference and model development [14-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The finite nature of energy resources and the need for sustainable AI practices are emphasized in analyses of AI’s power consumption and environmental impact [S15] [S22].
MAJOR DISCUSSION POINT
Energy constraints and sustainability in AI
AGREED WITH
Arun Shetty, Gokul, Durga
Argument 2
Policy‑driven trust frameworks and the importance of securing AI deployments (Rizvi)
EXPLANATION
Rizvi calls for policy mechanisms that embed trust and security into AI deployments, highlighting the role of regulators and standards in guiding safe AI use. He underscores the need for coordinated policy to address emerging AI risks.
EVIDENCE
He acknowledges the panelists’ roles in shaping policy and asks for practical deployment models, indicating that policy guidance is essential for secure AI implementation [45], and follows up with a request for concrete examples of heterogeneous compute in practice [64-66].
MAJOR DISCUSSION POINT
Policy for trustworthy AI
AGREED WITH
Honorable Minister
DISAGREED WITH
Dr. Kamakoti, Arun Shetty
Argument 3
Rapid growth of Indian Gen‑AI startups leveraging large language models and the push for sovereign LLMs (Rizvi)
EXPLANATION
Rizvi highlights India’s vibrant Gen‑AI ecosystem, noting the presence of around 300 startups building on large language models and the national effort to develop sovereign LLMs. This momentum positions India as a leader in AI application development.
EVIDENCE
He cites the existence of nearly 300 Gen-AI startups in India building on large language models and mentions initiatives like Sarvam that aim to create sovereign LLMs, reflecting India’s leadership at the application layer [15-19].
MAJOR DISCUSSION POINT
India’s Gen‑AI startup surge and sovereign LLMs
Argument 4
Importance of policy guidance, regulatory oversight and alignment with national AI strategy (Rizvi)
EXPLANATION
Rizvi stresses that effective AI adoption requires clear policy direction, regulatory oversight, and alignment with a national AI strategy. He frames short‑term (two‑year) planning as essential for achieving enterprise outcomes.
EVIDENCE
He frames a question to the panel about enterprise outcomes over the next two to four years, emphasizing the need for policy-driven planning and alignment with national AI goals [83-88].
MAJOR DISCUSSION POINT
Policy and strategic planning for AI
Honorable Minister
1 argument · 141 words per minute · 166 words · 70 seconds
Argument 1
Government’s role in providing power, water, land and enabling infrastructure for AI while ensuring welfare and happiness for all (Honorable Minister)
EXPLANATION
The Minister outlines the government’s responsibility to supply essential resources—power, water, and land—to support AI infrastructure. He frames this provision as part of a broader mission to achieve welfare and happiness for the entire population.
EVIDENCE
He explicitly mentions the need to provide power, electricity, water, and land for AI initiatives, and ties these provisions to the overarching goal of welfare and happiness for all citizens [139-145].
MAJOR DISCUSSION POINT
State support for AI infrastructure and societal welfare
AGREED WITH
Rizvi
Agreements
Agreement Points
Distributed/hybrid AI compute across devices, edge‑cloud and data‑centers
Speakers: Durga, Arun Shetty, Dr. Kamakoti, Rizvi
Hybrid AI deployment across devices, edge‑cloud and data‑center (Durga) Fit‑for‑purpose edge inference to reduce data‑center load (Arun Shetty) Need for heterogeneous architecture to address security and inference latency (Dr. Kamakoti)
All four speakers stress that AI workloads must be spread over a continuum – from on-device inference to edge-cloud resources and large data-center clusters – and that heterogeneous, fit-for-purpose architectures are required to keep the user experience invariant to network quality. [89-103][43-44][104-108][46-48][53-55]
Power, compute and networking as core bottlenecks; need for energy‑efficient hardware and cooling
Speakers: Arun Shetty, Gokul, Durga, Rizvi
Power, compute and networking as the three main impediments to AI adoption (Arun Shetty) Power‑usage‑efficiency (PUE), air vs. liquid cooling, and hybrid energy solutions (Gokul) Air‑cooled server carts enabling large‑parameter models without liquid cooling (Durga) Environmental sustainability and finite energy considerations (Rizvi)
The panel concurs that limited power, compute capacity and network bandwidth are the primary constraints on AI scaling, and that solutions such as air-cooled server carts, low-PUE designs and hybrid energy sources are essential to address these constraints while keeping environmental impact low. [43-44][73-77][93-97][14-15]
POLICY CONTEXT (KNOWLEDGE BASE)
AI-Powering sessions highlight the unprecedented electricity and cooling demands of large models, calling for energy-efficient hardware and infrastructure upgrades [S40]. The Green AI discourse stresses sustainability and the need to curb power consumption in data centres [S42]. Infrastructure-limitations framing describes power and compute as the “oxygen for AI” [S47].
Security, safety and trust – need for sovereign, tamper‑proof models and guardrails
Speakers: Arun Shetty, Dr. Kamakoti, Rizvi
Security and safety guardrails: hallucinations, toxicity, shadow AI, and asset discovery (Arun Shetty) Trust as a non‑reflexive, non‑symmetric, context‑dependent relation; need for sovereign, tamper‑proof models (Dr. Kamakoti) Policy‑driven trust frameworks and the importance of securing AI deployments (Rizvi)
All three speakers underline the critical importance of robust security and safety mechanisms, a clear definition of trust, and the development of sovereign, tamper-proof AI models to protect against hallucinations, toxicity, adversarial attacks and shadow AI deployments. [112-119][124-129][55-62][52-54][45][64-66]
POLICY CONTEXT (KNOWLEDGE BASE)
Sovereign AI strategies advise focusing limited sovereign resources on critical control points to ensure trustworthy, tamper-proof models [S36]. The WEF Presidio AI Framework provides a governance structure for security and trust in generative AI deployments [S37]. Practical guardrails such as slur filters and safety measures are highlighted as essential for trusted AI [S38].
Policy and government support are essential for AI infrastructure and societal welfare
Speakers: Rizvi, Honorable Minister
Policy‑driven trust frameworks and the importance of securing AI deployments (Rizvi) Government’s role in providing power, water, land and enabling infrastructure for AI while ensuring welfare and happiness for all (Honorable Minister)
Both the moderator and the Minister agree that clear policy direction and state provision of essential resources (power, water, land) are indispensable for building AI infrastructure that serves broader welfare and happiness goals. [45][64-66][139-145]
POLICY CONTEXT (KNOWLEDGE BASE)
UN Security Council discussions stress the need for coordinated policy to balance innovation with ethical safeguards [S44]. Calls for comprehensive policy responses-including competition, tax and labour reforms-underscore government’s role in AI’s societal impact [S45]. The World Economic Forum panel emphasizes legal reform and infrastructure investment as key policy levers for AI entrepreneurship [S46].
Similar Viewpoints
Both speakers advocate that air‑cooled hardware can reliably support very large AI models, reducing the need for complex liquid‑cooling systems and improving overall energy efficiency. [93-97][73-77]
Speakers: Durga, Gokul
Air‑cooled server carts enabling large‑parameter models without liquid cooling (Durga) Power‑usage‑efficiency (PUE), air vs. liquid cooling, and hybrid energy solutions (Gokul)
Both highlight power (and related resource) constraints as a primary barrier and propose low‑power, fit‑for‑purpose edge solutions to mitigate these limits. [43-44][69-73]
Speakers: Arun Shetty, Gokul
Power, compute and networking as the three main impediments to AI adoption (Arun Shetty) Edge constraints (memory, I/O, thermal, power) and strategies for low‑power deployment (Gokul)
Both stress that establishing trustworthy, sovereign AI models through policy and technical safeguards is essential for secure AI deployment. [55-62][52-54][45][64-66]
Speakers: Dr. Kamakoti, Rizvi
Trust as a non‑reflexive, non‑symmetric, context‑dependent relation; need for sovereign, tamper‑proof models (Dr. Kamakoti) Policy‑driven trust frameworks and the importance of securing AI deployments (Rizvi)
Unexpected Consensus
Air‑cooled hardware can run 100‑300 billion‑parameter models without liquid cooling
Speakers: Durga, Gokul
Air‑cooled server carts enabling large‑parameter models without liquid cooling (Durga) Power‑usage‑efficiency (PUE), air vs. liquid cooling, and hybrid energy solutions (Gokul)
It is surprising that both a senior AI architect and a hardware-focused speaker agree that simple air-cooled racks are sufficient for extremely large models, a claim that traditionally would be associated with liquid-cooling solutions. [93-97][73-77]
POLICY CONTEXT (KNOWLEDGE BASE)
Reports on heterogeneous compute note that air-cooled carts can successfully host 100-300 B-parameter models, eliminating the immediate need for liquid cooling [S34][S35].
Overall Assessment

The panel shows strong convergence on four major themes: (1) a distributed, heterogeneous AI compute continuum; (2) power, cooling and networking as the dominant technical constraints, with air‑cooled and low‑PUE solutions favored; (3) the necessity of robust security, safety and trust mechanisms, including sovereign models; and (4) the pivotal role of policy and government resources in enabling AI infrastructure and ensuring societal welfare.

High consensus – the speakers from technical, policy and ministerial backgrounds repeatedly echo the same priorities, indicating a unified vision that can drive coordinated action across industry, academia and government.

Differences
Different Viewpoints
Use of air‑cooled server carts versus need for liquid‑cooling for large AI models
Speakers: Durga, Gokul
Air‑cooled server carts enabling large‑parameter models without liquid cooling (Durga) Power‑usage‑efficiency (PUE), air vs. liquid cooling, and hybrid energy solutions (Gokul)
Durga claims that air-cooled carts and servers can run 100-300 billion-parameter models without requiring liquid cooling, suggesting that liquid cooling is not always necessary [93-97]. Gokul counters that air-cooled racks are feasible only up to roughly 25 kW and that for higher loads (approaching 100 kW) liquid cooling becomes essential, implying a need for liquid cooling in many large-scale AI deployments [75-76].
POLICY CONTEXT (KNOWLEDGE BASE)
While some experts demonstrate that air-cooled carts suffice for 100-300 B-parameter models, other guidance reserves liquid cooling for larger data-center workloads, reflecting a split view on cooling requirements [S34][S35].
Degree to which inference should be performed on‑device versus at the edge or cloud
Speakers: Durga, Arun Shetty
Hybrid AI deployment across devices, edge‑cloud and data‑center (Durga) Fit‑for‑purpose edge inference to reduce data‑center load (Arun Shetty)
Durga emphasizes that inference must be possible directly on devices to keep the AI user experience invariant to network quality, advocating for on-device inference whenever possible while still using edge-cloud and data-centers as needed [12-13]. Arun Shetty stresses fit-for-purpose edge inference, focusing on moving inference to the edge to reduce data-center reliance, but does not stress on-device execution as a primary goal, instead highlighting tailored edge solutions for specific use-cases [43-44][104-108].
POLICY CONTEXT (KNOWLEDGE BASE)
The Indian 6G edge blueprint proposes a tiered inference strategy, allocating simple tasks to on-device, intermediate to edge, and heavy workloads to cloud [S43]. Practitioner queries from AI-for-Bharat highlight practical decision-making around edge versus cloud inference [S49]. Cisco’s trusted AI keynote stresses low-latency on-device inference for immersive applications [S50].
Approach to securing AI models – trust framework and sovereign models versus practical guardrails and asset discovery
Speakers: Dr. Kamakoti, Arun Shetty, Rizvi
Need for heterogeneous architecture to address security and inference latency (Dr. Kamakoti) Security and safety guardrails: hallucinations, toxicity, shadow AI, and asset discovery (Arun Shetty) Policy‑driven trust frameworks and the importance of securing AI deployments (Rizvi)
Dr. Kamakoti argues for a formal trust definition (non-reflexive, non-symmetric, context-dependent) and the development of sovereign, tamper-proof models to ensure security [55-62]. Arun Shetty focuses on operational safeguards such as detecting shadow AI, scanning assets, and implementing guardrails against hallucinations and toxicity [112-129]. Rizvi calls for policy mechanisms that embed trust and security into AI deployments, seeking regulatory guidance and practical deployment models [45][64-66]. While all aim for secure AI, they differ on whether the priority is a theoretical trust framework, practical security tooling, or policy-level interventions.
POLICY CONTEXT (KNOWLEDGE BASE)
Sovereign AI recommendations focus on strategic control points for model security [S36], while the Presidio AI Framework offers a broader trust architecture [S37]. Industry practice emphasizes concrete guardrails such as content filters and asset discovery tools [S38].
Unexpected Differences
Contrasting statements on the necessity of liquid cooling for large AI models
Speakers: Durga, Gokul
Air‑cooled server carts enabling large‑parameter models without liquid cooling (Durga) Power‑usage‑efficiency (PUE), air vs. liquid cooling, and hybrid energy solutions (Gokul)
Durga asserts that air‑cooled carts can support very large models (100‑300 billion parameters) without liquid cooling, which is surprising given Gokul’s technical assessment that liquid cooling becomes mandatory for loads beyond ~25 kW, a range that would encompass many large AI workloads. This divergence was not anticipated given the shared focus on efficient AI infrastructure.
POLICY CONTEXT (KNOWLEDGE BASE)
The same sources present divergent positions: some claim air-cooled solutions are adequate for up to 300 B-parameter models, whereas others argue liquid cooling becomes essential for larger-scale training runs, illustrating ongoing debate in the community [S34][S35].
Overall Assessment

The panel largely converges on the need for distributed, heterogeneous AI compute and on addressing power, security, and policy challenges. The most notable disagreements revolve around hardware cooling strategies (air vs. liquid) and the balance between on‑device versus edge inference, as well as differing emphases on how to achieve AI security (trust frameworks, guardrails, or policy).

Moderate – while there is broad consensus on overarching goals (heterogeneous compute, sustainability, security), the panel exhibits moderate disagreement on technical implementation details (cooling, inference placement) and on the preferred pathway to secure AI (theoretical trust models vs. practical safeguards vs. policy). These differences suggest that coordinated efforts will need to reconcile hardware design choices and align security strategies across technical and regulatory domains to advance India’s AI agenda.

Partial Agreements
All speakers agree that AI workloads must be distributed across a compute continuum (devices, edge, data‑center) to address constraints such as power, compute, networking, and latency. However, Durga stresses a holistic hybrid AI model spanning all tiers, Shetty emphasizes fit‑for‑purpose edge solutions, Gokul highlights low‑power edge hardware constraints, and Kamakoti focuses on heterogeneous architectures for security and latency, leading to differing emphases on where and how the distribution should be optimized.
Speakers: Durga, Arun Shetty, Gokul, Dr. Kamakoti
Hybrid AI deployment across devices, edge‑cloud and data‑center (Durga) Fit‑for‑purpose edge inference to reduce data‑center load (Arun Shetty) Edge constraints (memory, I/O, thermal, power) and strategies for low‑power deployment (Gokul) Need for heterogeneous architecture to address security and inference latency (Dr. Kamakoti)
Both emphasize the critical importance of energy and infrastructure for AI development. Rizvi frames it as an environmental sustainability issue, urging efficient energy management [14-15], while the Minister positions power, water, and land provision as a governmental responsibility to support AI infrastructure and societal welfare [139-145]. They share the goal of ensuring sufficient resources but differ in framing: environmental versus policy/welfare perspective.
Speakers: Rizvi, Honorable Minister
Environmental sustainability and finite energy considerations (Rizvi) Government’s role in providing power, water, land and enabling infrastructure for AI while ensuring welfare and happiness for all (Honorable Minister)
Takeaways
Key takeaways
AI workloads must be distributed across heterogeneous compute layers – devices, edge‑cloud, and data‑centers – to achieve latency‑insensitive, resilient user experiences.
Infrastructure constraints (power availability, compute capacity, networking bandwidth, and cooling) are the primary impediments to large‑scale AI adoption in India.
Energy efficiency is critical; air‑cooled server carts can run large models without liquid cooling, and hybrid energy (a mix of renewable and stable sources) is needed for sustainability.
Security, safety, and trust are essential; models must be protected against hallucinations, toxicity, adversarial poisoning, and shadow AI, and sovereign, tamper‑proof models are required.
High‑quality, accessible datasets are a “fuel” gap; leveraging enterprise and government data can enable the creation of domain‑specific “machine‑GPTs”.
Policy and governance must provide the necessary power, water, land, and regulatory frameworks while keeping welfare and inclusive benefit as the overarching goal.
Resolutions and action items
Cisco will collaborate with ecosystem partners to build a comprehensive, secure AI factory that supports both data‑center and edge deployments.
Stakeholders agreed to pursue fit‑for‑purpose, heterogeneous architectures that balance edge inference with centralized compute.
A push for developing sovereign large language models and trusted model verification frameworks was endorsed.
Participants highlighted the need to implement asset discovery and scanning tools to mitigate shadow AI and enforce security guardrails.
Exploration of hybrid cooling (air‑cooled carts where feasible, liquid cooling for higher‑density racks) and hybrid energy solutions was proposed.
Unresolved issues
Specific mechanisms and timelines for scaling heterogeneous compute infrastructure nationwide remain undefined.
How to standardize and certify trust frameworks for AI models across different sectors is still an open question.
The exact policy instruments, funding models, and regulatory processes required to support power, water, and land provisioning were not detailed.
Methods for systematically closing the data gap (especially governance of government/enterprise data sharing) were not resolved.
Concrete strategies for detecting and mitigating model poisoning in real‑time edge deployments were not fully addressed.
Suggested compromises
Utilize air‑cooled server carts for many edge workloads to avoid the complexity and cost of liquid cooling, reserving liquid cooling for high‑density data‑center racks.
Adopt hybrid energy solutions (combining renewable sources with stable backup) to meet power demands while ensuring reliability.
Apply a fit‑for‑purpose approach rather than a one‑size‑fits‑all architecture, allowing different verticals to choose the optimal mix of edge, edge‑cloud, and data‑center resources.
Balance the need for sovereign model development with leveraging existing open‑source LLMs, using government data to fine‑tune while maintaining security.
Thought Provoking Comments
Do you want your AI user experience to be invariant to the quality of the communications that you have at that point in time? … That means you must have the ability to run inference directly on devices.
Durga reframes AI deployment from a cloud‑centric view to a user‑centric one, emphasizing that consistent experience requires on‑device inference regardless of network conditions. This introduces the concept of heterogeneous compute as a resilience strategy.
This comment set the agenda for the whole panel. It prompted Arun Shetty to enumerate infrastructure constraints, Dr. Kamakoti to discuss sovereign models and security, and Gokul to focus on edge power and cooling. The discussion shifted from abstract AI potential to concrete deployment architectures.
Speaker: Durga
Trust is not reflexive, not symmetric, not transitive… Trust is context dependent and temporal. We need to build a mathematics of trusted AI.
By applying discrete‑math properties to the notion of trust, Kamakoti introduced a rigorous way to think about AI reliability and security, moving the conversation from operational challenges to foundational theoretical concerns.
His framing deepened the security discussion. Arun Shetty later referenced “shadow AI” and the need for guardrails, while the Minister highlighted policy implications. It turned the dialogue toward governance and the need for formal trust models.
Speaker: Dr. Kamakoti
The three impediments for AI adoption are: power, compute, and networking… plus security/safety and the data gap.
Shetty distilled the myriad challenges into three concrete pillars, providing a clear structure for the panel to address. This categorisation helped align the subsequent contributions of each speaker.
His list acted as a roadmap. After he spoke, Dr. Kamakoti expanded on security, Gokul elaborated on power and cooling, and Rizvi steered the conversation toward policy and enterprise outcomes. The tone shifted from exploratory to problem‑solving.
Speaker: Arun Shetty
India is challenged by three physical things we cannot run away from: land, water and power… we must look at hybrid energy solutions, edge compute, and leap‑frogging to reach underserved areas.
Gokul connected technical constraints (PUE, cooling) with India’s unique resource limitations and the strategic opportunity of leap‑frogging, linking infrastructure to socioeconomic impact.
This comment broadened the scope from pure technology to national‑scale deployment strategy. It prompted the Minister to speak about power, water, and land policy, and reinforced Durga’s hybrid AI vision.
Speaker: Gokul
Energy is finite and the environmental aspect of AI is often unnoticed… we need sovereign large language models and efficient inference to manage energy requirements.
Rizvi highlighted the sustainability dimension of AI, tying together energy consumption, sovereign model development, and the Indian startup ecosystem, thereby adding an ecological lens to the technical debate.
His point led to further emphasis on edge inference (Durga, Gokul) as a way to reduce data‑center energy use, and reinforced the security‑trust discussion (Kamakoti) about why locally trained sovereign models matter.
Speaker: Rizvi
Overall Assessment

The discussion was shaped by a handful of pivotal remarks that moved the conversation from a high‑level vision of AI to a grounded, multi‑dimensional roadmap for India. Durga’s call for invariant, on‑device AI introduced the need for heterogeneous compute, which Arun Shetty crystallised into three core impediments. Dr. Kamakoti’s mathematical treatment of trust reframed security as a foundational, not just technical, issue, while Gokul’s focus on power, water, and land anchored the debate in India’s physical realities. Rizvi’s reminder of energy constraints and sovereign models added a sustainability and policy layer. Together, these comments redirected the panel toward concrete infrastructure, governance, and environmental strategies, culminating in the Minister’s policy‑oriented closing.

Follow-up Questions
What are the bottlenecks Cisco sees in terms of compute availability, connectivity, and related challenges for enterprise AI adoption at scale?
Rizvi explicitly asked Arun Shetty to elaborate on Cisco’s perspective on bottlenecks, indicating a need for deeper insight into infrastructure constraints.
Speaker: Rizvi
How important is heterogeneous compute for national resilience and the security of critical infrastructure and public systems?
Rizvi directed this question to Dr. Kamakoti, seeking clarification on the role of distributed compute in safeguarding essential services.
Speaker: Rizvi
What practical deployment models or examples demonstrate the move toward heterogeneous compute, and what steps are needed to achieve broader adoption?
Rizvi asked Gokul for concrete cases and actionable measures, highlighting a gap in real‑world implementation knowledge.
Speaker: Rizvi
What enterprise outcomes should be targeted over the next two to four years regarding access to compute, infrastructure capacity, scale, cost efficiency, and energy efficiency?
Rizvi prompted Durga and Arun Shetty to outline short‑term goals, indicating a need for a strategic roadmap.
Speaker: Rizvi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Founders Adda: Raw Conversations with India’s Top AI Pioneers


Session at a glance: Summary, keypoints, and speakers overview

Summary

The summit was organized as a product-only showcase where founders presented their technologies without discussing business or funding, as emphasized by the moderator [2-8]. Archana asked each presenter to tailor their language for both AI-savvy and non-technical audiences, allowing optional simplification [3-4]. The session began with Ravindra Kumar’s introduction of his company Technodate AI [9][10].


Ravindra described Technodate AI’s mission to “automate automation” by providing a DIY, agentic-AI platform that helps users conceptualize, deploy, and troubleshoot industrial robotics solutions [27-32][33-34]. He noted the difficulty of building a foundational model in India due to funding constraints; the team therefore engaged customers first and only later recognized the need for such a model, while already deploying with Fortune-500 firms and the Indian Air Force [35-41][45-48]. A live demo illustrated how the system generates complete architectures, robot programs, and adaptive changes across equipment using background agents [52-63].


Vaibhavath presented Quonsys AI’s end-to-end voice infrastructure that eliminates human agents in call-center workflows, enabling AI agents to answer, record interest, and schedule actions such as site visits [68-78][84-95]. The platform operates on a per-minute subscription model and relies on a proprietary data engine that the team built after finding public datasets insufficient for scaling [97-99][109-115]. He highlighted deployments with large enterprises like Paytm and described handling tens of thousands of concurrent calls, positioning the solution as a cost-effective alternative to traditional call-center staffing [116-124].


Pradyum explained Papri Labs’ visual-data system, which uses dashcams and vehicle-mounted CCTV cameras to continuously update maps, providing real-time information for use cases such as dynamic billboard pricing, autonomous-vehicle safety, and public-transport optimization [121-138][139-148]. The company processes petabytes of video, blurs faces and license plates, and stores only front-camera data on bare-metal servers located in Europe to remain compliant with data-privacy regulations [199-207][208-218]. Pricing is offered on a per-tile basis (e.g., 1.5 lakh rupees for a 25 km² area per day), targeting B2B customers rather than consumers [267-271].


Meenal introduced Imagix AI, an AI-driven precision imaging and treatment-planning tool for cancer that is HIPAA-compliant, ISO-certified, and has four pending patents, achieving 92-99% accuracy after training on a five-million-image dataset that includes 30% Indian data [288-295][300-307][330-335][336-344]. The solution automates organ contouring, reducing manual planning time from up to 90 minutes to as little as five minutes, and has already been deployed in 14 Indian states, processing over a million scans and detecting thousands of TB and cancer cases [345-347][350-357].


Vivek then described Indus Labs AI’s voice operating system, a low-latency, multilingual DIY platform for building Indian-language voice agents that offers cost reductions of up to 70%, ensures data residency on Indian servers, and has partnerships for telecom integration and global white-labeling [360-368][369-382][389-395][416-424][428-433].


Together, these presentations demonstrated how Indian AI startups are tackling sector-specific challenges, from manufacturing and customer support to mapping, healthcare, and voice technology, while emphasizing practical deployment, scalability, and compliance with local data regulations [27-32][68-78][121-138][288-295][360-368].


Keypoints

Major discussion points


Product-only presentation format to foster peer learning.


Archana explains that the summit is “only about product…no business, no pitching, no money” and asks founders to balance jargon with simplicity so non-AI audiences can follow [2-8].


Technodate AI’s “automation of automation” using agentic AI for robotics.


Ravindra describes the need for a foundational model, the three-module workflow (conceptualize, deploy, troubleshoot), collaborations with IITs, the Indian Air Force and Fortune-500 firms, and a demo of AI-driven robot programming and CNC troubleshooting [20-53].


Voice-centric AI platforms for call-center and enterprise automation.


Quonsys AI presents an end-to-end, no-human-in-the-loop voice infrastructure that can answer inbound leads, book appointments and charge per-minute [67-109].


Indus Labs AI outlines a DIY voice-OS with ultra-low latency, Indian-dialect mastery, cost reductions of up to 70%, and a no-code workflow for building voice agents [355-467].


Papri Labs’ real-time visual mapping platform and data-privacy handling.


Pradyum explains a dash-cam/CCTV-based system that continuously updates maps, offers B2B tile-based pricing, and addresses DPDP compliance by blurring faces/number plates and using bare-metal European servers [118-271][207-236].


EasyOPI Solutions’ AI-driven cancer imaging and treatment-planning tool.


Meenal details a HIPAA-compliant, ISO-certified platform that automates organ contouring, reduces planning time from 90 minutes to 5-15 minutes, and has been deployed across multiple Indian states with 92-99% accuracy [274-345].


Overall purpose / goal


The summit’s goal is to create a knowledge-sharing forum where early-stage founders showcase the technical essence of their AI products, without sales pitches, to help peers learn about emerging solutions, challenges (e.g., foundational models, data privacy), and real-world deployment pathways.


Overall tone and its evolution


– The discussion begins with a welcoming, instructional tone as Archana sets ground rules.


– It shifts to a technical, enthusiastic tone during each founder’s deep-dive (e.g., detailed demos from Technodate, Quonsys, Papri Labs, EasyOPI, Indus Labs).


– The Q&A introduces a defensive yet transparent tone around concerns such as data compliance and scaling (e.g., DPDP compliance [207-236], foundational-model funding [34-41]).


– Throughout, the tone remains collaborative and supportive, with repeated encouragement to applaud presenters and continue conversations after the session ends [117][468].


Speakers


Archana Jahargirdar


Area of Expertise: Conference moderation, startup ecosystem facilitation


Role / Title: Moderator / Host, Rukam Capital [S6][S7]


Ravindra Kumar


Area of Expertise: Industrial automation, agentic AI for robotics


Role / Title: Founder, Technodate AI


Vaibhavath Shukla


Area of Expertise: Voice infrastructure, AI-driven call-center automation


Role / Title: Founder & CEO, Quonsys AI [S8]


Pradyum Gupta


Area of Expertise: Real-time visual mapping, large-scale video analytics


Role / Title: Founder, Papri Labs [S4]


Meenal Gupta


Area of Expertise: AI-driven precision imaging & treatment planning for oncology


Role / Title: Founder, EasyOPI Solutions [S2]


Vivek Gupta


Area of Expertise: Voice AI platform, multilingual speech-to-text & text-to-speech infrastructure


Role / Title: Founder & CEO, Indus Labs AI [S16]


Audience


Area of Expertise:


Role / Title: Questioner / Attendee


Additional speakers:


Weber – mentioned as the next presenter (no further details).


Karan – thanked by Vaibhavath Shukla for the opportunity (no further details).


Ravindra Kumar (already listed above).


Ravindra (duplicate mention).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate).


Ravindra (duplicate


Full session reportComprehensive analysis and detailed insights

Opening & session rules – Moderator Archana Jahargirdar opened the summit with a strict “product-only, no-pitch” directive, asking presenters to balance technical depth with accessible language for a non-AI audience and to focus on the core engineering of their offerings [1-8].


Technodate AI – Ravindra Kumar positioned Technodate AI as an “automation of automation” platform built around a three-module workflow: (i) conceptualise a robotics or automation solution, (ii) deploy and commission it, including robot programming, and (iii) troubleshoot failures [30-32]. The system uses agentic AI to act as a virtual automation expert, automatically generating system architectures, code and adaptive reconfigurations when equipment changes [52-63]. Kumar explained that building a foundational model in India is financially challenging, so the team first engaged customers with existing tools, later recognising the need for a bespoke model while already deploying with Fortune 500 firms and the Indian Air Force [33-40]. He reiterated that even a super-intelligent model would still require an application layer to solve real-world industrial problems [55-60]. Collaborations with academic expert Dr Sumit Chopra and defence organisations were highlighted as validation pathways [45-48].


Quonsys AI – Vaibhavath Shukla described Quonsys AI as a voice-first, end-to-end call-centre automation stack that removes the human-in-the-loop: AI agents capture inbound leads, record interest and automatically schedule actions such as site visits [68-78]. The service is sold on a per-minute subscription model, charging only for the actual duration of AI-handled calls [94-99]. After finding public datasets insufficient for scale, the team built a proprietary data engine to generate and fine-tune large-scale training data, a capability that earned a Prime Ministerial award [109-115]. To date the system has processed tens of thousands of calls, with a current concurrent capacity of about 50 sessions [111-115]. Quonsys has already integrated with large enterprises such as Paytm, CRED and PropBotX, handling high-volume call traffic and delivering significant cost savings compared with traditional staffing [116-124].
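The per-minute subscription described above can be sketched as a simple metering calculation. This is an illustrative assumption only: the function name and the ₹3/min rate are hypothetical, not Quonsys pricing (the transcript mentions roughly ₹3/min as an order of magnitude).

```python
def monthly_bill(call_durations_sec, rate_per_min=3.0):
    """Bill only the minutes actually consumed by AI-handled calls.

    Rates and rounding are illustrative assumptions, not vendor pricing.
    """
    total_minutes = sum(d / 60 for d in call_durations_sec)
    return round(total_minutes * rate_per_min, 2)

# Three AI-handled calls of 90 s, 300 s and 45 s at an assumed Rs 3/min:
# 435 s = 7.25 min, so the bill comes to Rs 21.75.
print(monthly_bill([90, 300, 45]))
```

The design point is that there is no per-seat cost: an idle deployment with zero call minutes bills nothing, which is what differentiates usage-based AI agents from traditional call-centre staffing.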


Papri Labs – Pradyum Gupta presented a dash-cam and CCTV-based visual data platform that continuously updates maps, enabling use-cases such as dynamic billboard pricing, autonomous-vehicle safety, public-transport optimisation and a Delhi Transport Corporation (DTC) optimisation project [121-148]. The company processes petabytes of video, categorising and indexing footage to allow instant queries (e.g., locating non-functioning street lights) [140-148]. The DTC deployment was financed by JICA, and the AI-driven optimisation helped the corporation recover roughly ₹800 crore in lost revenue [198-199]. In response to privacy concerns, Papri Labs complies with the Data Protection and Data Privacy (DPDP) regime by blurring faces and number plates, retaining only front-camera data, and storing everything on bare-metal servers in Europe, thereby avoiding public hyperscalers [208-214][225-236]. Pricing is tile-based and per-day; for example, a 25 km² tile costs ₹1.5 lakh for a single day, targeting B2B customers [261-271].
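The tile-based, per-day pricing above can be made concrete with a short quote calculator. The 25 km² tile size and ₹1.5 lakh/day rate come from the talk; the assumption that partial tiles are billed as whole tiles is mine, not the company's stated policy.

```python
import math

TILE_AREA_KM2 = 25                 # tile size quoted in the session
RATE_PER_TILE_PER_DAY = 150_000    # Rs 1.5 lakh per tile per day

def quote(area_km2, days):
    """Price an area-of-interest over a number of days.

    Partial tiles are rounded up to whole tiles (assumed billing rule).
    """
    tiles = math.ceil(area_km2 / TILE_AREA_KM2)
    return tiles * days * RATE_PER_TILE_PER_DAY

# A 60 km^2 area for 2 days: 3 tiles x 2 days x Rs 150,000 = Rs 900,000
print(quote(60, 2))
```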


Imagix AI (EasyOPI Solutions) – Meenal Gupta introduced Imagix AI, an AI-assisted precision-imaging platform for oncology that automates organ contouring and radiation-treatment planning [289-295]. The product is HIPAA-compliant, ISO 13485 certified and holds four pending patents, operating under a human-in-the-loop workflow in which final approval rests with radiologists [348-353]. Trained on a 5 million-image dataset (30 % Indian data from remote northeast regions), the system achieves 92-99 % accuracy and has already processed over one million scans, detecting thousands of TB and cancer cases across 14 Indian states [330-347][336-344]. Recent recognition includes an invitation to demonstrate the solution to Bill Gates at Microsoft [345-347].


Indus Labs AI – Vivek Gupta presented Indus Labs AI as the voice operating system of India, offering a full stack (speech-to-text, text-to-speech, LLM, speech-to-speech) that is dialect-aware, ultra-low latency (≈ 300-400 ms) and sovereign-data-resident on Indian servers [360-368][370-393]. The entire stack runs on self-hosted GPU servers, avoiding third-party hyperscalers, which contributes to its low latency and data-sovereignty claims [360-368]. The platform is a DIY, no-code builder where users define journeys by linking nodes to webhooks or APIs; tutorials and support lower the engineering barrier [435-445]. Cost-wise, Indus Labs claims up to 70 % reduction compared with global providers such as ElevenLabs, charging roughly ₹2 per minute versus ₹8 per minute abroad [380-382]. Partnerships with telecoms (Airtel, Jio) enable SIP integration, and the company is white-labeling its technology for partners in Dubai, Germany and other markets [417-433].
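As a quick sanity check on the cost claim above: at the quoted per-minute rates (₹2 locally versus ₹8 from a global provider, both taken from the talk), the arithmetic actually works out to a 75 % saving, consistent with the "up to 70 %" framing.

```python
def savings_pct(local_rate, global_rate):
    """Percentage cost reduction of the local rate versus the global rate."""
    return round(100 * (1 - local_rate / global_rate), 1)

# Rs 2/min vs Rs 8/min -> 75.0 % cheaper at these quoted rates
print(savings_pct(2, 8))
```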


Cross-cutting themes – A clear consensus emerged around DIY/no-code platforms that let non-experts build AI-driven automation, voiced by the founders of Technodate AI, Quonsys AI and Indus Labs AI [28-32][68-71][435-445]. All three also highlighted strategic collaborations with large organisations (IITs, Indian Air Force, OpenAI, JICA) to validate and scale their technologies [45-48][74-75][198-199]. Data-privacy and regulatory compliance were recurring priorities: Papri Labs detailed DPDP-compliant blurring and European bare-metal hosting [208-214][225-236]; Imagix AI stressed HIPAA and ISO certification [348-353]; Indus Labs underscored Indian data-residency [389-393]. Divergences appeared around the necessity of building foundational models: Ravindra argued that limited funding makes a home-grown model impractical, favouring application-layer development [33-40][55-60], while audience members cited scalability concerns and the Servam failure as cautionary examples [103-108]. Vaibhavath acknowledged that Quonsys currently targets large enterprises and that pricing and scaling for SMEs are not a priority, a point probed by the audience as a potential market gap [111-115][103-108]. An audience query about incentives for dash-cam owners was met with a blunt reply that the platform does not pay contributors; instead, data providers pay for the service [244-246].


Closing – Archana Jahargirdar thanked the founders, encouraged continued one-on-one conversations, and asked attendees to gather for a group photo [468-470].


Overall, the summit succeeded in creating a knowledge-sharing forum where founders showcased the technical core of their AI products, discussed practical deployment hurdles, and collectively affirmed the importance of localisation, privacy and partnership-driven scaling for advancing India’s digital economy [2-8].


Session transcriptComplete transcript of the session
Archana Jahargirdar

Thank you. Thank you. So how do founders learn about these changes? The only way you can learn, or maybe the best way to learn, at a conference like this, at a summit like this, is by listening to each other. So it’s not a pitch: there’s not going to be any talk about business, there’s going to be no funding conversation, it’s only about product. So I’m going to request all the founders who are presenting to come up, and then we’ll go sequentially. And the other request on the presentations to all the founders who are presenting is: please feel free to use jargon, because the intent is that the audience will understand it. However, also be mindful that if you want more people to learn, who may not be AI natives, may not be technologists, it’s still important for them to learn and understand.

So if you can simplify it, it’s fine. If you don’t want to simplify it, it’s also okay. So the format we’ll follow is that each one of you takes a little bit of time to talk about your product. But like I said again, only product. No business, no pitching, no money, nothing. So to start with, I’m going to request Ravindra Kumar to talk about what it is that you’re building. A quick introduction and then the product that you’ve built.

Ravindra Kumar

Hi everyone, this is Ravindra from Technodate AI. And we are aiming to automate automation itself. Everybody says AI won’t take away any jobs. We’re like, let us do something about it.

Archana Jahargirdar

Do you want to stand there at the podium or you want me to start? Whatever, whatever. No, no. Yeah, you can start your presentation. Shall we do that? Yeah, yeah, we should. Because I want people to really get into the product.

Ravindra Kumar

Can I use the clicker? What generally happens is that there is already very sophisticated automation equipment available out there in the market. Before starting Technodate, I had been working with this company, which happens to be the world’s largest manufacturer of industrial robots. Way back in the year 2010 or so, they achieved 100 % automation, which means no human on the shop floor; still, the manufacturing happens at 100 % capacity. On the other side, if you see globally, including India, manufacturing automation is not even that successful. People are not able to use automation to the fullest extent. That is something which Technodate is aiming to solve. We want to make automation as easy as DIY using agentic AI. So what it does is basically help you in three ways.

First, to conceptualize a robotics and automation engineering solution on your own. Then to deploy and commission that, including robot programming, etc. And then eventually it also helps you to troubleshoot when something doesn’t work. For this, we started with the thought that we have to do something like this, because the idea of this discussion is how we go from experimentation to real-world deployment. So when I came up with this idea, the first thought was that you need to build a foundational model. But we are in India; it’s not that easy to raise money to build a foundational model. So how do you approach this? The idea is: okay, let us go talk to customers.

Let us experiment with what all options are available out there and then figure out in the process: do we need a foundational model? So we started working, started talking to customers, started doing some initial deployments. Today we stand back where we started from: we need a foundational model for this. But in the process, we have already started deploying applications, including with Fortune 500 companies. This is where the team comes from. I’m sorry, Archana, but being a founder, some pitching comes in by default. But then, yeah, this is how the team looks. We are collaborating with people like Dr. Sumit Chopra, a PhD under the godfather of AI, Yann LeCun.

He worked at FAIR earlier. We are exploring, or rather we are going to deploy, a use case very soon with the Indian Air Force itself. Of course, the team comes from IITs. I have a small demo to show everyone. There’s some music to this, but it’s just music; there’s no audio in any case.

So what it does is, as I said, three modules. It helps you to conceptualize a robotics and automation solution, and it helps you to build that. What the agentic AI does is mimic an automation expert: it really finds out what it takes to deploy that solution in a real-world scenario. It gives you the complete architectures, it gives you the programs, it gives you the step-by-step procedures for how you put these systems together. And of course you can also ask it to make changes. What happens in an industrial scenario is that you change one piece of equipment, it has to talk to all the other equipment, so everything changes. So it does all that on its own, autonomously, using agents in the background. You can also see how that solution, conceptualized by the agents, will look on your factory floor.

Then, when it comes to robot programming, many people ask me: ChatGPT can’t do this? Why do you want to build a foundational model for this? When it comes to robotics, we are necessarily interacting with the real world. You have to understand what the object is, what needs to be done, how the robot needs to move. All that data needs to be injected into the systems; only then can robot programming be done.

Then there is something called CNC programming. CNCs are the mother machines: every aerospace component, every automotive engine, be it two-wheeler or four-wheeler, they’re all machined on CNC machines. For that matter, to build other machines you need a CNC machine. So all those programs can also be generated using agentic or generative AI.

In the defence use case, for example, this is the case of an aero engine where you just say the error code, the 3D model explodes, and the generative AI tells you where and what steps to take to solve that particular problem. You said the error code; it shows you where in the whole machine that error belongs and the steps you need to take to solve the problem. So yes, that is it from me. We are exhibiting at Hall 14; see you all there if you want to discuss more.

Archana Jahargirdar

So, does anybody have questions on the product? You, including the founders sitting on this panel, can ask questions on the product. Any question? Yes, please.

Ravindra Kumar

We’ll use their model. See, I am not fond of building a foundational model; my aim is to solve the problem of my customer, absolutely. So one thing: these kinds of tasks will never, in human history, be simple chat-response scenarios. You need complex workflows, right? So even if, let us say, OpenAI wants to do it, they will have to build a custom application for this, right? So this is an application layer. The model can reach ASI, the super-intelligence level; you will still have to build the application. So that is our first approach. Second is the industrial domain. Even if OpenAI wants to do it today, they will have to build a foundational model for this, separately.

Because it is related to the industrial world, the actual 3D world. The data is proprietary; the customer doesn’t share it with you. So your application has to run on their premises or on virtual clouds.

Archana Jahargirdar

Okay, thank you. Vaibhav, you are next.

Vaibhavath Shukla

Thank you so much. First of all, I would like to thank Karan and Archana from Rukam Capital for giving me this opportunity. India doesn’t need more wrappers; we need infrastructure, and that’s what we are building at Quonsys AI. My name is Vaibhavath Shukla, and I’m the founder and CEO of Quonsys AI. We are building the voice infrastructure for India. India is the customer support capital of the world: it is a 55-billion-dollar industry, roughly 2 % of India’s GDP. And the problem is that this entire model is outdated in the agentic era, so that’s what we are solving. We asked ourselves: can the call centers themselves be automated, so that call centers run completely by themselves? So we started solving this problem, building from scratch exactly what is required to automate the entire call center, and that’s what we initiated with Quonsys AI.

So Quonsys is the default layer wherein you don’t need humans in the loop; it can automate the entire call center and BPO infrastructure, and we can run the processes completely end to end. These systems can listen, understand, act, respond and solve the entire purpose for any particular use case. So it’s not a concept anymore. We have been working with some of the top enterprises like Paytm, CRED, PropBotX. We are also partnered with OpenAI for the infrastructure. We are working with them on voice and Indic-language infrastructure, which we have developed with our own data engine, and we can generate data at scale. So we are different in a way, because we have solved the entire stack, be it the application layer, the orchestration layer, the model layer or the data layer itself.

So, for example, anything and everything that is required, we are basically making the entire suite of the…

automation layer for call centers. So you can say call centers are completely running on their own. We have built companies before. We have a really good research team which is helping us develop the entire foundational layer of it. And we have deployed some of the use cases; we have already worked with some of the large enterprises. Yeah, and I’m happy to answer questions.

Archana Jahargirdar

Any questions on the product?

Audience

Yes, yeah. So the call lands on somebody’s phone.

Vaibhavath Shukla

Correct.

Audience

So it’s like again a kind of thing.

Vaibhavath Shukla

Correct. Those kinds of scenarios, yes, it can handle. Can you be more specific on the use case?

Audience

Yeah, for example, I generated a lead on Google Ads, or say a training on digital marketing, right?

Vaibhavath Shukla

yeah

Audience

so that customer is calling to a particular number

Vaibhavath Shukla

correct

Audience

This lands on, say, this phone.

Vaibhavath Shukla

yeah

Audience

So can I put this agent onto this phone, so that it can attend that call and answer according to my requirements?

Vaibhavath Shukla

Yeah, it can definitely do that. What it will do in the back end is have a WebSocket handshake: your number and the other number that we have can basically be merged together, and the conversation can flow from there. It can answer the questions, because these are dynamic questions; it’s not a fixed kind of question.

Audience

right

Vaibhavath Shukla

Right, it can. It works on all the knowledge that you’re going to give it. So, for example, I’ll give you a use case of real estate. If somebody makes an inquiry about a real estate project, you basically fill in the form and we get the number. The AI agent will make the call. It is already trained on the entire data set of your real estate project: where it is, the per-square-foot cost, the amenities, the locality, all those things. It will talk to you on the basis of all that information, it will record your interest level, whether you want to visit the site or not, and then it will automatically book the site visit as well. And you can trigger SMS, WhatsApp, email, whatever you require. So everything that was previously done by a call center agent is completely automated using AI agents, and it’s an end-to-end process. Basically, whatever purpose you have given it, it can completely solve for that.

Audience

And can institutes or companies take this on a standalone basis, or have you put it in a subscription mode?

Vaibhavath Shukla

It's more of a per-minute subscription at this point. You set it up one time, and then you pay for whatever number of minutes you consume with us.

Archana Jahargirdar

Okay. Any other question?

Audience

Yeah. I mean, you talked earlier about building foundation models for Indian languages, right?

Vaibhavath Shukla

Yeah.

Audience

So could you tell me how you're scaling that? Because foundation models are very good for demos, but when we scale, we have even seen Sarvam breaking.

Vaibhavath Shukla

Right.

Audience

So how are you…

Vaibhavath Shukla

That's right. We basically gave a demo with Sarvam and… Guys.

Audience

Well, that was too loud, but then, yeah, how are you thinking of combating that scenario?

Vaibhavath Shukla

The main thing is basically the data engine. The data you train it on is the most important piece. Initially we tried it with Bhashini and Google data sets, all the publicly available libraries: we tried to fine-tune, to train the model on that data. But unfortunately, like you mentioned, there are so many problems with that. So that's why we built our own data engine. We won an award from Prime Minister Modi for it as well, right here in Bharat Mandapam last year. So we generate data from our own data engine, and that is what we put into the model, use case by use case. For example, Paytm is working with us at scale; we are making tens of thousands of calls with Paytm, and for those use cases we start from what exactly the use case is: a merchant, for instance, is a very complex use case. Concurrency is around 50 right now, and as the model grows we will increase it.

And there are two kinds of problems. There are smaller companies employing five or ten people; that's not where we are currently focusing, and it's not that the industry can't focus there: the pricing will come down drastically in the next couple of years. But there are companies like SBI Insurance employing tens of thousands of people in a single building: real estate, security, parking spaces, HR management, team management, subscriptions, headsets, machinery. If you take all of that down to the per-minute level, it costs roughly 25 to 30 rupees per minute, whereas this costs maybe three rupees per minute. That's roughly 90 percent of the cost saved for those kinds of companies. That's where the current market is, and that's what we are focusing on.
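The roughly 90 percent figure is consistent with the per-minute numbers quoted: a fully loaded human call-centre minute at Rs 25 to 30 versus about Rs 3 for the AI agent.

```python
# Sanity check on the quoted savings, using the midpoint of the range.
human_cost_per_min = (25 + 30) / 2   # Rs 27.50, midpoint of Rs 25-30
ai_cost_per_min = 3.0

saving = 1 - ai_cost_per_min / human_cost_per_min
print(f"{saving:.0%}")   # about 89%, i.e. the ~90% claimed above
```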

Archana Jahargirdar

Thank you, thank you very much. A round of applause for all the founders, please. I now request Pradyum Gupta to come and present. And be generous with the applause and with the questions, please; the founders are taking time out to talk about their products.

Pradyum Gupta

Thank you, ma'am, for providing me this opportunity. Hi everyone, my name is Pradyum, and I am representing Papri Labs. A simple example of what Papri Labs actually does: today you all came to Bharat Mandapam. If you are from Delhi you might not have used a map, but I am from outside Delhi, so I was using a map from IIT Delhi to Bharat Mandapam. It told me certain gates were open, but all of those gates were closed, and I was driving around the parking areas looking for a way in. This is a common problem in map systems today.

What map systems have done is build a great navigation system: you want to go to a particular place, anywhere in the city, and it takes you there. But that navigation will never have the kind of situational awareness you actually require. There could be a place where the gates are closed, something could be happening, a very heavy fog could have set in, and none of that is updated. What our company does is update the map, any existing mapping system, almost instantly. How do we do it?

We work on a visual system. Any vehicle on the ground can carry our cameras: simple dashcams, CCTVs, anything visual. We place these all over the cities (as of today we work only in the metro cities and on the major highways), take out all the data, and plot it on the map. And we don't just place the videos or images, we categorize them. For example, right now in Delhi we work with the local transport operator, DTC, the Delhi Transport Corporation. We fitted about 8,000 units and were getting around 100 petabytes of data from them.

And we categorize it so fast that you can basically watch the life of the whole of Delhi, and the best part is that you can search it: what's going on where. So what are the use cases? We have brought roughly three to market. For example, there's a company called JCDecaux; they own about 4,000 billboards all over New Delhi. The problem with billboards is that they come at a standard price: if you own, say, 10,000 billboards, you usually sell them by area, charging more for a posh area and less for a less posh one. What we brought was a new pricing mechanism: charge on the basis of impression count, the way digital ads are priced. That's how we were able to increase their revenue by about 40 to 45 percent, because now they were charging on an impression basis. We also worked with MG Motors on their Hector vehicle. When they were entering India, they were positioning it as an internet-connected car, and one of the problems they had was that their luxury passengers wanted to know what was happening on the road instantly: even in fog, they need to know whether the divider ahead is broken or not.

Am I safe in there or not? So we started providing updates. There's a company called MapmyIndia; we started to update their systems very fast. Third, we worked with BCG, a consulting firm which consults for the government on decisions on the ground. We told them where the demand is and where the capacity is high, and that's how we built a route-rationalization algorithm. It helped DTC manage all their 8,000 buses: where they need to deploy more buses so they can increase revenue, and, from the second perspective, so that more passengers can actually board. So we update the map for whatever cases they want.

Now we have been penetrating into news. The normal daily newspaper you read has imagery attached to it, and about 8,000 to 10,000 people on the ground do the basic job of collecting those images. Because we have a huge volume of video, we just keep it updated and they create news out of it: any news you want to create, you can search for the footage there. How do we do it? Because this is more of a product business, one of the problems we faced in India when trying to scale is that even though everyone talks about AI, if I asked any single passenger to put a phone in their car and provide us data, none of you would do it.

And this was a very basic problem. We realized that people want it; in India the perception is that people absorb technology really fast, but getting them to hand over that information is very hard. So we created a mechanism in which the customers started to supply the data themselves. For example, when we started to deal with passenger bus services, we gave them passenger counting, because the problem was that they didn't know how many people board the bus at a particular bus stand, or where they needed to run more buses. When we started to deal with logistics companies: logistics companies use digital locks. Today, any truck that goes across the country carries a digital lock, and they expect that the truck is safe. But that digital lock still gets opened, and none of it is evidence in a court. What we added was a small camera in the container.

That camera counted how many goods went in and how many came out, the exact tally. That's how we deployed in the highway sector. So there are three sides to the data: from passengers we get city data; from the logistics sector we get highway data; and from normal commercial cars we get lane information. And the approach was to take just the front imagery; the back imagery they use for themselves, and we are certainly not interested in that part. That's how we built this entire information layer.

One problem we faced with one of our customers: we reached out to the Delhi Police and started to sell them this entire platform. They wanted to search for everything, for example where people are not wearing helmets, because they want to issue challans instantly. Now, we had created layers, but we didn't have a system for creating very dynamic layers for a particular user. So that's where the LLM came in: we started to describe every image, and then internally we search everything for them. So whatever idea you have in mind, for example a person comes to me and says, find me all the CCTVs

in New Delhi, or find me the street lights that are not working: you just prompt it. Internally we are a video-analytics company; we run on bare metal, around a hundred petabytes, and we process it really fast. And the best part we brought is comparison: what changed now versus six days back, or a year back, what development was going on. This is how the end customer benefits. For example, a local bus company wants to know how many passengers actually board, so we provide them a system, but internally we use the front-camera feed; they use it as a fleet-management system, and we bring our platform over the top. This is an example we built with DTC; it was funded by JICA, an investment corporation which funds the government of New Delhi, and that's how we scaled across the whole of New Delhi. Second, if you want any count, how many cars went through, how many buses passed a particular stretch, where the two-wheelers are, or where the ambulances cross, we extract all of that information across New Delhi, all in real time.
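The "describe every image, then search the descriptions" layer can be pictured with a toy version in which the captions are given and search is plain keyword matching. In the real system an LLM would generate each caption and retrieval would presumably be semantic; the frame ids and captions below are made up.

```python
# Toy version of the prompt-driven search layer described above.

frame_captions = {
    "cam_042_f001": "street light not working near bus stop, two cars parked",
    "cam_042_f002": "rider without helmet on two-wheeler at signal",
    "cam_017_f010": "cctv pole, street light working, empty road",
}

def search(prompt: str):
    """Return frame ids whose caption contains every word of the prompt.
    A production system would embed captions and rank by similarity."""
    words = prompt.lower().split()
    return sorted(
        fid for fid, cap in frame_captions.items()
        if all(w in cap for w in words)
    )

hits = search("street light not working")
```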

So if you want to do it today, compare it over the last six months and then track and target, you can do all of it. We brought these pay-per-impression systems because JCDecaux was the organization for… So we are…

Audience

Yes, hello. Thank you so much for presenting. I'm curious: you mentioned a certain number of petabytes of data that you are using, and data is a very debatable topic right now after the DPDP Act. So how are you handling that? How are you DPDP compliant? Because you are going to give this to other businesses as well, and you are collecting a lot of personal data: images of people, images of car number plates, and so on. How do you ensure DPDP compliance?

Pradyum Gupta

So there are two things. One, we never take inside-vehicle video out for public use, even though clients are ready to pay ten times the value; that is an internal rule at Papri Labs. Two, only front-camera data is used, and faces and number plates are blurred. And third, right now we don't run on AWS; we don't use hyperscalers at all, only bare-metal servers. Bare metal means stacks we keep in Europe right now: we have taken a portion of Hetzner's data centres. And one more thing: in India there's a big problem.

One of the things people say is that GPUs cost a lot, but the reality is that you don't need to purchase from the companies selling these GPUs; you can purchase from C-DAC instead. C-DAC runs a supercomputer called AIRAWAT, which provides us compute at a dirt-cheap price. So look up AIRAWAT, purchase their GPU time, keep your data on bare metal on your own secure premises, and it's very safe. And it's very cheap.

Archana Jahargirdar

Okay, one more question. And at the end, once everyone has done their presentations, we'll take more questions, so please go for it.

Audience

I want to ask, because I'm curious about the product: what incentives are you giving to the dashcam holders? I heard you give incentives to the local DTC buses, or…

Pradyum Gupta

So we don’t pay incentives, they pay us.

Audience

So what leverage do you hold for them to…

Pradyum Gupta

For example, this company, DTC, the Delhi Transport Corporation, burns about 80 crores every year on not providing timely bus service, and they had a revenue loss of about 800 crores, as I learnt when I spoke with IAS Sachin Shinde, who was there back then; now IAS Jitendra ji has come in. When we came into the system, we actually reduced that revenue loss for them. For example, if you see this number, 27 is the demand and 25 is the capacity. And in India, when the Aam Aadmi Party came in, they made bus travel free for female passengers, and every other party started to criticize them for it.

We were the first company which actually measured the real share of female passengers riding, around 1 percent, and that's how they were able to defend the numbers. So when we came in, we saw there were a lot of operational issues.

Audience

Can we access it through our apps, or…

Pradyum Gupta

No, that's not possible. We are a pure B2B company; we never intend to be B2C.

Archana Jahargirdar

Okay, we'll do questions at the end; let's finish the presentations. Okay, quickly, but a short answer on your part, please.

Pradyum Gupta

So we sell on a tile basis. For example, this particular area comes as a 25-by-25-kilometre tile, which starts at 1.5 lakh rupees per tile, valid for one day only, and the price usually multiplies with the volume a company commits to. Thank you so much.

Archana Jahargirdar

So, now I’ll request Meenal to come and present, please.

Meenal Gupta

Hello everyone, I am Meenal Gupta from EasyOPI Solutions, so nice to see you all here. Who here is a founder? Oh wow, so many. I love to be with founders: they share the journey and the struggle, and they know it very well. We are three women, mostly known as GreenDeviya, a name given to us by Mr. Narendra Modi. I am Meenal Gupta, the founder, along with Noor… and Sheetal Tarkas; we all started this journey together. Our platform is named Imagix AI.

It's an AI-driven platform for precision imaging to treatment planning in cancer. We are HIPAA compliant, we have four patents in hand, and we are an ISO 13485 certified company. We also have a CDSCO license. Talking about CDSCO: people from the medical field may know that this license is required when you want to take your solution to hospitals. There is an agency, ICMR, that certifies your product as software as a medical device; once it is certified, you can take it to any hospital and actually commercialize after that. So we are a CDSCO-certified company. Now, coming to the problem: many of you here are from the medical field.

We know there are around 20 million new cancer cases every year. And it is not that doctors lack the intent to treat cancer; the main problem is the shortage of clinical experts. Diagnosis devices can be increased, imaging capacity can be increased, but the bottleneck is the shortage of oncology expertise, and finally the treatment planning. Once a cancer is detected, the patient is sent for a CT scan or MRI, and once that is done, the tumour board decides whether the patient has to go for radiation therapy, for surgery, or a combination of both.

I can understand that everyone can relate to this, because almost every family in India, and in the world, has someone near and dear who has faced cancer and gone through these challenges. Because of this shortage, it costs lives, or the stage of the cancer changes, from the first stage to the second, simply because of this unavailability. This is where our solution comes in. All three founders have personally seen cancer among our near ones, and we have gone through radiation therapy where we had to wait in a queue because of the unavailability of specialists and of treatment planning.

This was a very big bottleneck, as you can see here. Once a patient is recommended for radiation therapy, there is planning to do for that therapy. In this planning there is a manual process: wherever there is a tumour, all the organs surrounding the tumour have to be segmented, and this is done by hand. I can proudly say that in India no one else is solving this problem; we are the only ones who have this solution.

Here we contour, that is, mask, all the organs that are at risk and surround the tumour. The purpose is that the healthy organs surrounding the tumour should be spared, so the radiation dose to them is as low as possible. The manual process used to take somewhere around 60 to 90 minutes; we have reduced it to at most 15 minutes, and a minimum of 5, the maximum being for complex cases such as head-and-neck cancer, which take a lot of time. What we do: once the patient is diagnosed with cancer and the CT scan is done, it is uploaded to our cloud. The tumour board has access to the CT scan through our own DICOM viewer (DICOM is the format in which these images are stored and viewed). They decide whether to go for radiation-therapy planning or for surgery, and we have various suites. We do a first level of AI analysis, where we report the tumour load, and a second level through our XraySuite, NeuroSuite and OncoSuite, which work on these scans; finally we give the final report.

So this is our product. We have trained our AI on a data set of about 5 million scans, of which around 30 percent is Indian data, gathered from the north-east region. The north-east is very tough terrain, and we got support from NITI Aayog when we went and collected data there. Taking AI there is very challenging because 4G has not reached everywhere yet, so we had to implement an on-premise solution to get the data and help them. We have deployed in 14 states in India, and we got that 30 percent of our data from there. Accuracy is around 92 percent, ranging from 92 to 99 percent depending on the complexity of the data.

We are working in seven districts of Gujarat, helping with CXR chest and lung analysis. We have made somewhere around 1 million scans and detected around 4,000 TB-positive cases so far, among which there were around six lung-cancer cases where early intervention was still possible. We have done around a thousand radiotherapy plans so far, and in the last three months we did around fifty thousand chest X-rays, in which twenty-seven hundred TB cases were flagged, so we could catch the TB early. These are live photos, where handheld X-rays and so on are being done. Our solution was recognized first by Mr. Narendra Modi, and the day before yesterday we were invited to Microsoft by Bill Gates to show our solution to him.
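Accuracy figures like the 92 to 99 percent quoted for auto-contouring are typically reported as an overlap score between the AI mask and a radiologist's reference mask. The speaker does not name her metric; the Dice coefficient below is the standard choice, shown on a toy mask.

```python
# Hedged illustration: scoring an auto-contour against a radiologist's
# reference mask with the Dice overlap (1.0 means identical masks).

def dice(pred: set, truth: set) -> float:
    """Dice coefficient of two pixel sets."""
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy organ-at-risk mask on a grid: the AI contour misses one true
# pixel and adds one spurious pixel.
truth = {(0, 0), (0, 1), (1, 0), (1, 1)}
pred = {(0, 0), (0, 1), (1, 0), (2, 2)}

score = dice(pred, truth)  # 2*3 / (4+4) = 0.75
```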

Audience

In health tech, I’ve observed that trust is a very big factor in terms of AI adoption and you seem to be implementing it across India. So how do you make sure that the technology and the science behind it is trusted by the people who are being benefited by it?

Meenal Gupta

Yeah, I understand. Our solution is not replacing doctors; we are assisting them. We have turned their manual process into an easy one, but the final approval has to be given by radiologists, so it is human-in-the-loop. We are not claiming that our AI will solve it directly.

Archana Jahargirdar

Thank you. I'll now request Vivek to come.

Vivek Gupta

Hi everyone. First of all, thank you so much, team Rukam Capital, for organizing such a vibrant event; the energy in this room is high, I can see, though it can be higher. My name is Vivek Gupta, and I am the founder and CEO of a company called Indus Labs AI. We are building the voice architecture of India: the whole operating-system layer of voice, where all the layers, speech-to-text, text-to-speech, the LLM, speech-to-speech, all of this infrastructure, are built by us. It is a common platform where anyone, and I mean anyone, can come and build their own voice agent.

As sir was asking earlier about running a campaign on Google with a number attached: you can build your own agent by yourself. It's a DIY platform, and we are primarily focusing on Indian languages, because the linguistic problem in our country is that the dialect changes every 20 kilometres. We are working with a couple of banks and NBFCs, and whenever they run a cold-calling campaign in, say, the Muzaffarnagar region of UP versus the Gorakhpur region, the Hindi is totally different. But for global players like ElevenLabs, you know, these Indian dialects are all treated the same.

Global players like Azure and Google provide a generic Hindi, but we need a company in India that can build the voice infrastructure for our country based on our dialects. And while we are building this infrastructure on our own GPUs and servers, with the hyperscaler layer built into our system, we have been able to reduce latency: we have sub-400-to-500-millisecond latency in the system, which feels like a more human conversation. And the complete analysis of the call is there: as soon as the call gets disconnected, you get the sentiment analysis and the outcome of the call, and it is logged into the system.

The expected outcome of the call goes into your CRM; the journey starts from your CRM and ends with the CRM, since you trigger the calls from it. So, as I said, we have native dialect mastery and ultra-low latency. Somebody was also asking how effective it is in terms of cost: compared with the existing system, we reduce cost by up to 70 percent. And operationally you get 24/7 availability, and the system is multilingual, so you don't need multiple people for different languages.

A single system can handle it 24/7, and that's how you reduce cost and become operationally efficient. And an important part is emotion handling. I started this company 2.5 years back; I used to be a director of engineering at a software company in Bangalore, and 2.5 years back I quit and started Indus Labs. My background is from IIT Delhi. The core problem when I started was emotions: I always thought, if somebody is laughing on the call, how would the AI system recognize whether the person is happy or angry? That determines whether the agent should apologize or congratulate you, because you need to understand the emotions.

So we have been working on this speech-to-text model for the last 1.5 years, and on the 16th of this month we launched it right here in this venue: a model that brings emotion recognition to STT. We are now distributing it to our existing customers so they can start using it; it's in a PoC phase right now. And the good part is that, since we are an Indian company, the data resides here; the sovereignty is there, and we are a pure Indian-origin company. As I said, comparing with global players like Google and ElevenLabs (I hope many people know what ElevenLabs is): their cost is somewhere around eight rupees per minute, and we sell at two rupees per minute, roughly seventy percent lower. We are superior in Indian dialect accuracy, our streaming latency is around three hundred to four hundred milliseconds, emotional expressiveness is already in our system as we recently launched it, and an Indian data-residency clause is obviously there because we are an Indian company.

So we are a use-case-agnostic platform; we don't claim mastery of only one use case. You can come onto our platform. As of today we are working with multiple use cases: banks, enterprises in FMCG, customer-support teams. They build their own voice agents but use our STT and TTS through the API, because not everyone can build STT and TTS. So instead of using ElevenLabs, they use us, because we are cost-effective and obviously strong in Indian dialect mastery. It's a DIY platform; anyone can come and build their own agent.

We have different flows; a workflow builder is already there. You can create nodes, each node can be connected with webhooks or APIs, and that can be used to build your own voice agent. It's a completely guided journey, and you can also integrate your voice agent with telephony: we already have partnerships with Airtel and Jio, so you can get SIP channels through them and connect them to your voice agent. It's a complete end-to-end journey.
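The node-and-webhook flow builder just described (create nodes, wire each to webhooks or APIs, and the journey runs end to end) can be sketched as a tiny graph of step functions. The node names and logic here are hypothetical, not the platform's actual API.

```python
# Toy workflow engine: each node is a step, and its return value names
# the next node (None ends the flow).

def greet(ctx):
    ctx["log"].append("greeted caller")
    return "qualify"

def qualify(ctx):
    ctx["log"].append("lead qualified")
    return "book" if ctx["interested"] else None

def book(ctx):
    # A real node would fire an HTTP webhook here (e.g. calendar booking).
    ctx["log"].append("webhook: calendar booking fired")
    return None

NODES = {"greet": greet, "qualify": qualify, "book": book}

def run_flow(start, ctx):
    node = start
    while node is not None:
        node = NODES[node](ctx)
    return ctx["log"]

log = run_flow("greet", {"interested": True, "log": []})
```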

And again, the core market is B2B enterprises. We are also a platform for developers: developers can use our APIs in their existing systems wherever they want. You just use the API, and the costing is per second: for however many seconds you use, credits are deducted. It's a recharge-based system; you recharge and you use it. And we are also building channel partners: we have a couple of partners, one in Dubai, one in Germany, for whom we white-label our platform, so they can onboard their clients on their own platform.

Those clients indirectly become ours, and we share the revenue. We have developed four to five partners globally right now, so we are building from Bharat for the globe. We have foreign languages as well: Arabic, German, French and Mandarin (Mandarin is in the building stage right now). We have our core languages and English in all the accents, Australian, British, American, with all the male and female voices, and you can clone your voice as well.
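The recharge-and-deduct, per-second billing described above can be sketched as a small credit meter. The rate below (two rupees per minute, matching the figure quoted earlier) and the class name are illustrative, not the company's actual pricing logic.

```python
# Sketch of recharge-based, per-second metering: credits are topped up,
# then deducted for each second of usage.

class CreditMeter:
    def __init__(self, rate_per_second: float):
        self.rate = rate_per_second
        self.balance = 0.0

    def recharge(self, amount: float) -> None:
        self.balance += amount

    def use(self, seconds: int) -> bool:
        """Deduct for usage; refuse if the balance would go negative."""
        cost = seconds * self.rate
        if cost > self.balance:
            return False
        self.balance -= cost
        return True

# Two rupees per minute is about 0.033 rupees per second.
meter = CreditMeter(rate_per_second=2 / 60)
meter.recharge(100.0)
ok = meter.use(600)        # a 10-minute call costs about Rs 20
remaining = meter.balance  # about Rs 80 left
```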

Archana Jahargirdar

Thank you. Any quick questions?

Vivek Gupta

Yep, absolutely. It's a no-code, journey-based platform: you define what you want to build out of it. Say you want to build an inbound agent for your leads; anybody who calls from your Google Ads campaign lands on this voice agent. You define the journey and how you want to integrate. For example, when a meeting is fixed for your product, it's connected with your Google Calendar, so as soon as the AI agent books a meeting, you get an email and the slot is blocked on your Google Calendar.

Audience

So my question is: are there some nodes I have to connect to make a flow, or is everything already made, so we just have to click and the agent will start working?

Vivek Gupta

Yeah, it's a DIY platform, and we have tutorials as well, right? If you are stuck, you can watch the tutorials. And if you still feel you are not able to build it, you can contact the customer support center, and our team will help you in that case.

Archana Jahargirdar

Okay, quick question.

Audience

Yeah. How did you start? When you started, did you leave your job?

Archana Jahargirdar

We don't have much time, so you can talk to them offline, but do answer the data question.

Vivek Gupta

I'll make it short. The journey started 2.5 years back. I initially started building a voice agent using somebody else's TTS. Then we figured out that this TTS had issues, because my customers would raise pronunciation complaints with me. We were able to solve this problem of mispronounced words, since these issues already existed in the third-party system. So we thought of building our own infrastructure, and we pivoted to a model where people use our APIs; that's what we wanted to build, right? So, first we used publicly available data, and then we started creating our own data, right?
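Fixing specific mispronunciations ahead of a third-party TTS engine is commonly done with a substitution lexicon applied before synthesis. This sketch, including its example entries, is an assumption about how such a fix could look, not the company's actual pipeline:

```python
import re

# Hypothetical lexicon: map words the TTS engine mispronounces to
# spellings (or phonetic respellings) it renders correctly.
PRONUNCIATION_LEXICON = {
    "Paytm": "pay T M",
    "UPI": "U P I",
}

def normalize_for_tts(text, lexicon):
    """Replace known-problem words (whole-word, case-sensitive) before
    the text is handed to the TTS engine."""
    for word, respelling in lexicon.items():
        text = re.sub(rf"\b{re.escape(word)}\b", respelling, text)
    return text

normalize_for_tts("Pay via UPI on Paytm", PRONUNCIATION_LEXICON)
# → "Pay via U P I on pay T M"
```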

So we create the data, and we have multiple hyperscalers available. Scalability-wise, our system can handle a thousand requests at a time; it will scale from 0 to 1,000 within 10 minutes. That's how we built it. Thank you very much.

Archana Jahargirdar

Thank you so much for all the engaging questions that everybody made the effort to ask. We are time-constrained here, so I want to thank all the founders for sharing your products. For any questions you have for the founders, please do connect with them and continue the conversation; it's just that we need to leave the room. And I request all of us to take a quick picture together. Thank you.

Related Resources: knowledge base sources related to the discussion topics (34)
Factual Notes: claims verified against the Diplo knowledge base (1)
Confirmed (high)

“Moderator Archana Jahargirdar opened the summit with a strict “product‑only, no‑pitch” directive, asking presenters to focus on product details.”

The knowledge base states that the session was moderated by Archana Jahargirdar, who emphasized that presentations should focus purely on product details, confirming the reported directive.

External Sources (110)
S1
Scaling Innovation Building a Robust AI Startup Ecosystem — -Shri Ashok Gupta: Title – Director STPI Gurugram; Role – Dignitary presenting mementos Hi, I’m Meenal Gupta, founder o…
S2
Founders Adda Raw Conversations with India’s Top AI Pioneers — 1230 words | 154 words per minute | Duration: 478 seconds Hello everyone, I am Meenal Gupta from EasyOPI Solutions and…
S3
https://dig.watch/event/india-ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — Accuracy is around 92%. So it is around 92 % to 99 % depending upon the data. complexity you can see this data we are wo…
S4
Founders Adda Raw Conversations with India’s Top AI Pioneers — Pradyum Gupta from Papri Labs showcased a real-time mapping system that updates existing maps using visual data from das…
S5
Founders Adda Raw Conversations with India’s Top AI Pioneers — Agreed with:Meenal Gupta — Importance of regulatory compliance and certification for market adoption Agreed with:Pradyu…
S6
Founders Adda Raw Conversations with India’s Top AI Pioneers — -Archana Jahargirdar- Conference moderator/host from Rukam Capital, facilitating the founder presentations and Q&A sessi…
S7
Founders Adda Raw Conversations with India’s Top AI Pioneers — This discussion was a product showcase session at a technology summit where AI startup founders presented their solution…
S8
Founders Adda Raw Conversations with India’s Top AI Pioneers — 1130 words | 163 words per minute | Duration: 414 seconds Thank you so much. Thank you. Thank you. Thank you. first of…
S9
S10
Comprehensive Report: Preventing Jobless Growth in the Age of AI — He’s a commissioner at the European Commission. His focus is on the economy and productivity. Ravi Kumar is the CEO of C…
S11
Founders Adda Raw Conversations with India’s Top AI Pioneers — Speakers:Ravindra Kumar Speakers:Ravindra Kumar, Audience Speakers:Ravindra Kumar, Vaibhavath Shukla, Vivek Gupta
S13
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S14
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S15
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S16
Founders Adda Raw Conversations with India’s Top AI Pioneers — 1679 words | 193 words per minute | Duration: 519 seconds Hi everyone. So first of all, thank you so much team Rukam C…
S17
https://app.faicon.ai/ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — And again, so the core market is B2B enterprises. Right. And we are also platform for developers. So developers can use …
S18
Founders Adda Raw Conversations with India’s Top AI Pioneers — Hi everyone. So first of all, thank you so much team Rukam Capital for organizing such a vibrant event and the energy is…
S19
How AI Drives Innovation and Economic Growth — Thank you very much. So, of course, creative destruction is an important driver of economic growth in the long run. So t…
S20
How AI Drives Innovation and Economic Growth — Thank you very much. And so, of course, creative destruction is an important driver of economic growth in the long run. …
S21
Masterclass#1 — The summary reflects a positive sentiment regarding this model of public-private partnership, recognising its vital cont…
S22
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Ravindra Gupta:Second, I don’t think that technology at any time failed. Actually, it proved that it was ready. So wheth…
S23
UK and India forge strategic tech alliance — The UK-India Technology Security Initiative (TSI) has made notableprogresssince its launch, reflecting a commitment to s…
S24
Box 4.1: Adapting ITU’s price data collection to ICT developments — 1. The prices of the operator with the largest market share (measured by the number of fixedtelephone subscriptions) are…
S25
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/5/OEWG 2025 — The OAS emphasizes its role as a regional political body in catalyzing policy setting and knowledge sharing in cybersecu…
S26
Diplo/GIP at IGF 2025 — In this session, MPs exchanged current practices in their respective parliaments that aim at contributing to a healthy i…
S27
The impact of regulatory frameworks on the global digital communications industry — Ms Ellie Templeton is a Cyber Security Research Assistant at the Geneva Centre for Security Policy. She has an Internati…
S28
Agentic AI and the new industrial diplomacy — Xiaomi has been publicly promoting its ‘black-light factory’ concept for smartphones and consumer electronics. This refe…
S29
Adoption of agentic AI slowed by data readiness and governance gaps — Agentic AI is emerging as a new stage ofenterprise automation, enabling systems to reason, plan, and act across workflow…
S30
Autonomous AI agents are the next phase of enterprise automation — Organisations across sectors areturning to agentic automation—an emerging class of AI systems designed to think, plan, a…
S31
WS #283 AI Agents: Ensuring Responsible Deployment — ### Introduction and Context Anne McCormick: Thank you, Anne McCormick, EY, Global Head of Public Policy. I’m intereste…
S32
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Industry Perspectives: Systems Integration Challenges ## Introduction and Context Setting ## Sectoral Applications:…
S33
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — The audience member raises technical questions about where AI models for voice and multilingual translation should be ho…
S34
FOREWORD — Furthermore, the AI strategy is in line with the Digital Transformation Strategy for Africa (DTSfA), which …
S35
AI as critical infrastructure for continuity in public services — 501 words | 149 words per minute | Duration: 200 seconds Thank you very much and thank you very much for the invitatio…
S36
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — This discussion focused on Bhashini, India’s National Language Translation Mission, and its efforts to break down langua…
S37
Data localisation, what is it and what are its potential implications? (JAPAN) — Data localization measures, which require data to be stored and processed within a country’s borders, have been implemen…
S38
2016 Special 301 Report — increased or reinstated customs duties in 2016 on a broad range of innovative and IP-intensive goods, including medical …
S39
Building the Workforce_ AI for Viksit Bharat 2047 — From the community health worker delivering nutrition to an expecting mother to the balancing worker strategizing access…
S40
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: One part is that, of course, the way the technology is evolving, there is IP-driven solutions and there …
S41
Advancing Scientific AI with Safety Ethics and Responsibility — And also, very importantly, how we have to also see it from the context of, you know, people doing their own thing, DIY …
S42
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — ### Funding and Incentive Structures Despite typical concerns about private sector data control, there was consensus th…
S43
Comprehensive Report: Preventing Jobless Growth in the Age of AI — And I think one of the things that we can do is get better visibility into how the world is changing, better data and st…
S44
Science as a Growth Engine: Navigating the Funding and Translation Challenge — High level of consensus with strong alignment on fundamental principles despite speakers representing different sectors …
S45
Hardware for Good: Scaling Clean Tech — 3. The importance of partnerships between startups, large companies, and governments for scaling clean technologies.
S46
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — High level of consensus with constructive disagreements mainly on implementation details rather than fundamental princip…
S47
Founders Adda Raw Conversations with India’s Top AI Pioneers — Explanation:While Gupta’s solution processes massive amounts of personal data (faces, number plates), the disagreement e…
S48
Tech group coalition seeks extended compliance timeline for India’s new data protection act — The Asia internet Coalition (AIC), which comprises major tech companies like Meta, Google, Apple, and Microsoft, hascall…
S49
India poised to introduce flexible consent framework and protections for children’s data — India is set tointroducean umbrella framework for consent management under the Digital Personal Data Protection (DPDP) A…
S50
DPDP law takes effect as India tightens AI-era data protections — India has activatednew Digital Personal Data Protection rulesthat sharply restrict how technology firms collect and use …
S51
GOVERNMENT CLOUD POLICY — ## 4.5 Billing and Payment Considerations – i. Consumption-Based Billing : Public bodies will only be charged for the …
S52
The Future of the Internet: Navigating the Transition to an Agentic Web — Business models must evolve from input-based to outcome-based pricing Example of enterprise wanting 10-minute conversat…
S53
Foreword — – Daily, weekly, or monthly installment plans, can significantly improve the affordability of mobile broadband capable d…
S54
https://app.faicon.ai/ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-afternoon-session — What AI can do. Having said that. On the investment side, India is just a wonderful area for us. We were one of the earl…
S55
TradeTech’s Trillion-Dollar Promise — In conclusion, the integration of technology in the trade industry has brought numerous positive changes. From crisis ma…
S56
Cross-Border Data Flows: Harmonizing trust through interoperability mechanisms (DCO) — Creating trust in data privacy across borders requires a combined effort from companies, governments, and the tech commu…
S57
Main Session | Best Practice Forum on Cybersecurity — Building trust and collaborating across sectors is crucial for effective cybersecurity capacity building. This involves …
S58
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — Srinivasan advocates for sovereign, domain-specific SLMs with complete data control within individual systems, while Wil…
S59
Advancing Scientific AI with Safety Ethics and Responsibility — Policy evaluation must expand beyond model-centric assessment to include broader socio-technical factors. This includes …
S60
AI Meets Agriculture Building Food Security and Climate Resilien — Disagreement level:Low to moderate disagreement level with significant implications for AI governance in agriculture. Th…
S61
Open Forum #73 Indigenous Peoples Languages in a Digital Age — Focus on application layer development rather than building models from scratch, utilizing existing open source foundati…
S62
Founders Adda Raw Conversations with India’s Top AI Pioneers — Foundational models face significant scalability challenges when moving from demos to production deployment
S63
How AI Drives Innovation and Economic Growth — Akcigit distinguishes between two layers of AI development in advanced economies. The application layer has low entry ba…
S64
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S65
Founders Adda Raw Conversations with India’s Top AI Pioneers — The session achieved its objective of fostering peer-to-peer learning among founders. Interactive Q&A segments generated…
S66
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/4/OEWG 2025 — Ghana: Mr. Chair, Ghana aligns itself with the statements made by the African Group and recognizes the critical role o…
S67
Founders Adda Raw Conversations with India’s Top AI Pioneers — This was a founder showcase event organized by Rukam Capital where AI startup founders presented their products to an au…
S68
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 74. With a few exceptions, the Inspector did not find enough information about the actual instructional approaches or th…
S69
Autonomous AI agents are the next phase of enterprise automation — Organisations across sectors areturning to agentic automation—an emerging class of AI systems designed to think, plan, a…
S70
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — The second is graph neural networks, which are artificial intelligence systems that make sense of siloed sets of data an…
S71
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Owen Larter from Google DeepMind provided an industry perspective on the technical requirements for robust AI assurance,…
S72
Agentic Intelligence set to automate complex tasks with human oversight — Thomson Reuters hasunveiled a new AI platform, Agentic Intelligence, designed to automate complex workflows for professi…
S73
AI reshapes customer experience, survey finds — A survey of contact centre and customer experience (CX) leadersfindsthat AI has become ‘non-negotiable’ for organisation…
S74
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Mentions specific concern for translators, voice actors, and call center operators whose jobs face complete automation
S75
Scaling Enterprise-Grade Responsible AI Across the Global South — “from a sovereignty perspective, it’s important that we can build our own models where data is not a constraint.”[49]. “…
S76
AI as critical infrastructure for continuity in public services — And when the users, they can hear, okay, I should be more productive. I don’t want to be more productive anymore, right?…
S77
Open Internet Inclusive AI Unlocking Innovation for All — Firstly, hi, everyone. Great to have all of you here. So I think the first thing is, look, Matthew, I don’t know whether…
S78
Harnessing Collective AI for India’s Social and Economic Development — <strong>Moderator:</strong> sci -fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S79
Scaling Innovation Building a Robust AI Startup Ecosystem — EZO5 Solutionswas represented by co-founders Noor Fatima and Meenal Gupta, who described their Imagix AI platform for pr…
S80
AI tool boosts accuracy of cancer treatment predictions — A Slovenian-US biotech company, Genialis, isharnessingAI to revolutionise cancer treatment by tackling a major obstacle:…
S81
Serbian startup revolutionises cancer diagnostics with AI-powered radiotherapy tool — A group of Serbian physicists, programmers, and radiologists, led by Stevan Vrbaški, hasdevelopeda groundbreaking softwa…
S82
Newcomers Orientation Session — The discussion maintains a welcoming, educational tone throughout, with speakers actively encouraging questions and part…
S83
https://app.faicon.ai/ai-impact-summit-2026/shaping-the-future-ai-strategies-for-jobs-and-economic-development — I think it’s set the tone for what we really wanted. I think it’s set the tone for what we really wanted. so everyone th…
S84
Afternoon session — The discussion began with a collaborative and appreciative tone as various stakeholders shared their visions and commitm…
S85
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkab…
S86
Lightning Talk #38 Chat with Itu International Internet Public Policy Issues — The tone was consistently professional, welcoming, and informative throughout. The speakers maintained a collaborative a…
S87
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — The discussion maintained a consistently optimistic and collaborative tone throughout, characterized by mutual respect b…
S88
GermanAsian AI Partnerships Driving Talent Innovation the Future — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers demonstrated mutual resp…
S89
AI Development Beyond Scaling: Panel Discussion Report — The tone began as optimistic and technically focused, with researchers enthusiastically presenting their innovative appr…
S90
Exploring Emerging PE³Ts for Data Governance with Trust | IGF 2023 Open Forum #161 — Additionally, a platform is used for companies to provide feedback and declare their compliance. Interestingly, the syst…
S91
Advancing Scientific AI with Safety Ethics and Responsibility — The discussion maintained a collaborative and constructive tone throughout, characterized by technical expertise and pol…
S92
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Artificial intelligence (AI) is reshaping the corporate governance framework and business processes, revolutionizing soc…
S93
Operationalizing data free flow with trust | IGF 2023 WS #197 — Jameson Olufi from Africa ICT Alliance highlighted the challenge of data access in the US, particularly regarding the Ge…
S94
Launch / Award Event #126 Women in Internet Governance — The tone was consistently positive, collaborative, and encouraging throughout the session. Speakers demonstrated enthusi…
S95
Opening of the session — The tone was generally constructive and collaborative, with delegates emphasizing the need for cooperation and shared co…
S96
Open Forum #60 Cooperating for Digital Resilience and Prosperity — The discussion maintained a consistently collaborative and constructive tone throughout. It was professional yet engagin…
S97
WS #90 Digital Safety: Tackling Disinformation in Future Internet — The tone of the discussion was positive and collaborative, with speakers emphasizing partnerships and joint efforts. The…
S98
Building Sovereign and Responsible AI Beyond Proof of Concepts — The discussion maintained a professional, educational tone throughout, with presenters acting as knowledgeable guides sh…
S99
Summit Opening Session — Summit Opening Session
S100
Building the Workforce_ AI for Viksit Bharat 2047 — Thank you. So, the mic’s there. Two minutes. Then I’ll say the second. No good answers. You got nothing to do. Before I …
S101
Agentic AI in Focus Opportunities Risks and Governance — Louveaux explains that MasterCard has evolved from AI systems that recommend to AI systems that act autonomously. These …
S102
Top 7 AI agents transforming business in 2025 — AI agentsare no longera futuristic concept — they’re now embedded in the everyday operations of major companies across s…
S103
From Innovation to Impact_ Bringing AI to the Public — India has to build a foundation model. This is no compromise statement. Not because that we can make a better financial …
S104
Keynotes — O’Flaherty acknowledges that the regulatory work is not finished and that current regulatory models will likely be insuf…
S105
Keynote-Demis Hassabis — Despite his optimism about AI’s potential, Hassabis emphasises the need for humility and careful consideration in approa…
S106
Artificial Intelligence &amp; Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S107
Contents — – ‘One thing that has gone well has been the coordination between the science funding from the EPSRC, the slightly highe…
S108
https://dig.watch/event/india-ai-impact-summit-2026/smaller-footprint-bigger-impact-building-sustainable-ai-for-the-future — I also would like to acknowledge the co -chairs of the Working Group on Resilience, Innovation, and Efficiency, the Mini…
S109
Improving the practice of cyber diplomacy: — – (a) We collated an initial set of mapping data through an in-house research focus group, complemented by desk research…
S110
Committee on Payment and Settlement Systems — – channels for data transfer to banks called GSM banking (a banking application is installed on the phone’s SIM card); -…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
R
Ravindra Kumar
5 arguments | 161 words per minute | 1033 words | 382 seconds
Argument 1
Automation of industrial robotics using agentic AI
EXPLANATION
Ravindra explains that existing automation equipment is sophisticated but under‑utilised, and Technodate aims to make automation as easy as DIY by leveraging agentic AI. The solution helps users conceptualize, deploy, and troubleshoot robotics and automation systems.
EVIDENCE
He describes the state of industrial automation, noting that a world-leading robot manufacturer achieved 100% automation yet capacity remains under-used, and that many manufacturers cannot fully exploit automation ([21-28]). He then outlines Technodate’s three-module approach (conceptualisation, deployment/commissioning, and troubleshooting), which uses agentic AI to act as an automation expert ([28-32]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External reports describe Technodate’s agentic AI platform that “automates automation itself” and aims to democratize industrial robotics, matching Ravindra’s claim [S2][S4].
MAJOR DISCUSSION POINT
Automation of industrial robotics using agentic AI
Argument 2
Debate over building a foundational model vs. focusing on application layer
EXPLANATION
Ravindra argues that building a large foundational model is difficult in India due to funding constraints, so the focus should be on customer engagement and application‑level solutions. He suggests experimenting with existing models before deciding whether a foundational model is needed.
EVIDENCE
He notes the challenge of raising money for a foundational model in India and proposes talking to customers and experimenting with available options before committing to a foundational model ([33-40]). Later he reiterates that the problem is solved at the application layer: even if a super-intelligent model were built, the application would still need to be developed ([55-60]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion on the necessity and scalability challenges of foundational models for complex automation tasks is reflected in external analysis of foundational model requirements and limitations [S4].
MAJOR DISCUSSION POINT
Debate over building a foundational model vs. focusing on application layer
AGREED WITH
Vaibhavath Shukla, Audience
DISAGREED WITH
Audience
Argument 3
DIY, agentic AI approach for industrial automation
EXPLANATION
Ravindra positions Technodate’s product as a do‑it‑yourself, agentic AI platform that enables users to build automation solutions without deep technical expertise. The approach emphasizes ease of use and modularity.
EVIDENCE
He states that Technodate wants to make automation as easy as DIY using agentic AI and describes the three functional modules that support users from concept to deployment and troubleshooting ([28-32]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The DIY, agentic AI vision for users to build automation solutions without deep expertise is corroborated by external descriptions of Technodate’s “automating automation itself” platform [S2][S4].
MAJOR DISCUSSION POINT
DIY, agentic AI approach for industrial automation
AGREED WITH
Vaibhavath Shukla, Vivek Gupta
Argument 4
Strategic collaborations with academic experts and defence organisations to validate and accelerate the technology
EXPLANATION
Ravindra highlights partnerships with leading AI researchers and a forthcoming deployment with the Indian Air Force, signalling a strategy to leverage expertise and secure high‑profile defence contracts.
EVIDENCE
He mentions working with Dr. Sumit Chopra, a Ph.D. under the “godfather of AI,” and states that they are “going to deploy a use case very soon with Indian Air Force itself” ([45-48]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External material notes an upcoming deployment with the Indian Air Force, illustrating a defence partnership that supports Ravindra’s claim of strategic collaborations [S4].
MAJOR DISCUSSION POINT
Collaboration with academic and defence partners for validation and market entry
AGREED WITH
Vaibhavath Shukla, Pradyum Gupta
Argument 5
Early traction with Fortune 500 enterprises demonstrates market interest
EXPLANATION
Ravindra notes that Technodate has already begun deployments with Fortune 500 customers, indicating that large corporations are adopting their agentic AI automation platform.
EVIDENCE
He says “we have already started deploying application, including with people like Fortune 500 companies” ([41]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence of deployments with Fortune 500 companies is documented in external sources, confirming the early enterprise adoption mentioned by Ravindra [S4].
MAJOR DISCUSSION POINT
Early enterprise adoption and market traction
V
Vaibhavath Shukla
6 arguments | 163 words per minute | 1130 words | 414 seconds
Argument 1
End‑to‑end voice‑driven call‑center automation
EXPLANATION
Vaibhavath presents Quonsys AI as a voice‑infrastructure that can fully automate call‑center operations, handling listening, understanding, acting, and responding without human intervention. The platform integrates with existing enterprise systems and supports multiple use cases.
EVIDENCE
He explains that Quonsys builds a voice infrastructure that removes humans from the loop, allowing call-centers to run end-to-end automatically, and cites collaborations with Paytm, CRED, and PropBotX, as well as a partnership with OpenAI for Indic language capabilities ([68-82]). In the Q&A he demonstrates how the system can answer inbound calls, process leads, and trigger follow-up actions such as SMS or booking site visits ([95-106]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Quonsys AI’s voice infrastructure that removes humans from the loop and automates call-center operations is described in external reports, supporting the argument [S2].
MAJOR DISCUSSION POINT
End‑to‑end voice‑driven call‑center automation
AGREED WITH
Ravindra Kumar, Vivek Gupta
Argument 2
Creation of a proprietary data engine to fine‑tune and scale voice models
EXPLANATION
Vaibhavath describes building an in‑house data engine to generate and curate large‑scale training data, enabling fine‑tuning of voice models beyond public datasets. This engine underpins the scalability of Quonsys AI’s solutions.
EVIDENCE
He states that after initial attempts with public datasets failed, they built their own data engine to generate data at scale, which is now used to train models for enterprise customers such as Paytm ([109-115]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The development of an in-house data engine for large-scale voice model training after public datasets proved insufficient is detailed in external sources [S2].
MAJOR DISCUSSION POINT
Creation of a proprietary data engine to fine‑tune and scale voice models
Argument 3
Per‑minute subscription pricing for automated call‑center agents
EXPLANATION
The pricing model is usage‑based, charging customers per minute of AI‑driven call handling, allowing flexible cost control for businesses.
EVIDENCE
During the Q&A he explains that the service is billed per minute, with a one-time setup and then charges based on the number of minutes consumed ([94-99]).
MAJOR DISCUSSION POINT
Per‑minute subscription pricing for automated call‑center agents
AGREED WITH
Pradyum Gupta, Vivek Gupta
Argument 4
Scaling challenges for small vs. large enterprises; focus on high‑volume customers
EXPLANATION
Vaibhavath acknowledges that their solution is currently tailored to large enterprises with high call volumes, while smaller firms are not the primary focus due to pricing and scalability considerations.
EVIDENCE
He notes that the current market focus is on large customers such as SBI Insurance and large BPOs, and that pricing will drop in the future, but small firms with five to ten employees are not being targeted now ([111-115]).
MAJOR DISCUSSION POINT
Scaling challenges for small vs. large enterprises; focus on high‑volume customers
AGREED WITH
Ravindra Kumar, Audience
DISAGREED WITH
Audience
Argument 5
Partnership with OpenAI to provide Indic‑language voice capabilities
EXPLANATION
Quonsys AI works together with OpenAI to build voice and Indic‑language infrastructure, combining OpenAI’s models with the company’s proprietary data engine.
EVIDENCE
He states “We are also partnered with OpenAI for the infrastructure. We are working with them on voice and Indic languages infrastructure which we have developed by our own digital data engine” ([74-75]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A partnership with OpenAI for Indic-language voice infrastructure is explicitly mentioned in external material, confirming the claim [S2].
MAJOR DISCUSSION POINT
Strategic partnership for language capabilities
AGREED WITH
Vivek Gupta, Pradyum Gupta
Argument 6
Recognition by the Indian government through an award from Prime Minister Modi
EXPLANATION
The founder mentions receiving a national award, which provides governmental endorsement and raises the profile of the solution.
EVIDENCE
He says “We won an award from Prime Minister Modi as well. It was right here in the Bhatman room last year” ([115-116]).
MAJOR DISCUSSION POINT
Government recognition and legitimacy
Pradyum Gupta
8 arguments · 190 words per minute · 2346 words · 739 seconds
Argument 1
Real‑time city‑wide mapping and video analytics from dash‑cams
EXPLANATION
Pradyum outlines a platform that collects visual data from dash‑cams, CCTVs, and other cameras across metro cities, processes petabytes of video, and updates maps instantly with searchable, categorized imagery.
EVIDENCE
He describes deploying simple dash-cams and CCTVs on vehicles, gathering around 100 petabytes of data, categorising it, and providing real-time searchable map updates for use cases such as billboard pricing, autonomous vehicle perception, and public-transport optimisation ([135-142]).
MAJOR DISCUSSION POINT
Real‑time city‑wide mapping and video analytics from dash‑cams
Argument 2
Tile‑based, per‑day pricing for mapping data services
EXPLANATION
The service is sold on a per‑tile basis, where each 25 km² tile is priced for a single day of access, with volume discounts for larger contracts.
EVIDENCE
He states that a tile of 25 km² costs 1.5 lakh rupees for one day, and pricing scales with volume ([267-271]).
MAJOR DISCUSSION POINT
Tile‑based, per‑day pricing for mapping data services
AGREED WITH
Vaibhavath Shukla, Vivek Gupta
Argument 3
B2B focus on cost‑saving for transport operators and logistics firms
EXPLANATION
Pradyum emphasizes that their platform helps transport operators like DTC reduce revenue loss by optimising bus capacity and routing, delivering significant cost savings.
EVIDENCE
He recounts reducing DTC’s annual revenue loss from 800 crore rupees to a much lower figure by improving demand-capacity matching and providing data-driven insights, citing specific numbers and a conversation with senior officials ([248-259]).
MAJOR DISCUSSION POINT
B2B focus on cost‑saving for transport operators and logistics firms
Argument 4
DPDP compliance: face/plate blurring, bare‑metal European servers
EXPLANATION
Pradyum explains that the company complies with India’s DPDP law by blurring personally identifiable information in video feeds and by storing data on bare‑metal servers located in Europe, avoiding public hyperscalers.
EVIDENCE
He notes that faces and number plates are blurred, videos are not released publicly, and all data is kept on bare-metal servers in Europe (Hetzner) rather than on AWS or other hyperscalers ([208-214]).
MAJOR DISCUSSION POINT
DPDP compliance: face/plate blurring, bare‑metal European servers
AGREED WITH
Vaibhavath Shukla, Vivek Gupta
DISAGREED WITH
Audience
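The anonymisation step described above can be sketched in a few lines; the session did not detail the implementation, so the box-blur approach, the list-of-rows grayscale image format, and the pre-computed bounding boxes below are assumptions for illustration only:

```python
def blur_regions(image, boxes, k=3):
    """Box-blur each detected region (e.g. a face or a number plate)
    in a grayscale image given as a list of rows of pixel values.
    `boxes` holds (top, left, bottom, right) rectangles, ends exclusive.
    A real pipeline would get these boxes from a detector."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # leave the source frame untouched
    r = k // 2
    for top, left, bottom, right in boxes:
        for y in range(top, bottom):
            for x in range(left, right):
                # average the k x k neighbourhood, clipped at the edges
                vals = [image[yy][xx]
                        for yy in range(max(0, y - r), min(h, y + r + 1))
                        for xx in range(max(0, x - r), min(w, x + r + 1))]
                out[y][x] = sum(vals) // len(vals)
    return out
```

In production this would run on every frame before storage, with a face/plate detector supplying the boxes and an image library or GPU kernel doing the filtering at petabyte scale.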
Argument 5
Managing petabyte‑scale video data while ensuring privacy and compliance
EXPLANATION
The platform handles massive volumes of visual data while applying privacy safeguards such as blurring and limiting data exposure, aligning with regulatory requirements.
EVIDENCE
He mentions processing around 100 petabytes of video data from thousands of cameras ([140-142]) and combines this with the privacy measures described in the DPDP compliance answer ([208-214]).
MAJOR DISCUSSION POINT
Managing petabyte‑scale video data while ensuring privacy and compliance
Argument 6
Lack of incentives for dash‑cam owners; data contribution driven by B2B contracts
EXPLANATION
Pradyum clarifies that dash‑cam owners are not paid incentives; instead, data providers (e.g., transport companies) pay the platform for the analytics they receive.
EVIDENCE
During Q&A he states, “We don’t pay incentives, they pay us” when asked about incentives for dash-cam holders ([244-246]).
MAJOR DISCUSSION POINT
Lack of incentives for dash‑cam owners; data contribution driven by B2B contracts
DISAGREED WITH
Audience
Argument 7
Large‑language‑model layer enables natural‑language search over massive video archives
EXPLANATION
Pradyum describes adding an LLM on top of the visual data so users can query the system in plain language (e.g., to find helmets or street lights), turning raw footage into searchable knowledge.
EVIDENCE
He explains “we added that’s where the LLM thing came in that we started to describe every image and internally we were searching everything for them… you can prompt ‘find me all the CCTVs…’” ([193-195]).
MAJOR DISCUSSION POINT
LLM‑driven semantic search over visual data
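The "describe every image, then search the descriptions" idea can be illustrated with a toy inverted index; the captions below are invented, and in the real platform they would be generated by a vision-language model, with retrieval more likely done over embeddings than exact word matches:

```python
def build_index(captions):
    """Map each word to the set of frame ids whose caption mentions it.
    `captions` is {frame_id: description}."""
    index = {}
    for fid, text in captions.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(fid)
    return index

def search(index, query):
    """Return frame ids whose captions contain every query word."""
    hits = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*hits) if hits else set()

frames = {
    101: "CCTV camera mounted on a pole near the crossing",
    102: "broken street light on the road divider",
    103: "rider without a helmet passing a CCTV camera",
}
index = build_index(frames)
# search(index, "cctv camera") -> {101, 103}
```

The same structure scales to prompts like "find me all the CCTVs" once the per-frame descriptions exist, which is what the LLM layer provides.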
Argument 8
Funding and scaling support from the international development agency JICA
EXPLANATION
The platform’s expansion across Delhi was enabled by financing from JICA, illustrating how development assistance can accelerate smart‑city infrastructure.
EVIDENCE
He notes “this was funded by JICA… that’s how we scaled in entire New Delhi” ([198-199]).
MAJOR DISCUSSION POINT
International development financing for smart‑city infrastructure
AGREED WITH
Ravindra Kumar, Vaibhavath Shukla
Meenal Gupta
6 arguments · 154 words per minute · 1230 words · 478 seconds
Argument 1
AI‑assisted cancer imaging and treatment planning
EXPLANATION
Meenal presents Imagix AI, an AI‑driven platform that automates cancer imaging segmentation and treatment planning, reducing manual contouring time from up to 90 minutes to as low as 5‑15 minutes with high accuracy.
EVIDENCE
She lists certifications (HIPAA, ISO 13485, CDSCO), patents, and describes the workflow from CT/MRI upload to AI-generated organ segmentation and treatment plans, noting 92-99% accuracy and deployment in 14 Indian states with over a million scans processed ([288-345]).
MAJOR DISCUSSION POINT
AI‑assisted cancer imaging and treatment planning
Argument 2
HIPAA, ISO 13485, CDSCO certification and human‑in‑the‑loop validation
EXPLANATION
Meenal emphasizes that the solution complies with major health‑tech standards and that final decisions remain with radiologists, ensuring safety and regulatory compliance.
EVIDENCE
She cites HIPAA compliance, ISO 13485 certification, and CDSCO licensing, and explains that while AI assists, radiologists retain final approval ([290-294], [348-353]).
MAJOR DISCUSSION POINT
HIPAA, ISO 13485, CDSCO certification and human‑in‑the‑loop validation
Argument 3
Training on 5 million medical images, on‑premise deployment for remote regions
EXPLANATION
The platform was trained on a large, diverse dataset that includes 5 million images, with 30 % sourced from remote Indian regions, and is deployed on‑premise to overcome connectivity challenges.
EVIDENCE
She notes training on 5 million images, 30 % Indian data collected from the northeast with support from Niti Aayog, and the need for on-premise solutions due to limited 4G coverage ([335-345]).
MAJOR DISCUSSION POINT
Training on 5 million medical images, on‑premise deployment for remote regions
Argument 4
Building trust in medical AI through radiologist oversight and proven accuracy
EXPLANATION
Meenal states that the AI does not replace clinicians; instead, it assists them, and the system’s high accuracy (92‑99 %) and regulatory certifications help build trust among users.
EVIDENCE
She explains that the AI acts as an assistant, with radiologists providing final approval, and highlights the achieved accuracy rates and extensive clinical deployments ([348-353]).
MAJOR DISCUSSION POINT
Building trust in medical AI through radiologist oversight and proven accuracy
Argument 5
Large‑scale public‑health impact through TB detection and early cancer identification
EXPLANATION
Meenal reports that the platform has processed over a million chest X‑rays, flagged thousands of TB cases and identified several lung‑cancer cases, demonstrating tangible health outcomes beyond oncology.
EVIDENCE
She states “we have helped to scan… around 1 million of scans… detected around 4 000 TB positive cases… there were around six lung cancer cases…” ([340-345]).
MAJOR DISCUSSION POINT
Public‑health outcomes of AI‑driven imaging
Argument 6
Invitation to showcase the solution to Bill Gates at Microsoft, indicating global tech interest
EXPLANATION
The founder mentions being invited by Bill Gates to demonstrate the platform, highlighting international recognition and potential for broader adoption.
EVIDENCE
She says “we were invited in Microsoft by Bill Gates to show our solution to him” ([335]).
MAJOR DISCUSSION POINT
International tech leadership interest
Vivek Gupta
6 arguments · 193 words per minute · 1679 words · 519 seconds
Argument 1
Indian‑language voice architecture platform
EXPLANATION
Vivek describes Indus Labs AI’s platform as a full‑stack voice operating system for Indian languages, offering speech‑to‑text, text‑to‑speech, LLMs, and voice‑to‑voice capabilities that can be customized by developers and enterprises.
EVIDENCE
He outlines the platform’s components (STT, TTS, LLM, speech-to-speech), its DIY nature, focus on Indian dialects, low latency (300-400 ms), and integration with CRM and telephony providers like Airtel and Jio ([360-433]).
MAJOR DISCUSSION POINT
Indian‑language voice architecture platform
AGREED WITH
Vaibhavath Shukla, Pradyum Gupta
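One caller turn through such a stack can be sketched with stub stages; the function names, dialect tag, and audio dictionaries below are hypothetical placeholders, since the session described the STT, LLM, and TTS components but not their APIs:

```python
# Hypothetical stage stubs standing in for the platform's real services.
def transcribe(audio):
    """Speech-to-text: turn caller audio into a transcript."""
    return audio["text"]

def respond(transcript, dialect):
    """LLM stage: draft a dialect-aware reply to the transcript."""
    return f"[{dialect}] reply to: {transcript}"

def synthesize(reply):
    """Text-to-speech: render the reply back to audio."""
    return {"audio_for": reply}

def voice_turn(audio, dialect="hindi-haryanvi"):
    """One caller turn through the STT -> LLM -> TTS chain."""
    return synthesize(respond(transcribe(audio), dialect))
```

Running all three stages on one stack avoids a network hop between vendors at each step, which is one way the quoted 300-400 ms end-to-end latency becomes plausible.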
Argument 2
Developing low‑latency, dialect‑specific Indian language models with sovereign data residency
EXPLANATION
The solution achieves sub‑500 ms latency, supports numerous Indian dialects, and stores all data within Indian sovereign cloud infrastructure to ensure data residency and security.
EVIDENCE
He mentions latency of 300-400 ms, dialect-specific accuracy, and that as an Indian company the data resides on sovereign servers, avoiding foreign hyperscalers ([364-371], [389-393]).
MAJOR DISCUSSION POINT
Developing low‑latency, dialect‑specific Indian language models with sovereign data residency
Argument 3
Indian data‑residency guarantees and sovereign cloud deployment
EXPLANATION
Vivek emphasizes that all data processed by the platform remains within India, complying with data‑sovereignty requirements and enhancing trust for domestic customers.
EVIDENCE
He states that because Indus Labs is an Indian company, the data stays in India, providing sovereign data residency guarantees ([389-393]).
MAJOR DISCUSSION POINT
Indian data‑residency guarantees and sovereign cloud deployment
Argument 4
Need for dialect‑specific voice AI to achieve high adoption across India’s linguistic diversity
EXPLANATION
He argues that India’s linguistic landscape, with dialect changes every 20 km, necessitates a voice AI that can handle regional variations to be widely adopted.
EVIDENCE
He points out that after 20 km dialects change, and that global providers offer generic Hindi, whereas Indus Labs builds dialect-specific models to meet this need ([364-369]).
MAJOR DISCUSSION POINT
Need for dialect‑specific voice AI to achieve high adoption across India’s linguistic diversity
Argument 5
Emotion‑aware voice AI that detects caller sentiment and adapts responses
EXPLANATION
Vivek describes a component that analyses call sentiment, enabling the AI agent to recognise emotions such as happiness or anger and respond appropriately.
EVIDENCE
He notes “the important part is emotional handling… you need to understand the emotions… we can recognise if somebody is laughing… happy or angry” ([388-391]).
MAJOR DISCUSSION POINT
Emotion‑aware voice AI
Argument 6
White‑label partnership model for global expansion
EXPLANATION
Indus Labs AI offers its platform under a white‑label arrangement to partners in Dubai, Germany and other regions, allowing them to resell the technology as their own and share revenue.
EVIDENCE
He says “we are white-labeling our platform… we have partners in Dubai, Germany… they can onboard their clients on their platform and share revenue” ([428-433]).
MAJOR DISCUSSION POINT
International white‑label partnership strategy
Archana Jahargirdar
2 arguments · 69 words per minute · 569 words · 488 seconds
Argument 1
Moderator’s rule to keep discussion product‑focused, avoiding business/pitch content
EXPLANATION
Archana sets the session’s format, stating that founders should present only product details without business, funding, or pitching elements, ensuring a technical focus for the audience.
EVIDENCE
She instructs presenters to talk only about the product, avoiding business or funding discussions, and emphasizes that the audience should understand jargon but also simplify for non-AI natives ([2-8]).
MAJOR DISCUSSION POINT
Moderator’s rule to keep discussion product‑focused, avoiding business/pitch content
Argument 2
Balancing technical jargon with simplification to ensure inclusive learning for non‑AI participants
EXPLANATION
Archana instructs presenters to use industry terminology but also to simplify explanations when needed, aiming to make the summit educational for attendees without AI backgrounds.
EVIDENCE
She says “please use jargon… however also be mindful… if you want more people to learn who may not be AI natives… So if you can simplify it, it’s fine” ([2-4]).
MAJOR DISCUSSION POINT
Inclusive knowledge sharing at tech events
Audience
4 arguments · 161 words per minute · 492 words · 183 seconds
Argument 1
Concern about DPDP compliance and privacy when handling petabyte‑scale video data
EXPLANATION
Audience members repeatedly ask how the platform ensures compliance with India’s Digital Personal Data Protection (DPDP) Act, especially regarding personal identifiers in massive visual datasets.
EVIDENCE
The audience asks “How are you handling that? How are you DPDP compliant?” and repeats the question about personal data, faces, number plates ([201-206], [220-224]).
MAJOR DISCUSSION POINT
Data protection and privacy compliance
DISAGREED WITH
Pradyum Gupta
Argument 2
Questioning the lack of incentives for dash‑cam owners to contribute data
EXPLANATION
The audience seeks clarification on why the platform does not pay dash‑cam holders, indicating a need for incentive mechanisms to sustain data collection.
EVIDENCE
Audience asks “what are the incentives you are giving to the dashcam holders?” and the founder replies “We don’t pay incentives, they pay us” ([244-246]).
MAJOR DISCUSSION POINT
Incentive structures for data contributors
Argument 3
Skepticism about scalability and reliability of foundational AI models
EXPLANATION
Audience members express doubts about how the startups will ensure robustness when scaling foundational models, citing a known failure (Sarvam) as an example.
EVIDENCE
Audience says “how are you scaling… we have even seen Sarvam breaking” and follows up with “how are you thinking of combating that scenario?” ([103-108]).
MAJOR DISCUSSION POINT
Scalability and reliability of AI models
Argument 4
Trust concerns in health‑AI deployments
EXPLANATION
An audience member points out that trust is crucial for AI in health and asks how the solution gains user confidence.
EVIDENCE
Audience asks “how do you make sure that the technology and the science behind it is trusted?” ([346-347]).
MAJOR DISCUSSION POINT
Building trust in medical AI
Agreements
Agreement Points
Provision of DIY/no‑code platforms that let users build AI‑driven automation without deep technical expertise
Speakers: Ravindra Kumar, Vaibhavath Shukla, Vivek Gupta
DIY, agentic AI approach for industrial automation End‑to‑end voice‑driven call‑center automation Indian‑language voice architecture platform
All three founders stress that their solutions are offered as DIY or no-code platforms: Technodate makes automation as easy as DIY using agentic AI ([28-32]), Quonsys AI provides a voice infrastructure that removes humans from the loop ([68-71]), and Indus Labs AI delivers a no-code voice-agent builder where users define journeys and nodes ([435-442]).
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with concerns about DIY scientific AI and the need for oversight, as discussed in the Advancing Scientific AI with Safety Ethics and Responsibility forum which highlighted DIY-type activities and limited regulatory coverage [S41], and with calls to promote open-source APIs to broaden developer access [S40].
Strategic collaborations with large enterprises, government or research organisations to validate and scale technology
Speakers: Ravindra Kumar, Vaibhavath Shukla, Pradyum Gupta
Strategic collaborations with academic experts and defence organisations to validate and accelerate the technology Partnership with OpenAI to provide Indic‑language voice capabilities Funding and scaling support from the international development agency JICA
Ravindra cites partnerships with Dr Sumit Chopra and a forthcoming Indian Air Force deployment ([45-48]), Vaibhavath mentions a partnership with OpenAI for Indic-language voice infrastructure ([74-75]), and Pradyum notes that JICA funded their city-wide mapping rollout in Delhi ([198-199]).
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of public-private-research partnerships for scaling AI and data collection is reflected in the Funding and Incentive Structures discussion emphasizing private-sector collaborations [S42] and in the clean-tech scaling report that stresses startup-large-company-government alliances [S45].
Strong emphasis on data‑privacy and DPDP compliance when handling large‑scale visual or personal data
Speakers: Pradyum Gupta, Audience
DPDP compliance: face/plate blurring, bare‑metal European servers
Pradyum explains that faces and number plates are blurred and that all video data is stored on bare-metal servers in Europe to meet DPDP requirements ([208-214]), while audience members repeatedly ask how the platform ensures DPDP compliance ([201-206][220-224]).
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on DPDP compliance and privacy-preserving techniques were central in the Founders Adda conversation where differing approaches to blurring and European data centres were contrasted with comprehensive DPDP frameworks [S47], and in the coalition’s request for an extended compliance timeline under India’s DPDP Act [S48] and the law’s activation restricting data use [S50].
Adoption of usage‑based pricing models (per‑minute or per‑tile) to align costs with consumption
Speakers: Vaibhavath Shukla, Pradyum Gupta, Vivek Gupta
Per‑minute subscription pricing for automated call‑center agents Tile‑based, per‑day pricing for mapping data services
Quonsys AI charges customers per minute of AI-driven call handling ([94-99]), Papri Labs sells map tiles for a day at a fixed price ([267-271]), and Indus Labs AI highlights a per-minute cost that is 70 % lower than global alternatives ([380-382]).
POLICY CONTEXT (KNOWLEDGE BASE)
Government cloud policy mandates consumption-based billing for public services [S51], and broader industry trends toward outcome-based or usage-based pricing are outlined in the Future of the Internet discussion [S52].
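Both schemes reduce to simple rate functions. The tile rate below uses the figure cited in the session (1.5 lakh INR per 25 km² tile per day); the per-minute call rate was not disclosed, so it is left as a parameter, and the 3 INR/minute in the example is purely illustrative:

```python
LAKH = 100_000  # Indian numbering: 1 lakh = 100,000

def tile_cost(tiles, days, rate_per_tile_day=150_000):
    """Map-data pricing: a fixed rate per 25 km² tile per day
    (1.5 lakh INR cited in the session; volume discounts ignored)."""
    return tiles * days * rate_per_tile_day

def call_cost(minutes, rate_per_minute):
    """Call-centre pricing: setup fee aside, the bill scales with
    minutes of AI call handling actually consumed."""
    return minutes * rate_per_minute

map_bill = tile_cost(3, 7)        # 3 tiles for a week: 31.5 lakh INR
call_bill = call_cost(10_000, 3)  # 10,000 minutes at 3 INR: 30,000 INR
```

In both cases the customer's bill tracks consumption directly, which is what aligns these models with the government's consumption-based billing mandate noted above.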
Localization for the Indian market through language‑specific models, data residency and on‑premise deployment
Speakers: Vaibhavath Shukla, Vivek Gupta, Pradyum Gupta
Partnership with OpenAI to provide Indic‑language voice capabilities Indian‑language voice architecture platform DPDP compliance: face/plate blurring, bare‑metal European servers
Vaibhavath’s platform supports Indic languages via an OpenAI partnership ([74-75]), Indus Labs builds dialect-specific Indian language models and guarantees sovereign data residency ([364-371][389-393]), and Pradyum ensures personal data is anonymised and stored outside India to meet privacy rules ([208-214]).
POLICY CONTEXT (KNOWLEDGE BASE)
Data localisation measures requiring storage and processing within national borders are examined in the Japan-focused analysis of localisation implications [S37], while India’s DPDP consent framework emphasizes residency and language-specific controls for Indian users [S49].
Recognition that building large foundational models is difficult and that scaling AI solutions presents challenges, leading to a focus on application‑layer development
Speakers: Ravindra Kumar, Vaibhavath Shukla, Audience
Debate over building a foundational model vs. focusing on application layer Scaling challenges for small vs. large enterprises; focus on high‑volume customers
Ravindra argues that funding a foundational model in India is hard and suggests focusing on applications ([33-40][55-60]), Vaibhavath notes that their solution currently targets large enterprises and that scaling for smaller firms is not a priority ([111-115]), and audience members voice skepticism about scalability, citing the Sarvam failure ([103-108]).
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple forums note the shift toward application-layer solutions due to foundational model scalability limits, including the Indigenous Peoples Languages session advocating application-layer development [S61], the Founders Adda observation on foundational model deployment challenges [S62], and academic analysis distinguishing the two AI layers [S63], as well as the Global AI Policy Framework highlighting barriers to scaling foundational models [S64].
Similar Viewpoints
All three founders present their products as DIY/no‑code platforms that enable non‑experts to create AI‑driven automation solutions ([28-32][68-71][435-442]).
Speakers: Ravindra Kumar, Vaibhavath Shukla, Vivek Gupta
DIY, agentic AI approach for industrial automation End‑to‑end voice‑driven call‑center automation Indian‑language voice architecture platform
Each founder highlights partnerships with major organisations (academic, defence, OpenAI, JICA) as a way to validate and accelerate their offerings ([45-48][74-75][198-199]).
Speakers: Ravindra Kumar, Vaibhavath Shukla, Pradyum Gupta
Strategic collaborations with academic experts and defence organisations to validate and accelerate the technology Partnership with OpenAI to provide Indic‑language voice capabilities Funding and scaling support from the international development agency JICA
Both the presenter and the audience stress the necessity of strict DPDP compliance when handling petabyte‑scale visual data ([208-214][201-206][220-224]).
Speakers: Pradyum Gupta, Audience
DPDP compliance: face/plate blurring, bare‑metal European servers
Both companies adopt usage‑based pricing models that charge customers according to actual consumption ([94-99][267-271]).
Speakers: Vaibhavath Shukla, Pradyum Gupta
Per‑minute subscription pricing for automated call‑center agents Tile‑based, per‑day pricing for mapping data services
Both emphasize building language‑specific solutions for India’s diverse linguistic landscape ([74-75][364-369]).
Speakers: Vaibhavath Shukla, Vivek Gupta
Partnership with OpenAI to provide Indic‑language voice capabilities Indian‑language voice architecture platform
Both acknowledge the difficulty of scaling foundational AI models and therefore prioritize application‑level development and targeting large‑volume customers ([33-40][55-60][111-115]).
Speakers: Ravindra Kumar, Vaibhavath Shukla
Debate over building a foundational model vs. focusing on application layer Scaling challenges for small vs. large enterprises; focus on high‑volume customers
Unexpected Consensus
Regulatory compliance as a trust‑building measure across different sectors
Speakers: Meenal Gupta, Pradyum Gupta
HIPAA, ISO 13485, SEDESCO certification and human‑in‑the‑loop validation DPDP compliance: face/plate blurring, bare‑metal European servers
Although operating in distinct domains (health-tech vs. smart-city mapping), both founders stress adherence to stringent regulatory frameworks (HIPAA/ISO for medical AI and DPDP for visual data) as essential for building user trust ([290-294][348-353][208-214]).
POLICY CONTEXT (KNOWLEDGE BASE)
Privacy compliance as a trust mechanism is highlighted in the Founders Adda debate on DPDP compliance [S47] and in cybersecurity capacity-building best practices that stress cross-sector collaboration and trust [S57].
Overall Assessment

The founders largely converge on delivering DIY, locally‑tailored AI platforms that respect privacy, use usage‑based pricing, and rely on strategic partnerships to overcome scaling and funding challenges.

High consensus across technical, business and policy dimensions, indicating a shared understanding that accessible, compliant, and partnership‑driven AI solutions are key to advancing India’s digital ecosystem.

Differences
Different Viewpoints
Whether to invest in building a large foundational AI model versus focusing on application‑layer solutions
Speakers: Ravindra Kumar, Audience
Debate over building a foundational model vs. focusing on application layer Skepticism about scalability and reliability of foundational models (Sarvam example)
Ravindra argues that building a foundational model in India is financially difficult and suggests experimenting with existing models and concentrating on application-level solutions ([33-40][55-60]). The audience counters by questioning how the startups will scale and remain reliable, citing the failure of the Sarvam model as a cautionary example ([103-108]).
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between building sovereign foundational models and focusing on application-layer tools is captured in discussions on application-layer priority [S61], scalability challenges of foundational models [S62], and the two-layer AI development framework [S63].
Target market focus and scalability of AI solutions for enterprises of different sizes
Speakers: Vaibhavath Shukla, Audience
Scaling challenges for small vs. large enterprises; focus on high‑volume customers Skepticism about scaling and reliability of foundational AI models
Vaibhavath acknowledges that Quonsys AI currently targets large, high-volume customers and that pricing and scaling for small firms are not a priority ([111-115]). The audience raises concerns about how the solution will scale and remain reliable, referencing the Sarvam breakdown ([103-108]).
POLICY CONTEXT (KNOWLEDGE BASE)
Scaling AI for varied enterprise sizes is addressed in the clean-tech partnership report emphasizing multi-scale collaborations [S45] and in the outcome-based pricing analysis that considers enterprise-specific consumption patterns [S52].
Incentive structure for dash‑cam owners contributing visual data
Speakers: Pradyum Gupta, Audience
Lack of incentives for dash‑cam owners; data contribution driven by B2B contracts Question about incentives offered to dash‑cam holders
When asked about incentives for dash-cam holders, the audience expects some reward ([244-245]), but Pradyum clarifies that the platform does not pay incentives; instead, data providers pay the company ([246]).
POLICY CONTEXT (KNOWLEDGE BASE)
Funding and incentive structures for data contribution, including private-sector partnerships for comprehensive data collection, were examined in the Funding and Incentive Structures forum [S42] and in the broader discussion on aligning incentives for AI deployment [S43].
Compliance with India’s DPDP privacy law for massive video data collection
Speakers: Pradyum Gupta, Audience
DPDP compliance: face/plate blurring, bare‑metal European servers Concern about DPDP compliance and privacy when handling petabyte‑scale video data
The audience repeatedly asks how the company ensures DPDP compliance given the personal data in video feeds ([201-206][220-224]). Pradyum responds that faces and number plates are blurred, videos are not released publicly, and all data is stored on bare-metal servers in Europe, avoiding public hyperscalers ([208-214][225-231]).
POLICY CONTEXT (KNOWLEDGE BASE)
Specific concerns about DPDP compliance for large-scale video collection were raised in the Founders Adda privacy debate [S47], the coalition’s request for timeline extensions [S48], and the DPDP law’s strict data-use limitations [S50].
Unexpected Differences
Different philosophies on model development versus data‑centric engineering
Speakers: Ravindra Kumar, Vaibhavath Shukla
Debate over building a foundational model vs. focusing on application layer Creation of a proprietary data engine to fine‑tune and scale voice models
Ravindra emphasizes the difficulty of building a foundational model and suggests leveraging existing models while concentrating on application development ([55-60]). In contrast, Vaibhavath describes building an in-house data engine to generate large-scale training data after public datasets proved insufficient, indicating a data-centric approach to model improvement ([109-115]). This contrast in strategy was not anticipated given the shared focus on AI-driven automation.
POLICY CONTEXT (KNOWLEDGE BASE)
Contrasting philosophies (sovereign, domain-specific model stacks versus centralized data warehouses) were highlighted in the AI-Driven Enforcement discussion [S58], and policy evaluation frameworks that move beyond model-centric assessment to include data-centric factors were outlined in the Advancing Scientific AI policy evaluation report [S59].
Overall Assessment

The discussion revealed several points of contention: the role of foundational models versus application‑layer focus, the appropriate market segment for scaling AI solutions, incentive mechanisms for data contributors, and compliance with privacy regulations. While all speakers concurred on AI’s transformative potential, they differed on implementation pathways and business models.

Moderate – disagreements are primarily strategic and operational rather than ideological, suggesting that consensus on AI’s benefits exists but coordination on execution, regulation, and market inclusion will require further dialogue.

Partial Agreements
All presenters agree that AI can dramatically improve efficiency and service delivery in their respective domains (industrial automation, call‑center operations, smart‑city mapping, medical imaging, voice services) and that a product‑focused, non‑pitch approach is appropriate for the summit ([11-12][68-71][118-124][288-290][360-363]). However, they diverge on the technical pathways, data strategies, and market models to achieve these goals.
Speakers: Ravindra Kumar, Vaibhavath Shukla, Pradyum Gupta, Meenal Gupta, Vivek Gupta
Automation of industrial robotics using agentic AI End‑to‑end voice‑driven call‑center automation Real‑time city‑wide mapping and video analytics from dash‑cams AI‑assisted cancer imaging and treatment planning Indian‑language voice architecture platform
Takeaways
Key takeaways
The summit focused exclusively on product discussions, avoiding pitches, funding talks, or business‑only presentations.
Ravindra Kumar (Technodate AI) presented an agentic‑AI platform that automates the entire lifecycle of industrial robotics, from concept design through deployment, commissioning, and troubleshooting, and highlighted the need for a foundational model despite funding constraints.
Vaibhavath Shukla (Quonsys AI) showcased an end‑to‑end voice‑driven call‑center automation solution that operates without a human in the loop, uses a per‑minute subscription model, and leverages a proprietary data engine to fine‑tune Indian‑language voice models.
Pradyum Gupta (Papri Labs) described a city‑wide, real‑time mapping and video‑analytics platform built from dash‑cam and CCTV feeds, sold on a tile‑per‑day basis, with privacy measures (face/plate blurring, bare‑metal European servers) to meet DPDP compliance.
Meenal Gupta (EasyOPI / Imagix AI) introduced an AI‑assisted cancer imaging and radiation‑treatment‑planning system that is HIPAA‑compliant, ISO 13485 certified, and operates with a human‑in‑the‑loop workflow, achieving 92‑99% accuracy after training on 5 million medical images.
Vivek Gupta (Indus Labs AI) announced a sovereign, low‑latency Indian‑language voice architecture platform (STT, TTS, LLM, speech‑to‑speech) with a DIY no‑code flow builder, dialect‑specific models, 70% cost reduction versus global providers, and a white‑label partner strategy.
Common technical themes emerged: the debate over building proprietary foundational models versus focusing on the application layer; the importance of proprietary data engines for scaling; handling petabyte‑scale data while ensuring privacy; and the need for dialect‑specific, low‑latency models for Indian markets.
Business‑model trends included DIY/agentic AI licensing, usage‑based (per‑minute or per‑tile) pricing, and targeting large‑enterprise customers to achieve economies of scale.
Regulatory and trust considerations were highlighted across domains: DPDP compliance for video data, HIPAA/ISO/SEDESCO certifications for medical AI, and sovereign data residency for voice platforms.
Resolutions and action items
Founders were invited to continue one‑on‑one discussions with interested audience members after the session.
Ravindra Kumar agreed to pursue development of a foundational model after further customer experiments.
Vaibhavath Shukla will continue scaling the proprietary data engine and increase concurrency capacity beyond the current 50 concurrent sessions.
Pradyum Gupta committed to maintaining DPDP compliance via blurring and bare‑metal hosting, and to exploring incentive structures for dash‑cam data providers.
Meenal Gupta will keep the human‑in‑the‑loop validation process and expand deployments across additional Indian states.
Vivek Gupta will expand dialect coverage, onboard more channel partners (e.g., in Dubai and Germany), and continue optimizing latency and cost for the voice platform.
Unresolved issues
How to fund and technically build a robust foundational model for industrial automation without large capital investment.
Scalability of voice models for massive concurrent usage (e.g., preventing failures like the Servam incident).
Concrete incentive mechanisms for dash‑cam owners or transport operators to contribute data at scale.
Detailed DPDP compliance workflow for petabyte‑scale video data, especially regarding cross‑border data transfers.
Pricing strategy for small‑to‑medium enterprises versus large‑scale customers across the presented solutions.
Long‑term trust and validation processes for medical AI beyond current accuracy metrics and radiologist oversight.
Regulatory clearance pathways for deploying autonomous robotics solutions in high‑risk industrial settings.
Suggested compromises
Moderator Archana allowed presenters to use technical jargon but also encouraged simplification for non‑AI audiences.
Ravindra Kumar chose to iterate with customer deployments before committing to a full foundational model, balancing resource constraints with product validation.
Vaibhavath Shukla adopted a per‑minute subscription model rather than a fixed licensing fee to accommodate varied usage patterns.
Pradyum Gupta employed tile‑based, per‑day pricing and bare‑metal European hosting to address privacy concerns while still offering a commercial product.
Meenal Gupta positioned the AI system as an assistive tool with human‑in‑the‑loop oversight, mitigating trust concerns while delivering automation benefits.
Vivek Gupta offered a no‑code DIY platform with extensive tutorials and support, lowering the barrier for enterprises to adopt without heavy engineering effort.
Thought Provoking Comments
The only way you can learn … is by listening to each other. No pitch, no business or funding talk – only product.
Sets a clear, non‑commercial framework that encourages deep technical sharing rather than sales, establishing a collaborative atmosphere.
Guided the entire session to focus on product details; participants framed their presentations accordingly and audience questions stayed product‑centric.
Speaker: Archana Jahargirdar
We are aiming to automate automation itself… we want to make automation as easy as DIY using agentic AI.
Introduces the meta‑concept of ‘automation of automation’, pushing the conversation beyond typical AI applications to a higher abstraction level.
Shifted the discussion toward the challenges of building foundational models and sparked later dialogue about the necessity of such models versus application layers.
Speaker: Ravindra Kumar
Even if OpenAI builds a foundational model, you still have to build the application layer. Model can become ASI, but the application is still needed.
Highlights a strategic viewpoint that separates model development from real‑world deployment, questioning the assumption that large models alone solve industry problems.
Prompted a deeper examination of practical deployment constraints, influencing subsequent speakers (e.g., Vaibhavath and Vivek) to emphasize their own application‑focused platforms.
Speaker: Ravindra Kumar
India is the customer support capital of the world… the entire model is outdated in the agentic era. We can automate the whole call centre end‑to‑end.
Identifies a massive market (India’s $55 billion call‑centre industry) and proposes a disruptive, fully automated solution, expanding the scope from niche AI tools to industry‑wide transformation.
Generated immediate audience queries about implementation details, leading to concrete explanations of architecture (web‑socket handshakes, per‑minute pricing) and moving the conversation toward practical usage scenarios.
Speaker: Vaibhavath Shukla
We never export video content; faces and number plates are blurred; we run on bare‑metal servers in Europe, not on hyperscalers.
Directly addresses data‑privacy and regulatory compliance (DPDP) concerns, showing a concrete strategy for handling sensitive visual data.
Shifted the tone to regulatory compliance, prompting follow‑up questions about incentives for data contributors and reinforcing the seriousness of privacy in AI deployments.
Speaker: Pradyum Gupta
We are not replacing doctors; we assist them. Final approval is always by radiologists – a human‑in‑the‑loop approach.
Acknowledges trust issues in health‑tech AI and offers a pragmatic solution that balances automation with professional oversight, crucial for adoption in medical settings.
Reassured the audience about safety and trust, differentiating the product from black‑box AI and leading to a smoother acceptance of the technology.
Speaker: Meenal Gupta
We are building the voice operating system of India, focusing on dialect diversity and sovereign data residency.
Highlights linguistic fragmentation in India and the strategic importance of data sovereignty, positioning the platform as uniquely suited to local needs.
Introduced a new dimension—regional language support—that broadened the discussion from generic voice AI to culturally and legally tailored solutions, prompting interest in latency and localization.
Speaker: Vivek Gupta
Emotional handling is core; we launched an emotion model to recognize happiness, anger, etc., during calls.
Adds affective computing to the technical roadmap, suggesting that true conversational AI must understand emotions, not just words.
Elevated the conversation to the next level of AI sophistication, leading to questions about real‑world effectiveness and differentiating the platform from competitors.
Speaker: Vivek Gupta
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from a high‑level, product‑only premise to concrete technical, market, and regulatory challenges. Archana's opening rule set a collaborative tone, while Ravindra's meta‑automation and model‑versus‑application insights reframed the technical debate. Vaibhavath's bold claim about automating India's massive call‑centre sector sparked practical implementation questions, and Pradyum's privacy response introduced regulatory depth. Meenal's human‑in‑the‑loop stance built trust in health AI, and Vivek's focus on dialect diversity, data sovereignty, and emotional intelligence expanded the scope to cultural and affective dimensions. Collectively, these comments redirected the flow, deepened the analysis, and highlighted the multifaceted hurdles (technical, market, compliance, and trust) that founders must navigate.

Follow-up Questions
How can the AI agent be integrated into a phone to answer calls according to specific requirements?
Clarifies the technical feasibility of deploying the voice AI directly on end‑user devices, which is essential for real‑world adoption.
Speaker: Audience (unidentified participant)
Can institutes or companies use the solution as a standalone product or is it only subscription‑based?
Determines the business model and accessibility for potential B2B customers.
Speaker: Audience (unidentified participant)
How are you scaling foundational models for voice AI, given issues like Servam breaking?
Addresses reliability and scalability challenges of large language models in production environments.
Speaker: Audience (unidentified participant)
How are you ensuring DPDP (Digital Personal Data Protection) compliance while handling petabytes of video data containing personal information?
Legal and ethical compliance is critical when processing large volumes of personally identifiable visual data.
Speaker: Audience (multiple participants)
What incentives are offered to dashcam holders (e.g., DTC buses) to collect data?
Understanding incentive structures is key to sustaining data collection pipelines.
Speaker: Audience (unidentified participant)
How do you build trust in AI‑driven cancer treatment planning among clinicians and patients?
Trust and validation are essential for adoption of AI in sensitive health‑care contexts.
Speaker: Audience (unidentified participant)
Is the platform a DIY flow builder where users just click to start agents, or do they need to manually connect nodes?
Clarifies the usability and low‑code/no‑code nature of the voice‑AI platform, impacting user onboarding.
Speaker: Audience (unidentified participant)
How did the founders transition from their previous jobs to start these ventures?
Provides insight into founder journeys and the challenges of leaving established careers.
Speaker: Audience (unidentified participant)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare


Session at a glanceSummary, keypoints, and speakers overview

Summary

At the India AI Summit, Cloudflare CEO Matthew Prince outlined a vision for the future of artificial intelligence, drawing a historical parallel with the spread of the printing press to illustrate how transformative technology can democratize knowledge [7-15]. He argued that we are at a similar turning point today, where AI must be distributed rather than concentrated, echoing the rapid, decentralized diffusion of printing in the 15th century [18-19].


Prince proposed five guiding principles. First, AI should be owned by hundreds of thousands of companies worldwide rather than a handful, ensuring true democratization [24-26][118-119]. Second, content creators, journalists, and researchers must be compensated for their work instead of having their material merely scraped and regurgitated by AI systems [28-30][33-34]. Third, small businesses, especially in the Global South, need tools to compete in an increasingly agent-driven commerce environment, preventing AI from becoming a consolidator of power [31-34]. Fourth, AI must respect and amplify diverse cultures and languages, avoiding a homogenizing “Americanization” of the world and ensuring humanity is enhanced, not erased [35-38][39-40]. Finally, the technology should be affordable and accessible to the poorest, not restricted to those who can afford high subscription fees [40-42][44-48].


Prince warned that the traditional internet business model, based on traffic and ad revenue, is eroding: search engines and AI firms now scrape vast numbers of pages while sending back few human visitors, threatening creators’ livelihoods [73-82][83-89][90-96]. He suggested a new reward system that pays creators for advancing human knowledge rather than for generating sensationalist traffic, aligning incentives with AI companies’ desire to fill “holes” in collective knowledge [101-104][108-112].


Although Cloudflare is not an AI company, it underpins over 20 % of global internet traffic and serves more than 80 % of leading AI firms, positioning it as a broker between content creators and AI developers [56-64][65-71]. To promote the five principles, Cloudflare is deploying top AI models on its global network, regionalizing them for local languages and laws, and launching programs such as the “AI for Bharat” multilingual suite and a large Indian startup accelerator that offers free credits and education [124-132][136-138][139-140]. The company also emphasizes secure-by-design, affordable infrastructure to prevent the need for massive capital outlays, thereby lowering barriers to entry for new AI ventures worldwide [141-147].


Prince concluded by urging all stakeholders to adopt these five values to ensure AI remains open, equitable, culturally diverse, and universally accessible, shaping a future where the technology benefits the entire global community [148-153].


Keypoints


Major discussion points


Democratize and decentralize AI – Prince argues that AI should not be controlled by a handful of firms but be spread across hundreds of thousands of companies worldwide, echoing the diffusion of the printing press ([24-26]; [118-119]; [150-152]).


Create sustainable business models that compensate creators – He stresses that content creators, journalists, researchers and small businesses must be paid for the value they generate rather than having their work scraped and regurgitated by AI systems ([27-30]; [31-35]; [90-97]; [101-104]).


Preserve cultural and linguistic diversity – AI must respect and amplify local cultures, languages, and identities instead of homogenizing everything into a single (often Western) perspective ([35-38]; [136-138]).


Ensure global accessibility, especially for the Global South – The technology should be affordable and reachable for the poorest regions, with Cloudflare providing free credits, education, regional models, and infrastructure to lower entry barriers ([40-42]; [120-124]; [126-138]; [145-147]).


Cloudflare’s concrete actions as an enabler – Prince outlines how Cloudflare’s global network, AI-model hosting, accelerator programs, and “secure-by-design” approach are intended to operationalize the five-point framework ([56-65]; [124-138]; [141-147]).


Overall purpose / goal


Prince’s speech is a call to shape the future of artificial intelligence around five guiding principles: decentralization, creator compensation, support for small businesses, cultural preservation, and universal accessibility. He urges policymakers, industry leaders, and civil society to adopt these values and presents Cloudflare’s role as a facilitator that can help build the infrastructure, education, and equitable business models needed to achieve them.


Overall tone


The tone begins with a reflective, historical analogy that sets an optimistic, visionary mood. It then shifts to a more urgent, cautionary stance as Prince highlights risks of centralization, loss of revenue for creators, and cultural homogenization. The remainder of the talk becomes solution-focused and hopeful, emphasizing collaborative action, concrete initiatives, and the potential for a more inclusive AI ecosystem. Throughout, the tone remains persuasive and forward-looking, moving from concern to constructive optimism.


Speakers

Matthew Prince


Role/Title: Co-founder and Chief Executive Officer of Cloudflare; World Economic Forum Technology Pioneer; Council on Foreign Relations member.


Areas of Expertise: Internet infrastructure, cybersecurity, cloud services, AI policy, technology entrepreneurship. [S1][S3]


Speaker 1


Role/Title: Event moderator/host (introducing the keynote speaker).


Areas of Expertise: (not specified)


Additional speakers:


(none identified beyond the listed speakers)


Full session reportComprehensive analysis and detailed insights

Matthew Prince opened his keynote at the India AI Summit by thanking the audience, expressing his excitement for the event, and noting the upcoming gathering in Geneva [2-4][5-7].


Invoking his background as a former history professor, Prince drew a parallel with the invention of the printing press. He explained that the press, invented near Mainz in Germany, quickly spread across Europe, reaching Paris, Rome, the Netherlands, Spain, and London within a few decades, through itinerant German technicians who set up shops with local investors and printed local laws, languages, and culture in many cities, a diffusion that could not be centrally gated [8-14][15-18][19-20].


Prince highlighted Cloudflare’s scale, noting that its infrastructure carries more than 20 % of global Internet traffic and serves over 80 % of leading AI companies, positioning the firm as a broker that can help distribute AI capacity without becoming a monopolist [56-65][66-71]. He emphasized that Cloudflare is not an AI company and does not own its own AI model [70-73].


Prince outlined five goals for AI: (1) decentralise ownership to hundreds of thousands of firms worldwide [24-26]; (2) create business models that fairly compensate creators, journalists, and researchers [27-35]; (3) preserve the small-business ecosystem, especially in the Global South, by providing tools for agentic commerce [31-42]; (4) protect cultural, linguistic, and regional diversity, avoiding a monolithic “American-isation” of AI [35-38]; and (5) ensure universal, affordable access, even for the poorest populations [40-42][145-147].


Prince argued that AI should be controlled by roughly 500,000 entities rather than a handful of firms, warning that concentration would repeat the gate-keeping patterns seen in telecoms, social networks, and hyperscalers, and that a broad-based ecosystem is essential for a healthy AI future [115-119].


Regarding creator compensation, Prince said the traditional Internet business model (creating compelling content, driving traffic, and monetising via ads or subscriptions) is eroding because AI now delivers answers directly to users. He noted that “human eyeball traffic, the current currency of the internet, is going away,” and that AI-generated answers are supplanting traditional page views [73-82]. He cited data showing that Google now returns one human visitor for every 30 pages scraped, while Anthropic scrapes 500,000 pages per visitor, dramatically reducing traffic-based revenue for publishers [83-89]. To counter this, he illustrated the knowledge-gap problem with a “Swiss-cheese” metaphor, arguing that AI companies will pay creators to fill the holes representing unknowns in human knowledge [101-108], aligning creator rewards with the need to improve AI datasets [105-107].
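The crawl-to-referral figures Prince cited reduce to a simple pages-scraped-per-visitor ratio. A minimal sketch using the numbers quoted in the talk (the dictionary labels and helper name are illustrative):

```python
# Pages scraped per human visitor referred back, as quoted in the talk.
# A higher ratio means less traffic (and ad/subscription revenue) per page crawled.
ratios = {
    "Google (10 years ago)": (2, 1),   # 2 pages scraped : 1 visitor sent back
    "Google (today)": (30, 1),
    "Microsoft": (70, 1),
    "OpenAI": (3_700, 1),
    "Anthropic": (500_000, 1),
}

def pages_per_visitor(scraped: int, visitors: int) -> float:
    """Pages crawled for each human visitor sent back to the publisher."""
    return scraped / visitors

# "15 times harder" in the talk is today's Google ratio over the old one: 30 / 2.
baseline = pages_per_visitor(*ratios["Google (10 years ago)"])
for name, (scraped, visitors) in ratios.items():
    r = pages_per_visitor(scraped, visitors)
    print(f"{name}: {r:,.0f} pages per visitor ({r / baseline:,.0f}x the old baseline)")
```

On these figures, Anthropic's ratio is 250,000 times the old Google baseline, which is the quantitative core of Prince's argument that traffic-funded publishing cannot survive unchanged.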


On supporting small businesses and the Global South, Prince warned that AI-driven agents could erode personal, relationship-based commerce, especially for enterprises that rely on convenience rather than algorithmic optimisation. He called for tools that enable these firms to survive and thrive, citing Cloudflare’s initiatives (free credits, a large Indian startup accelerator, and education programmes) that lower entry barriers for emerging markets [124-132][136-138][145-147]. By making top AI models runnable on Cloudflare’s global network, developers can deploy them in the city where users reside, reducing latency and infrastructure costs [126-129].


Cultural and linguistic diversity formed another pillar of Prince’s argument. He stressed that AI must honour local laws, languages, and customs, warning against a single, Western-centric worldview [35-38]. Cloudflare’s “AI for Bharat” project, which supports 22 official Indian languages, exemplifies this regionalisation, and the accompanying IIT build-a-thon showcased how students can create locally relevant AI applications [136-138][139-140].


Affordability and security were presented as cross-cutting requirements. Prince argued that AI should not be limited to organisations that can afford trillion-dollar data centres; instead, efficient, secure-by-design systems must be built to keep costs low and accessible to all [141-147]. He highlighted Cloudflare’s work on more efficient hardware and software stacks to pass savings on to users, preventing a new “nuclear-power-plant” barrier to entry [146-147].


Prince concluded by urging policymakers, industry leaders, and civil society to adopt the five values he outlined, noting that the world stands at a crossroads where decisive action is needed [49-53]. He thanked the AI Summit hosts and reiterated his excitement for continued collaboration at the upcoming Geneva summit [154-155].


Session transcriptComplete transcript of the session
Speaker 1

Ladies and gentlemen, please welcome Mr. Matthew Prince, CEO, Cloudflare.

Matthew Prince

Thank you. Thank you. It’s an honor to be here at India’s AI Summit, and I look forward to what we’ll be doing in Geneva next year. I know that here I’m supposed to be talking about the future, but forgive me for a second. I used to be a professor, sometimes teaching history. And so I think sometimes in order for us to understand the future, it’s actually good for us to understand some of the past. The past we start with, and what the previous speakers were talking about, was another technological marvel: the birth of the printing press. The printing press started as transformative technology built in Germany, just outside of Mainz. And it was, though, not held there, not contained there, but spread incredibly quickly across the whole of Europe, expanding not so that it was in any one place, but to a thousand cities within less than 60 years, which at that time was remarkable.

It started in Germany, but it was never just a German thing. By 1470, there were presses in Paris and Rome. By 1473, the Netherlands and Spain. By 1476, in London. German technicians who learned from Gutenberg literally walked across Europe with that knowledge and shared it across all of Europe. And they would set up a shop in a new city, find a local investor, a merchant or a bishop, and then start printing local laws, local languages, local cultures. And because the technology was not centrally controlled, no single country could gatekeep it or shut it down.

This was one of those once-in-a-lifetime moments where technology spread and the world got better as a result. And I think today we are at that same turning point. And so, inspired by the Honorable Prime Minister’s words yesterday, I thought I would frame what I think of as a framework of five things that we should all be playing for. And I think we can almost all agree that these things, if AI delivers them, will be better than if it doesn’t. So the first is: much like the printing press, this should not be a technology which is controlled by five companies. It should be 500,000 companies, and those companies should be spread around the world.

We need to make sure that, as the honorable prime minister said, we democratize this technology and make it available for everyone and anyone. Secondly, we need to make sure that we’re building business models around this technology. Too often today in the early times of AI, AI takes but it does not give back. We need to make sure that content creators, that journalists, that academics, that researchers are able to be compensated for the hard work that they do to create their content, rather than just having that content taken, regurgitated, and spit back through AI systems. And this is one of the key challenges that we have to think about as we go forward. We also need to make sure that what has thrived in the early Internet, small businesses, individual entrepreneurs, the global South being able to ship to the world, that that needs to be done.

That needs to be able to continue, as opposed to AI being a consolidator. And what I worry about is the fact that the small businesses that most of us do business with today, the relationship that we have with them is personal or based on mere convenience. Your AI agent isn’t going to necessarily care about those things. And so we need to make sure that small businesses, and especially those in the global South, have the tools to be able to survive as the world moves to more and more agentic commerce. We also need to recognize that unique cultures and unique identities, languages, shouldn’t be homogenized by AI. There is no one universal culture, and we can’t forget those things that make each region and each part of the world unique.

AI needs to respect and actually emphasize that. We don’t want to make the mistake of just merely Americanizing the world, but instead we want to honor the culture of all of those places around the world and honor those things that have made us unique. AI shouldn’t remove our humanity, it should accelerate it and enhance it. And finally, we need to make sure that the technology is available to all, especially the poorest of those in the global south. This can’t be something where you can only get the latest, unbiased, unfiltered, highest technology if you can afford to spend thousands of dollars per month on a subscription. There needs to be a business model that allows AI to be available to the broadest set of users and make sure that we aren’t leaving people behind with this incredibly powerful technology.

That’s the framework that I would aim for. One where AI is distributed, not controlled. One where AI is actually enabling creators and research. One where AI is enabling businesses, small and large, to compete on a fair playing field. One where AI is bringing about our humanity and our differences, not homogenizing us. And one where it is available to all, not only held by the rich. I think that’s something that most of the people in this room can agree to. And I think that as we think about policy and as we think about technology, we should be thinking about making sure that we are moving in that direction, moving towards all five of those goals, not moving away from them.

Unfortunately, we are not yet there. And I think we are at a crossroads and we need to all, whether in business or government or civil society, be thinking about what are the actions that we can take in order to achieve those five milestones. So how am I the person here talking about this? What in the world gives me any right to be up here speaking? Cloudflare runs one of the world’s largest networks. We have presence in over 120 countries, more than 300 cities worldwide. We see an enormous percentage of the world’s global Internet traffic. Over 20 % of the Internet sits behind us. And so we are not an AI company. We don’t have a model ourselves. But today, over 80 % of the leading AI companies use us.

So a huge percentage of the Internet uses us. A huge percent of the AI companies use us. And we sit in between those things and are working towards our mission, which is to help build a better Internet. When I say help, that word is really important. We don’t believe that we can do it alone. We believe that we need the work of all of the people in this room in order to contribute to that. But we do see and can act as a broker between these two sides, the content creators on one, the AI companies on the other, trying to figure out what is that future of the internet going to be? What does it look like?

How can we make sure that it continues to achieve all of those goals? And there are some real challenges. The internet that we know today was really built based on a very simple formula. And that formula was: create great content that drove traffic, and then monetize that traffic through either selling things, subscriptions, or ads. And if you think about it, that’s how the internet was funded over that period of time. And Google was the great patron of funding that. In fact, the way that we can measure how this has changed is to actually look at how Google’s behavior has changed. Ten years ago (we have data on this from Cloudflare), for every two pages that Google scraped on the internet, they sent you back one human visitor.

And with that human visitor, again, you could sell them something, you could show them an ad, you could get them to subscribe to whatever you were doing. That was the business model of the internet. And that’s what caused the internet to flourish. But that business model is fading away. If you look at Google themselves, they have gotten to the point that for every 30 pages they scrape today, they only send you one. It’s gotten 15 times harder to get traffic from a Google search. Microsoft is even worse, 70 to one. But that’s the good news. If we look at the pure AI companies: OpenAI, 3,700 pages taken from the internet for every one visitor they send back. And in Anthropic’s case, 500,000, a half a million pages scraped for every one visitor you send back.

The world is going to look more like Anthropic over time. And that is going to put pressure on what has been the historic business model of the internet. And what I worry about is that researchers, journalists, small businesses are going to get crushed by this change unless we recognize it and try and figure out what is a new way of dealing with this. How are we able to stay in front of these changes? What is the new business model of the internet going to look like? And so when we think about this: human eyeball traffic, the current currency of the internet, is going away. It is going away, and it is never going to return in the same way.

We are all getting our answers more from AI than from original sources. And so we have to figure out some new way in order to compensate creators. And that might sound very pessimistic, but I actually am optimistic about it. Because you see, it turns out that what we really want to compensate people for, for a better internet, is not repeating the mistakes of the internet’s past. The internet was never built with security in mind. We should be thinking about that with AI. And it was always wrong to equate traffic with value. There are a lot of things that are salacious, that generate a lot of traffic, but don’t actually further human knowledge.

And so there’s an opportunity, as we think about what the new business model of the internet is, to try and figure out a reward system that actually rewards creators for furthering human knowledge. And what’s amazing is this is directly aligned with what the AI companies want. If you think about it, for the first time in human history, we have something close to a mathematical model of all of human knowledge. It’s not perfect, but that’s what the sum total of the AI systems that we have are today. Taken that way, they are a way of quantifying what we know and what we don’t know. And what’s interesting is I think of it as like a giant block of Swiss cheese.

And that block has a lot of cheese in it, but it also has a lot of holes. And those holes are the places where there are gaps in human knowledge. And what the AI companies want, what all of us actually want, is for those holes to be filled. And if we could create a system where creators are actually rewarded for filling in those blanks, those holes in the Swiss cheese, rewarding people not for creating content which is rage-baiting, content which makes people angry, content which is designed just to provoke, but instead content which is designed to further human knowledge, that is something that we have a market for today and that the AI companies are excited to pay for.

What we also have to think about is how we avoid the cycle of centralization and control. And we’ve seen this with technology over and over again. Telecoms exhibited it, social networks exhibited it, the hyperscalers are exhibiting it. And there is real risk that if we don’t make it so that more and more people can create an AI company, if we end up with a world of five AI companies, not 500,000, that is worse for everyone around the rest of the world. And so what we’re trying to do is think about how we can create and how we can make sure that anyone, anywhere in the world has the tools and the knowledge and the ability to compete in this incredibly exciting space.

We need to stop the consolidation of AI and, again, lead to 500,000 companies, not just five. So what we’re fighting for at Cloudflare, as an example, and what I would ask that anyone who is playing in this space fights for, is how do we level the playing field and make sure that everyone around the world can participate in what is this incredible technology? We need to make sure that AI is coming to all the parts of the world, including the Global South. And I am inspired by the stories of startups and students here in India that are inventing an AI future. We need to make sure we cultivate an environment where that AI future can grow and it doesn’t get stifled by a handful of companies that are out there.

So at Cloudflare, what specifically are we doing to make sure that this is the case? We’re trying to figure out how we can make content accessible and widely available to everyone, all around the world. That means taking the top models and making them available across our global network so they can be run in the city where you are actually living. It also means that we should make these systems easy to use and enroll in, so that you don’t have to have a degree in computer science to start playing with AI models.

What we also are doing is actually funding the education of both startups and students to do this. So we have our own startup accelerator, and India is the second-largest country by the number of participants who come from here. And it’s amazing to see what all of the startups in India are creating. We’re proud of the fact that we are giving enormous credits so that startups trying to build that next generation and take on some of those giants can use our services for free. We’re also trying to make sure that this is adaptable and multimodal around the world, so we have added the ability to roll out models across our platform that support all of the different things you need, wherever you are in the world.

And those models should be regionalized so that they can be trained on local laws, local languages, and local cultures. I’m proud of the fact that we have done this with AI for Bharat, which we rolled out with 22 official languages across all of India and made available for students in India to experiment with and try. And it’s incredible what we’re seeing people build with these models. We also launched an IIT build-a-thon to take this further with AI for Bharat and Cloudflare Workers AI, and it’s incredible what the students there were able to build and deliver. We also need systems that are secure by design. That’s the key to what we’re doing.

We need to not make the same mistakes that we made with the internet before. And we need to make sure that it’s actionable and affordable. It can’t be that you have to have trillions of dollars of budget, or stand up your own nuclear power plant, in order to be the next AI company. And so we’re designing systems and working not just to ask how much money we can throw at the problem, but how we can make these systems more efficient so that we can pass on that savings and make it more affordable for everyone. This is the work we’re doing at Cloudflare, and I would challenge anyone in the audience, if you’re working in AI, to strive for these five values.

How can we make sure that everyone has a chance to participate in the AI economy? We want to make that available for the world. We can’t say that this is going to be a technology restricted to literally five companies in one postal code in San Francisco. It needs to be available to the world. We’re here to help. I appreciate all of the effort and the great hosts of the AI Summit in India, and I’m looking forward to Geneva. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (17)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Matthew Prince opened the India AI Summit by thanking the audience and expressing gratitude for being there in India.”

The plenary transcript records Prince thanking the audience and saying it is an honor to be at the summit in India, confirming his opening remarks and gratitude [S58] and his presence on stage [S61].

Confirmed (high)

“Prince invoked his background as a former history professor and drew a parallel between AI and the invention of the printing press, which originated near Mainz, Germany and quickly spread across Europe.”

The keynote notes describe Prince introducing himself as a former history professor and comparing today’s AI diffusion to the 15th-century spread of Gutenberg’s press that began in Germany, confirming both his academic background and the historical analogy [S7] and [S8].

Additional Context (medium)

“Prince explained that the printing press spread through itinerant German technicians who set up shops with local investors, printing laws, languages and culture in many cities, a diffusion that could not be centrally gated.”

While the knowledge base confirms the overall analogy to the press’s rapid European diffusion, it adds nuance by emphasizing Prince’s focus on the decentralized, grassroots nature of that spread, though it does not list specific cities or technicians [S7] and [S8].

Confirmed (high)

“Prince argued that the traditional Internet business model—driven by traffic and ad revenue—is eroding because AI delivers answers directly, reducing the value of human eyeball traffic.”

Prince’s call for new creator-compensation models that move away from traffic-based revenue is echoed in the knowledge base, which notes his argument that internet content will need business models based on quality rather than page views [S1].

External Sources (61)
S1
Open Internet Inclusive AI Unlocking Innovation for All — Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than…
S2
https://app.faicon.ai/ai-impact-summit-2026/open-internet-inclusive-ai-unlocking-innovation-for-all — Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than…
S3
Defending the Cyber Frontlines / Davos 2025 — – Matthew Prince: CEO of Cloudflare Matthew Prince: Absolutely. So Cloudflare’s, our mission is to help build a bette…
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — “One where AI is distributed, not controlled.”[1]. “We need to stop the consolidation of AI and, again, lead to 500,000…
S8
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — Thank you. Thank you. It’s an honor to be here at India’s AI Summit, and I look forward to what we’ll be doing in Geneva…
S9
Protecting Democracy against Bots and Plots — In summary, Cloudflare utilizes AI and machine learning to anticipate and address threats and vulnerabilities, while pro…
S10
Embracing the future of e-commerce and AI now (WEF) — Moving on to the third speaker, they focused on how AI will become a new trend and transform the business environment. I…
S11
Powering AI Global Leaders Session AI Impact Summit India — But I was a history major in college, so I get to play amateur historian, emphasis on amateur. Everyone has their own fa…
S12
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Many online platforms profit from journalistic content without adequately compensating those who produce it. The analys…
S13
A Decade Later-Content creation, access to open information | IGF 2023 WS #108 — Continued efforts are required to deploy reliable infrastructure with affordable pricing options. Content creators striv…
S14
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Audience:Hamid Hawja is my name, from Morocco, director of Hebdo magazine. I have two questions. First, I’m just wonderi…
S15
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S16
Inclusive AI_ Why Linguistic Diversity Matters — And so when I joined Current AI early this year, multilingual diversity was already a topic. And I was very happy about …
S17
WS #148 Making the Internet greener and more sustainable — There is a need to balance sustainability efforts with ensuring affordable access, especially in developing regions
S18
Prosperity Through Data Infrastructure — In terms of bridging the digital divide, it is vital to ensure that connectivity and AI technologies are accessible to a…
S19
Policy Network on Meaningful Access: Meaningful access to include and connect | IGF 2023 — In addition to providing digital access, libraries are also unique aggregators of ICT resources in a community. The Giga…
S20
Open Forum #13 Bridging the Digital Divide Focus on the Global South — Tripti Sinha: Thank you very much, and thanks to the World Internet Conference for convening this very important discuss…
S21
The potential of technical standards to either strengthen or undermine human rights and fundamental freedoms in case of artificial intelligence systems and other emerging technologies — Audience:Thank you, Nikki. Good morning, everyone. My name is Patrick Day from Cloudflare. Thank you so much for having …
S22
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Second, there needs to be a business model for journalists, content creators, and small businesses, because left to its …
S23
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — “There needs to be a business model that allows AI to be available to the broadest set of users and make sure that we ar…
S24
Prosperity Through Data Infrastructure — In terms of bridging the digital divide, it is vital to ensure that connectivity and AI technologies are accessible to a…
S25
WS #279 AI: Guardian for Critical Infrastructure in Developing World — To make AI-powered security solutions more accessible to developing countries, companies should consider implementing ti…
S26
The Future of Digital Agriculture: Process for Progress — Technologies must be easily accessible, economically viable for the lowest-income groups, relevant to the context, and s…
S27
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Economic | Legal and regulatory Looking at the AI act, you see that there’s a lot of attention for the providers and fo…
S28
Cloudflare chief warns AI is redefining the internet’s business model — AI is inserting itself between companies and customers, Cloudflare CEOMatthew Prince warnedin Toronto. More people ask c…
S29
Agentic AI in Focus Opportunities Risks and Governance — Great. Thank you. Hi, everyone. My name is Carly Ramsey. I lead public policy for Asia Pacific for Cloudflare. I’m based…
S30
Cloudflare acquires Human Native to build a fair AI content licensing model — San Francisco-based company Cloudflarehas acquiredHuman Native, an AI data marketplace designed to connect content creat…
S31
AI and human creativity: Who should hold the brush? — Economic structures that value human creativity:If AI can flood the market with ‘good enough’ content at minimal cost, w…
S32
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Many online platforms profit from journalistic content without adequately compensating those who produce it. This examp…
S33
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — Prince argues against cultural homogenization through AI, emphasizing that technology should preserve and celebrate regi…
S34
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Preserving multilingual societies is essential because different language structures enable different ways of thinking a…
S35
Discussion Report: AI Implementation and Global Accessibility — Awesome question, really. And I think, and it goes back to the point that I raised earlier, which is that the benefit of…
S36
A Digital Future for All (afternoon sessions) — AI governance requires a multi-stakeholder approach due to the diverse nature of opportunities, risks, and inclusivity c…
S37
Responsible AI in India Leadership Ethics & Global Impact — And the goal is for an infrastructure layer for content trust that anyone can adopt. It shouldn’t be owned by any one co…
S38
Multistakeholder Partnerships for Thriving AI Ecosystems — Both speakers emphasize that technology must be made accessible and available to all, not concentrated in the hands of a…
S39
AI in Action: When technology serves humanity — For many small business owners, the biggest challenge is not vision but capacity. Someone running a family coffee roasti…
S40
Embracing the future of e-commerce and AI now (WEF) — Moving on to the third speaker, they focused on how AI will become a new trend and transform the business environment. I…
S41
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — “One where AI is distributed, not controlled.”[1]. “We need to stop the consolidation of AI and, again, lead to 500,000…
S42
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Global governance of AI is a precursor for a democratic development and evolution. And we need to continue to develop an…
S43
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — Prince opened with a historical parallel, comparing today’s AI development to the transformative spread of the printing …
S44
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Many online platforms profit from journalistic content without adequately compensating those who produce it. The analys…
S45
A Decade Later-Content creation, access to open information | IGF 2023 WS #108 — Content creators face the struggle of finding a sustainable model to continue their mission of educating and engaging pe…
S46
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Rather than having companies monetize user data without compensation, marketplace systems could be developed where data …
S47
Cultural diversity — While AI can help preserve cultural diversity, it is crucial to shed light on the problem of cultural homogeneity when d…
S48
How Multilingual AI Bridges the Gap to Inclusive Access — Communities should preserve their own cultures and languages rather than having it done for them in a condescending way,…
S49
Inclusive AI_ Why Linguistic Diversity Matters — And so it needs. It needs to be privacy -preserving. It needs to be held by an actor you trust, even if you don’t go and…
S50
Inclusive AI_ Why Linguistic Diversity Matters — The successful demonstration challenges assumptions about cloud-based AI necessity, opening possibilities for deployment…
S51
Prosperity Through Data Infrastructure — In terms of bridging the digital divide, it is vital to ensure that connectivity and AI technologies are accessible to a…
S52
WS #148 Making the Internet greener and more sustainable — There is a need to balance sustainability efforts with ensuring affordable access, especially in developing regions
S53
Open Forum #13 Bridging the Digital Divide Focus on the Global South — Tripti Sinha: Thank you very much, and thanks to the World Internet Conference for convening this very important discuss…
S54
Fostering Global Digital Cooperation for Prosperity — Developing countries require tailored affordability and pricing models to access digital technologies and infrastructure…
S55
Defending the Cyber Frontlines / Davos 2025 — – Matthew Prince: CEO of Cloudflare 1. Public-Private Partnerships: Prince emphasised the crucial role of collaboration…
S56
The potential of technical standards to either strengthen or undermine human rights and fundamental freedoms in case of artificial intelligence systems and other emerging technologies — Audience:Thank you, Nikki. Good morning, everyone. My name is Patrick Day from Cloudflare. Thank you so much for having …
S57
Cloudflare launches Moltworker platform after AI assistant success — The viral success of Moltbot has prompted Cloudflare tolaunch a dedicated platformfor running the popular AI assistant. …
S58
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Thank you for inviting me to this important summit. It is an honor to be here in India at this pivotal moment for global…
S59
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — And second, we need an inclusive global governance framework with the UN as our vehicle of choice. I want to congratulat…
S60
Keynote Addresses at India AI Impact Summit 2026 — And we’re doing it in a partnership with the world’s largest democracy, a nation of 1.4 billion people that share our v…
S61
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M
Matthew Prince
11 arguments · 183 words per minute · 2838 words · 925 seconds
Argument 1
AI must be owned by many, not a handful of firms (Matthew Prince)
EXPLANATION
Prince argues that AI should not be monopolized by a few dominant players but distributed across a vast number of entities worldwide. This decentralization mirrors the spread of the printing press and is essential for democratic access to the technology.
EVIDENCE
He explicitly states that AI should not be controlled by five companies but by 500,000 firms spread around the world, emphasizing the need for democratization of the technology [24-26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince emphasizes that AI should be distributed rather than controlled by a few firms, likening it to the printing press and calling for 500,000 companies instead of five [S7].
MAJOR DISCUSSION POINT
Decentralization of AI ownership
Argument 2
Enable anyone worldwide to create AI companies, preventing consolidation (Matthew Prince)
EXPLANATION
Prince calls for an environment where any individual or organization, regardless of location, can launch AI ventures, thereby avoiding concentration of power in a few firms. This openness is presented as a safeguard against future monopolies.
EVIDENCE
He warns against a world dominated by five AI companies and stresses the need to allow anyone, anywhere, to have the tools, knowledge, and ability to compete in the AI space, advocating for 500,000 companies instead of just five [116-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He calls for an environment where anyone can launch AI ventures, warning against consolidation into five dominant firms and advocating for 500,000 global companies [S7].
MAJOR DISCUSSION POINT
Preventing AI consolidation
Argument 3
Cloudflare’s infrastructure makes AI models run locally and supports 500,000 firms (Matthew Prince)
EXPLANATION
Prince describes how Cloudflare’s extensive global network underpins the distribution of AI models, enabling them to be executed close to users. This infrastructure is positioned as a catalyst for the decentralized ecosystem he envisions.
EVIDENCE
He notes that Cloudflare operates in over 120 countries, handles more than 20% of global Internet traffic, and that over 80% of leading AI companies rely on its services, allowing top AI models to be deployed across its global network so they can run in the city where users live [56-60][62-65][126-127].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince notes that Cloudflare operates in over 120 countries, handles more than 20% of global Internet traffic, and that over 80% of leading AI companies rely on its network to deploy models close to users [S7].
MAJOR DISCUSSION POINT
Infrastructure for distributed AI
Argument 4
AI should compensate journalists, researchers, and other creators rather than merely regurgitate their work (Matthew Prince)
EXPLANATION
Prince asserts that current AI practices often appropriate content without rewarding its original creators. He calls for business models that ensure journalists, academics, and other knowledge producers receive fair compensation for their contributions.
EVIDENCE
He highlights the need for content creators, journalists, academics, and researchers to be compensated for their work instead of having their content simply taken, regurgitated, and spit back by AI systems [28-30].
MAJOR DISCUSSION POINT
Fair compensation for creators
Argument 5
Propose a reward system that values knowledge creation over traffic volume (Matthew Prince)
EXPLANATION
Prince proposes shifting the internet’s value metric from traffic to the generation of genuine knowledge. He suggests rewarding creators for filling gaps in human understanding rather than for producing sensational or click‑bait content.
EVIDENCE
He describes a new reward system that would compensate creators for advancing human knowledge, using the metaphor of a Swiss-cheese block where AI companies want the holes (knowledge gaps) filled, and argues that AI firms are willing to pay for such valuable contributions [103-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He proposes shifting from traffic-based monetisation to a knowledge-based reward model that compensates creators for filling gaps in human understanding [S8].
MAJOR DISCUSSION POINT
Knowledge‑based reward model
Argument 6
AI should help small enterprises compete, preserving personal relationships in commerce (Matthew Prince)
EXPLANATION
Prince warns that AI‑driven agentic commerce could erode the personal, relationship‑based interactions that characterize many small businesses, especially in the Global South. He stresses the need for tools that enable these enterprises to thrive alongside larger players.
EVIDENCE
He points out that small businesses rely on personal or convenience-based relationships, which AI agents may not respect, and calls for tools that allow small and Global South businesses to survive as commerce becomes increasingly agentic [33-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince warns that AI-driven agentic commerce threatens small businesses that rely on personal relationships and calls for tools to help them survive; the need for SMEs to adapt is also highlighted in a WEF discussion [S8][S10].
MAJOR DISCUSSION POINT
Support for small businesses
Argument 7
Provide credits, accelerators, and education to startups in India and other emerging markets (Matthew Prince)
EXPLANATION
Prince outlines concrete initiatives by Cloudflare to nurture AI entrepreneurship in emerging economies, including financial credits, a startup accelerator, and educational programmes. These measures aim to lower barriers for innovators in regions like India.
EVIDENCE
He mentions Cloudflare’s startup accelerator (the second largest globally, with many Indian participants), generous free credits for startups, the AI for Bharat rollout in 22 Indian languages, and an IIT build-a-thon that showcased student creations [130-141].
MAJOR DISCUSSION POINT
Capacity building for emerging AI startups
Argument 8
AI must respect and highlight regional cultures and languages, avoiding a single, homogenized worldview (Matthew Prince)
EXPLANATION
Prince emphasizes that AI should preserve cultural and linguistic diversity rather than impose a monolithic, often Western‑centric perspective. He argues that honoring local identities is essential for an inclusive AI future.
EVIDENCE
He stresses that AI must not homogenize unique cultures and languages, warning against “Americanizing” the world and calling for AI to honor the distinct cultural heritage of every region [35-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He stresses that AI should honor diverse cultures and languages and avoid “Americanizing” the world, advocating for culturally aware AI systems [S7][S8].
MAJOR DISCUSSION POINT
Cultural and linguistic diversity
Argument 9
Deploy regionalized models trained on local laws and languages (e.g., AI for Bharat) (Matthew Prince)
EXPLANATION
Prince describes the technical approach of creating AI models that are tailored to specific jurisdictions, incorporating local legal frameworks and languages. This strategy supports the broader goal of culturally aware AI deployment.
EVIDENCE
He explains that models should be regionalized, trained on local laws, languages, and cultures, citing the AI for Bharat initiative which supports 22 official Indian languages and is available for students to experiment with [136-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince describes the creation of regionalized models such as AI for Bharat, trained on local laws, languages and cultures, supporting a localized AI approach [S7][S8].
MAJOR DISCUSSION POINT
Regionalized AI models
Argument 10
AI technology should be affordable for the poorest, not limited to costly subscriptions (Matthew Prince)
EXPLANATION
Prince argues that AI must be accessible to the most disadvantaged populations, rejecting a model where only wealthy users can afford the latest, unfiltered AI services. Affordability is presented as a prerequisite for equitable impact.
EVIDENCE
He states that AI should be available to the poorest in the Global South and must not require expensive subscriptions, calling for business models that broaden access beyond those who can spend thousands of dollars per month [40-42].
MAJOR DISCUSSION POINT
Universal affordability
Argument 11
Design efficient, secure systems that lower entry costs for new AI participants (Matthew Prince)
EXPLANATION
Prince highlights the need for AI infrastructure that is both secure by design and cost‑effective, ensuring that new entrants do not need massive capital expenditures. Efficiency and security are framed as essential to democratizing AI.
EVIDENCE
He notes that Cloudflare is focusing on secure-by-design solutions, improving system efficiency to reduce costs, and avoiding the need for “trillions of dollars” or massive infrastructure to become an AI company, thereby making participation affordable [141-147].
MAJOR DISCUSSION POINT
Cost‑effective, secure AI infrastructure
Agreements
Agreement Points
AI must be owned by many, not a handful of firms
Speakers: Matthew Prince
AI must be owned by many, not a handful of firms (Matthew Prince)
Prince repeatedly stresses that AI should be distributed across 500,000 companies worldwide rather than controlled by five firms, likening this to the spread of the printing press and calling for democratization of the technology [24-26][44-48]
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for a pluralistic AI ownership model echo broader policy discussions on preventing market concentration and ensuring digital sovereignty, as highlighted in Cloudflare’s keynote on making AI available to the broadest set of users [S23] and in analyses stressing the need to bridge the digital divide so AI does not remain in the hands of a few firms or countries [S24], [S35], [S38].
AI should compensate journalists, researchers, and other creators rather than merely regurgitate their work
Speakers: Matthew Prince
AI should compensate journalists, researchers, and other creators rather than merely regurgitate their work (Matthew Prince)
He argues that current AI practices appropriate content without rewarding the original creators and calls for business models that ensure fair compensation for journalists, academics and researchers [28-30][96-102][103-112]
POLICY CONTEXT (KNOWLEDGE BASE)
The need for fair remuneration of content creators is reflected in recent initiatives such as Cloudflare’s acquisition of Human Native to build a licensing marketplace for creators and AI developers [S30], as well as scholarly commentary on revising economic models and copyright frameworks to value human creativity [S31], and concerns about platforms profiting from journalistic content without compensation [S32].
AI should help small enterprises compete, preserving personal relationships in commerce
Speakers: Matthew Prince
AI should help small enterprises compete, preserving personal relationships in commerce (Matthew Prince)
Prince warns that AI-driven agentic commerce could erode the personal, relationship-based interactions of small businesses, especially in the Global South, and calls for tools that enable these enterprises to survive and thrive [33-35][31-34][40-42][120-122]
POLICY CONTEXT (KNOWLEDGE BASE)
Policy and industry forums have emphasized AI as a tool for SMEs to stay competitive while retaining personal customer interactions, noting the challenges small firms face and the potential of AI-driven market research and support services [S39], and broader calls for AI adoption among SMEs to maintain relevance in the digital economy [S40].
AI must respect and highlight regional cultures and languages, avoiding a single, homogenized worldview
Speakers: Matthew Prince
AI must respect and highlight regional cultures and languages, avoiding a single, homogenized worldview (Matthew Prince)
He emphasizes that AI should not homogenize cultures or “Americanize” the world, but instead honor local languages, laws and cultural identities, citing regionalized models such as AI for Bharat [35-38][136-138]
POLICY CONTEXT (KNOWLEDGE BASE)
Leadership at the Cloudflare keynote warned against cultural homogenization and advocated for AI that celebrates regional differences [S33]; similarly, multistakeholder discussions stress preserving multilingual societies and decolonizing AI to reflect diverse epistemologies [S34].
AI technology should be affordable for the poorest, not limited to costly subscriptions
Speakers: Matthew Prince
AI technology should be affordable for the poorest, not limited to costly subscriptions (Matthew Prince)
Prince states that AI must be accessible to the poorest in the Global South and should not require expensive subscriptions, urging business models that broaden access and reduce entry costs [40-42][145-147]
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources call for affordable AI access, including Cloudflare’s business-model proposal for broad, low-cost availability [S23], development-focused recommendations to ensure AI reaches the global south and low-income groups through tiered or subsidized pricing [S24], [S25], and design principles that make technologies economically viable for the poorest [S26].
Similar Viewpoints
Prince consistently advocates for a highly decentralized AI ecosystem in which thousands of entities can build, run and tailor AI models, supported by Cloudflare’s global network and regionalized deployments, to avoid concentration of power [24-26][44-48][56-60][62-65][116-119][136-138]
Speakers: Matthew Prince
AI must be owned by many, not a handful of firms (Matthew Prince) Enable anyone worldwide to create AI companies, preventing consolidation (Matthew Prince) Cloudflare’s infrastructure makes AI models run locally and supports 500,000 firms (Matthew Prince) We need to stop the consolidation of AI and, again, lead to 500,000 companies, not just five (Matthew Prince) Deploy regionalized models trained on local laws and languages (e.g., AI for Bharat) (Matthew Prince)
Both arguments call for a shift from traffic‑based monetisation to a knowledge‑based reward model that pays creators for filling gaps in human understanding rather than for click‑bait content [28-30][96-102][103-112]
Speakers: Matthew Prince
AI should compensate journalists, researchers, and other creators rather than merely regurgitate their work (Matthew Prince)
Propose a reward system that values knowledge creation over traffic volume (Matthew Prince)
Prince links cultural‑linguistic diversity with technical implementation, insisting that AI systems be regionalized and trained on local laws, languages and cultures to preserve diversity [35-38][136-138]
Speakers: Matthew Prince
AI must respect and highlight regional cultures and languages, avoiding a single, homogenized worldview (Matthew Prince)
Deploy regionalized models trained on local laws and languages (e.g., AI for Bharat) (Matthew Prince)
Unexpected Consensus
Recognition that a non‑AI company (Cloudflare) can play a pivotal broker role in the AI ecosystem
Speakers: Matthew Prince
Cloudflare’s infrastructure makes AI models run locally and supports 500,000 firms (Matthew Prince)
We are not an AI company… but over 80% of the leading AI companies use us (Matthew Prince)
Despite Cloudflare not being an AI developer, Prince positions the firm as essential infrastructure for AI deployment, a stance that might be unexpected for a traditional internet-infrastructure provider [56-60][62-65]
POLICY CONTEXT (KNOWLEDGE BASE)
Cloudflare’s positioning as an intermediary infrastructure layer for AI, highlighted by its CEO’s remarks on AI reshaping the internet business model [S28] and by executives describing the company’s role between customers and users [S29], demonstrates how non-AI firms can act as brokers. This view is reinforced by its acquisition of Human Native to facilitate fair AI content licensing [S30] and by calls for standards-based, non-proprietary infrastructure not owned by a single entity [S37].
Overall Assessment

Matthew Prince presents a coherent set of five overarching goals (decentralization, creator compensation, support for small businesses, cultural‑linguistic diversity, and universal affordability), backed by concrete Cloudflare initiatives. All points are internally consistent and reinforce each other.

High internal consensus (single speaker aligning multiple arguments). No cross‑speaker agreement is observable because no other participant contributed substantive statements, limiting broader multistakeholder consensus.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only an introductory remark by Speaker 1 and a single substantive contribution by Matthew Prince. No opposing viewpoints or contrasting positions are presented, so there are no identifiable disagreements, partial agreements, or unexpected conflicts.

Minimal: the discussion is essentially a monologue by Matthew Prince, implying consensus or lack of debate on the topics addressed.

Takeaways
Key takeaways
AI should be a distributed, decentralized ecosystem rather than controlled by a handful of firms.
Creators (journalists, researchers, academics) must be fairly compensated for the content that powers AI models.
New business models are needed because traditional traffic‑based monetization is eroding with AI‑driven content delivery.
Small businesses and enterprises in the Global South must be equipped with tools, credits, and education to compete in an AI‑driven economy.
AI systems must preserve and promote cultural, linguistic, and regional diversity instead of homogenizing content.
Universal, affordable access to AI is essential; the technology should not be limited to expensive subscriptions or massive infrastructure investments.
Security‑by‑design and efficiency are critical to avoid repeating the early Internet’s security shortcomings.
Resolutions and action items
Cloudflare will deploy leading AI models on its global network, enabling local (city‑level) execution.
Launch and expand regionalized models (e.g., AI for Bharat) trained on local laws, languages, and cultures.
Provide free credits and support through Cloudflare’s startup accelerator, especially for Indian and other emerging‑market startups.
Fund education initiatives (e.g., IIT build‑a‑thon) to lower the technical barrier for developers and students.
Design more efficient, secure AI infrastructure to reduce entry costs for new AI companies.
Call on all stakeholders (businesses, governments, civil society) to adopt the five outlined values as a guiding framework.
Unresolved issues
Specific mechanisms for compensating content creators and measuring the value of knowledge versus traffic remain undefined.
How to create and enforce a sustainable, globally‑applicable business model that replaces the traditional ad/traffic revenue stream.
Concrete policy proposals or regulatory frameworks to prevent AI market consolidation were not detailed.
Methods for ensuring that AI remains affordable for the poorest populations without a clear funding model were not resolved.
The process for monitoring and verifying that AI respects cultural and linguistic diversity across all regions was not fully addressed.
Suggested compromises
Acknowledgement that the current traffic‑based internet model is fading, paired with a proposal to transition toward a knowledge‑reward system that aligns with AI companies’ interests.
Balancing decentralization with security by designing AI services that are both widely accessible and built with security‑by‑design principles.
Providing free credits and educational support as an interim measure while longer‑term affordable pricing models are developed.
Thought Provoking Comments
Much like the printing press, this should not be a technology which is controlled by five companies. It should be 500,000 companies, spread around the world.
He draws a historical parallel to the printing press to argue for massive decentralization of AI, framing the debate around ownership and control rather than just technical capability.
Sets the overarching theme of the talk, steering the audience toward thinking about policy and market structures. It introduces the first of his five‑point framework and prompts listeners to consider anti‑monopoly measures.
Speaker: Matthew Prince
AI takes but it does not give back – we need to make sure content creators, journalists, academics, researchers are compensated for the hard work that they do, rather than having that content regurgitated by AI systems.
Highlights a concrete economic injustice that AI introduces, moving the conversation from abstract benefits to the livelihoods of knowledge workers.
Shifts the tone from visionary optimism to a critical examination of current AI practices, opening space for discussion about licensing, royalties, and new business models.
Speaker: Matthew Prince
We must ensure that small businesses, especially those in the global South, have the tools to survive as the world moves to more agentic commerce; otherwise AI will become a consolidator that crushes personal, convenience‑based relationships.
Brings in the geopolitical dimension of AI adoption, emphasizing equity and the risk of widening the digital divide.
Introduces a new topic—support for the global South—and challenges the audience to think about inclusive infrastructure, influencing later mentions of regionalized models and local language support.
Speaker: Matthew Prince
Unique cultures and languages shouldn’t be homogenized by AI. AI must respect and emphasize regional identities rather than Americanizing the world.
Raises cultural preservation as a core design principle for AI, expanding the conversation beyond economics to sociocultural impact.
Leads to the discussion of localized model training (e.g., AI for Bharat) and reinforces the need for diverse, region‑specific AI ecosystems.
Speaker: Matthew Prince
Human eyeball traffic, the current currency of the internet, is disappearing; AI will become the primary way people get answers, so we need a new business model that rewards creators for advancing knowledge, not just generating clicks.
Identifies a fundamental shift in how value is measured online and proposes a knowledge‑gap‑filling reward system, using the “Swiss cheese” analogy to illustrate the concept.
Creates a turning point in the narrative, moving from critique of existing models to a constructive proposal for a future incentive structure, influencing the later call for “rewarding creators for filling holes in human knowledge.”
Speaker: Matthew Prince
We need to stop the consolidation of AI and aim for 500,000 companies, not just five, by giving anyone anywhere the tools, knowledge, and ability to compete in this space.
Reiterates the decentralization thesis with a concrete numeric goal, framing the issue as a race against centralization seen in telecoms, social networks, and hyperscalers.
Reinforces the opening point, solidifying the anti‑consolidation narrative and setting the stage for Cloudflare’s own initiatives (accelerator, free credits) as examples of how to achieve it.
Speaker: Matthew Prince
We are making top AI models available across our global network so they can be run in the city where you live, and we’ve launched ‘AI for Bharat’ in 22 Indian languages, plus a startup accelerator and free credits for Indian founders.
Provides concrete actions that embody the earlier principles, turning abstract values into tangible programs.
Demonstrates that the discussion is not purely theoretical; it grounds the conversation in real‑world implementations, encouraging the audience to envision replicable models elsewhere.
Speaker: Matthew Prince
Overall Assessment

Matthew Prince’s remarks structured the entire session around a five‑point framework that linked historical precedent, economic fairness, cultural diversity, and global accessibility. Each of his pivotal comments introduced a new dimension (decentralization, creator compensation, support for the Global South, cultural preservation, and a re‑imagined internet business model) that shifted the conversation from a generic AI hype narrative to a nuanced debate about equity, sustainability, and concrete policy. By repeatedly circling back to actionable initiatives at Cloudflare, he transformed abstract concerns into demonstrable solutions, thereby shaping the discussion’s direction, depth, and tone.

Follow-up Questions
What will the new business model of the Internet look like in an AI‑driven world?
Understanding a sustainable economic model is crucial because traditional traffic‑based revenue is collapsing as AI delivers answers directly, threatening creators, journalists, and small businesses.
Speaker: Matthew Prince
How can we stay ahead of the rapid changes caused by AI scraping and the shift away from human‑eyeball traffic?
Proactive strategies are needed to protect ecosystems that rely on web traffic and to anticipate future disruptions.
Speaker: Matthew Prince
What mechanisms can be created to fairly compensate content creators, journalists, and researchers whose work is used by AI systems?
Ensuring creators receive value for their contributions is essential for maintaining high‑quality content and incentivizing continued creation.
Speaker: Matthew Prince
How can we ensure that small businesses, especially in the Global South, have the tools and support to thrive in an increasingly agentic‑commerce environment?
Without targeted assistance, these businesses risk being left behind as AI intermediaries replace personal relationships and convenience‑based interactions.
Speaker: Matthew Prince
What policies or technical approaches can prevent AI from homogenizing cultures, languages, and identities, and instead promote cultural diversity?
Preserving regional uniqueness is vital to avoid a monolithic, American‑centric AI output and to respect local heritage.
Speaker: Matthew Prince
How can AI technology be democratized to enable 500,000 companies worldwide rather than a handful of dominant players?
Broad distribution reduces the risk of concentration of power, fosters competition, and aligns with the historical diffusion of transformative technologies like the printing press.
Speaker: Matthew Prince
What design principles and cost‑reduction strategies are needed to make AI systems secure by design and affordable for organizations without trillion‑dollar budgets?
Security and affordability are prerequisites for widespread adoption and for preventing the creation of a barrier to entry that only massive hyperscalers can overcome.
Speaker: Matthew Prince
How can we develop reward systems that incentivize the creation of knowledge‑advancing content rather than sensational or rage‑bait material?
Aligning AI incentives with human knowledge growth will improve the quality of information fed into models and benefit both creators and AI developers.
Speaker: Matthew Prince
What technical frameworks are required to regionalize AI models so they respect local laws, languages, and cultural norms?
Localized models ensure compliance, relevance, and trust in diverse markets, especially in multilingual regions like India.
Speaker: Matthew Prince
What business models can make the most advanced, unbiased AI technology accessible to the poorest populations in the Global South?
Equitable access prevents a digital divide where only affluent users benefit from cutting‑edge AI capabilities.
Speaker: Matthew Prince
What metrics and data collection methods should be used to monitor AI’s impact on web traffic, creator revenue, and content consumption patterns?
Reliable data is needed to inform policy decisions and to design compensation mechanisms for affected stakeholders.
Speaker: Matthew Prince
What educational programs, accelerator initiatives, and funding mechanisms are most effective for nurturing AI talent and startups globally, especially in emerging markets?
Building capacity in diverse regions ensures a vibrant, competitive AI ecosystem and supports the broader goal of democratization.
Speaker: Matthew Prince

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.