Skilling and Education in AI

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel discussed AI’s potential to improve productivity and inclusion in India’s agriculture, small businesses, education, and health sectors [1-4][13-16]. Agriculture employs the most people yet loses 40-50% of crops to pests; even a small loss reduction could raise farmer incomes and drive AI uptake [5-7]. AI can also enable solo entrepreneurs to conduct market research without staff, while education and health offer further high-impact uses [9-12][14-16]. The main barrier is a trust gap: users distrust black-box decisions, data handling, and possible misuse, necessitating a dedicated trust infrastructure [22-25][26-34][35-41][42]. AI may exacerbate inequality because models inherit past biases, and gaps can arise from geography, tool access, concentration of model providers in the US and China, and AI’s resource needs [44-52][53-58]. NSDC outlined four AI initiatives: shaping career paths, scaling AI training programs, improving training, assessment, and counselling, and using AI to monitor large-scale outcomes [82-89]. Actions include AI-driven career-counselling tools, nano-credential courses for jobs such as beauticians and tailors, and pilots that use AI to assess hands-on skills such as welding [91-94][136-141][113-119]. NCBT’s three-layer AI skilling framework introduces stackable nano-credentials feeding into the National Credit Framework, with ethics and values embedded in the curriculum [130-133][167-174][175-177][242-244]. Rakesh Kaul urged moving from digital literacy to “work literacy” through bite-size, multimodal content, preparing workers for physical AI agents, and securing affordable compute [199-208][209-210][245]. A technology executive highlighted building AI infrastructure in India (a Vizag data centre and a subsea cable to the US) and end-to-end solutions for agriculture, health, and education that close the learning-to-work loop [218-221][222-233].
In rapid-fire answers, the panel agreed that strengthening trust infrastructure, providing a universal AI assistant, embedding ethics in AI education, and ensuring affordable compute are essential steps for India to make AI an equalizer by 2030 [238][239-241][242-244][245-248][246-248].


Keypoints


Major discussion points


AI’s transformative potential and the need for a “trust infrastructure.”


The opening speaker highlighted AI’s ability to raise agricultural productivity (e.g., reducing pest-related losses) and to empower small businesses, while stressing that adoption hinges on users’ trust in the technology and understanding of the “black box” ([1-6][9-12][22-25][26-34][35-41]).


Building a skilled workforce through coordinated skilling, certification and micro-credential frameworks.


NSDC outlined four focus areas: AI-informed career guidance, AI-enabled scaling programs, AI-driven training/assessment, and outcome monitoring ([76-88]). Neena Pahuja described a three-layer AI skilling framework, the creation of nano-/micro-credentials, and their integration into the National Credit Framework to certify learners ([130-138][167-176]).


Risks of widening inequality if AI is not deployed inclusively.


The panel warned that AI can mirror historical data biases, create geographic and access disparities, concentrate power in a few model-producing nations, and consume significant environmental resources, all of which could deepen existing inequities ([42-58]).


Infrastructure and compute as prerequisites for widespread AI adoption.


One speaker detailed efforts to build domestic AI compute capacity (a data centre in Vizag, subsea cable links, and a push for affordable, locally hosted compute) to ensure India can deliver AI services (e.g., universal AI assistants) without reliance on foreign infrastructure ([215-222][218-221][245-248]).


Decisive actions for India by 2030.


In rapid-fire responses, panelists converged on four priority actions: strengthen trust mechanisms and transparency ([238]), guarantee universal access to an AI assistant ([239-241]), embed ethics and values in AI curricula ([242-244]), and secure affordable compute resources ([245-248]).


Overall purpose / goal of the discussion


The panel aimed to chart a purposeful, inclusive AI strategy for India: leveraging AI’s economic upside in sectors such as agriculture, small-business, education and health, while simultaneously addressing trust, skill gaps, infrastructure needs, and inequality so that AI becomes an equalising force rather than a source of new divides.


Overall tone


The conversation began with an optimistic and visionary tone, emphasizing AI’s promise for productivity and entrepreneurship. Mid-discussion, the tone shifted to cautious and problem-focused, highlighting trust deficits, data-privacy worries, and systemic inequities. By the closing rapid-fire segment, the tone became constructive and forward-looking, offering concrete, actionable recommendations and a collective sense of urgency to act before the next wave of AI deployment widens gaps. Throughout, the dialogue remained collaborative and solution-oriented.


Speakers

Speaker 1


Role/Title: Professor (unnamed institution)


Area of Expertise: AI policy, trust infrastructure, AI applications in agriculture, small-business, education, and health care; implications of AI for inequality and sustainability


Speaker 2


Role/Title: CEO of NSDC (National Skill Development Corporation)


Area of Expertise: Workforce skilling, AI-enabled career counselling, AI scaling programmes, AI-driven assessment and outcomes monitoring


Neena Pahuja


Role/Title: Former Executive Member, NCBT (National Council for Vocational Training)


Area of Expertise: AI skilling frameworks, certification standards, stackable micro-/nano-credentials, AI integration in vocational curricula [S5]


Rakesh Kaul


Role/Title: (not specified in transcript or sources)


Area of Expertise: Digital-to-work literacy, AI adoption in low-friction learning, workforce transition to physical AI and autonomous systems


Speaker 3


Role/Title: (not specified in transcript or sources)


Area of Expertise: AI infrastructure and compute (data centre in Vizag, subsea cable), end-to-end AI stack for agriculture, health, education, and workforce upskilling [S6][S7][S8]


Moderator


Role/Title: Session Moderator


Area of Expertise: Facilitation of panel discussion on AI policy and skilling [S12]


Additional speakers:


(None – all participants are covered by the speaker list above)


Full session report: Comprehensive analysis and detailed insights

Professor (Speaker 1) – Opening remarks


In response to the moderator’s opening question about AI’s role in India’s growth [71-73], the professor stated that artificial intelligence should be deployed where it can move the economic needle, beginning with agriculture – the sector that employs the most people in India yet suffers from the lowest productivity. By helping smallholder farmers identify pests and receive locally sourced, language-specific remedies, AI could cut the typical 40-50% crop loss to 20-30%, delivering a 10-20% income boost that would make adoption inevitable for farmers themselves [1-7]. The professor then linked this agricultural promise to the broader potential for AI to enable “one-person shops” that replace many traditional staff functions such as market research and analysis [9-12], and noted that education, skill-building and health are the next high-impact domains [13-16].


NSDC – Arunji (Speaker 2) – AI-driven skilling agenda


When asked how AI can support India’s expanding workforce, Arunji outlined a four-pronged AI agenda: (i) AI-informed career guidance to map how jobs will evolve; (ii) scaling programmes that embed AI modules across sectors; (iii) AI-enhanced training, assessment and counselling tools; and (iv) AI-driven monitoring of large-scale outcomes [76-89]. Concrete examples included AI-powered career-counselling platforms for students, sector-specific AI skilling tracks for engineers and non-technical workers, and pilots that use AI to evaluate hands-on skills such as welding [91-98][99-112][113-119].


NCBT – Neena Pahuja – Ethical, layered skilling framework


Addressing the moderator’s query on certification standards in a fast-changing AI landscape [124-126], Neena Pahuja presented a three-layer AI skilling framework that moves from “AI for all” to specialised pathways for “few” and “many”. The framework introduces stackable nano- and micro-credentials – for instance, a virtual-try-on tool for tailors, AI-assisted diagnostics for plumbers, and design-optimisation modules for carpenters – which can be accumulated into larger credit packages under the National Credit Framework [130-138][161-164][167-176]. Neena Pahuja emphasized that ethics and values should be embedded in every AI course, framing this as a core requirement for responsible AI deployment [242-244].


Rakesh Kaul – Work literacy and bite-size learning


Responding to the moderator’s question about the shift from digital to “work literacy” [173-175], Kaul argued that India must move beyond basic digital literacy to bite-size, multimodal learning that can be consumed anytime, anywhere. He stressed that content should be delivered in 1-2-minute formats to match contemporary consumption habits, and that such frictionless learning must be linked to the AI-assistant platforms being built by NSDC [190-208]. In the rapid-fire segment, Kaul highlighted the need for affordable, domestically-sourced compute [245].


Industry Representative – Full-stack AI ecosystem


Answering the moderator’s request for an ecosystem view [213-215], the industry representative described a full-stack AI strategy for India. The plan begins with secure, resilient infrastructure – exemplified by the new AI data centre in Vizag and a subsea cable that will connect it directly to the United States, reducing reliance on foreign compute [215-221][218-221]. Building on this foundation, the vision is to deliver end-to-end AI applications that close the loop from seed-to-market for farmers (weather, market prices, finance), from learning to employment in education, and from diagnostics to treatment in health [222-233][230-231].


Rapid-fire decisive actions for 2030


When asked to name a single decisive action for 2030, the panel offered five converging but distinct priorities: the professor called for an improved AI trust infrastructure that demystifies the black box [238-242]; the NSDC chief advocated universal access to an AI assistant for every citizen [239-241]; Neena Pahuja urged the inclusion of ethics and values in all AI curricula [242-244]; Kaul highlighted the need for affordable compute [245]; and the industry representative added that creating economic models to fund a compute-focused “flywheel” is essential [246-248]. These answers encapsulated the broader consensus that trust, ethics, compute and a human-centred approach are all indispensable [249-250].


Panel emphasis on different priority areas


Across the discussion, panelists emphasized different priority areas rather than expressing outright disagreement. The professor foregrounded the trust gap [23-34]; NSDC highlighted coordinated skilling and outcome-monitoring [76-89]; the industry speaker pointed to compute-infrastructure deficits [215-221]; and Kaul focused on work-literacy and bite-size content despite existing connectivity [190-208].


Conclusion – Policy implications


Overall, the dialogue underscores the need for coordinated policy that simultaneously builds an AI trust infrastructure, expands inclusive AI-enabled skilling (through stackable credentials and AI-driven career counselling), ensures affordable domestic compute, and embeds ethical standards throughout the ecosystem to harness AI as an equaliser for India by 2030 [42-45][60-66][237-241][246-248].


Session transcript: Complete transcript of the session
Speaker 1

In two significant areas. One is agriculture, which is the highest employer, the biggest employer anywhere. It’s also one of the least productive sectors that we have anywhere. And that productivity gap in agriculture, if we can narrow it even by a small percentage, you will move the needle by a significant amount. And AI can do that. Just think: much of agricultural output in the global south comes from smallholder farmers who lose 40% to 50% of their crop because of pests. Now, if a farmer can identify what the pest is and use a homemade remedy that is given to them in their own language and using local ingredients, if I can move that 40% down to 30% or 20%, suddenly there’s a huge swing in the farmer’s income.

So there is no question, from a human perspective: if my income is going to go up, if my crop loss is going to go down, by 10 or 20%, you know, I will adopt it. So that’s the first thing, which is purpose. Now, in addition to agriculture: small businesses. I don’t really need a whole bunch of employees if I can essentially harness AI to do market research, to do analysis, and almost be an employee. And I can be a one-person shop and really build a business. Now, beyond that, there are several other areas of application, where, you know, we’ve done the analysis to see where some of the biggest opportunities are.

So there’s agriculture, small business. After that comes education and skill building. Another very powerful use of AI. And a fourth area is health care. Now, for each of these areas, there is an element of a major chasm that the humans need to cross. And that chasm doesn’t have to do with technology. It doesn’t have to do with how big the pipe is. It doesn’t have to do with whether I have access to, you know, any of the devices. It doesn’t have to do with the various elements of the digital public infrastructure. In fact, India is one of the shining examples. of the distribution system, the rails having been laid. But the key chasm, the big jump that we need to make is across a trust gap, which is in addition to digital infrastructure, in addition to other forms of infrastructure that includes talent and data and compute, there is a trust infrastructure that needs to be built.

Because from a human perspective, I will use a piece of technology if I can trust it. Now, there are many reasons why people are, on the one hand, very excited about AI, as is very evident over here, and at the same time, there is a lingering concern. There is a lingering concern because I don’t quite understand what’s inside that black box. I don’t quite understand how the hiring algorithms work. Why did I get rejected from this job? Why did I get that diagnosis? From a healthcare system. What is the language system telling me? Is something being lost in translation? Can I trust an image that has just been sent to me on social media? So the issues of trust are a very important set of questions.

And then the data that I’m submitting into the system, simply by interacting with AI, I’m submitting data and providing input. I’m actually acting as labor for the AI industry. What’s happening to the data? Who’s using it? Where does it go? Can it be used against me? Is it going to be used in my favor? So the whole question of trust is going to be an enormously important part that we need to consider. So first, purpose. Second is creating a trust infrastructure. And the third is recognizing that no matter what we say, no matter what rhetoric we put on our screens, no matter how many alliterative slogans we have in our meetings, AI is going to be a force for inequality.

There are many reasons why AI is going to create an unequal playing field, not the least of which being the fact that the algorithms are feeding on data. Data is simply a reflection of the past, and as we know, the past is not a terribly equal place. So that algorithm is going to essentially act as a mirror to our past, and maybe part of the risk is that the inequalities of the past get reinforced into the future. There are inequalities in terms of who has access to better tools. Now, even with open source and people being able to, you know, vibe code themselves, there’s an element of democratization, but there could be very different levels of access across a society.

So the usage context itself could be unequal. There could be inequalities when you go into different parts of the world, when you go to different parts of the country. So geographically, there is likely to be inequality. There’s inequality in terms of who’s providing you AI. So today, much of the frontier AI models are coming from two places, United States and China. And much of China’s AI infrastructure is built on top of a foundation from the United States. Much of the foundation of the United States, the leaders of the companies that are producing it, they’re all over here, really small. So it’s a tiny industry that’s providing us the foundation from which we are building the rest of the system.

And then one last really major source of potential inequality has to do with the resources that AI is absorbing: primarily energy, water, space, and even, kind of, our environmental resources, enormously important. Now, none of this means that we should stop the train. But we need to understand the human impact that AI is going to have, both positive and negative, as we move forward, and put the relevant policy systems in place, the relevant trust-building systems in place. Otherwise, we might be wasting not only the demographic dividend that India has got, but a trust dividend that India has got. One critical and really important aspect of an ecosystem like India is that it’s a very trusting society, very trusting in terms of digital.

It’s a very trusting infrastructure. Trust levels in India are in the 70% range, whereas in the United States they are in the 25% to 30% range. That’s a huge platform to build on. And it’s going to be really important for us to follow through with that trust that users, our potential consumers, are giving us, and for the policy and the technology sector to make sure that that trust dividend is not wasted. So with that… I’m going to sit down, and I look forward to learning from my colleagues on the panel about how we make AI more purposeful and not just powerful. Thank you.

Moderator

Thank you, Professor, for the insightful remarks. Very exciting, and at the same time, you know, you raise some concerns around inequality. Let me first go to Arunji. You know, as we said, like the demographic dividend that India holds, we will be adding a million plus to workforce every year. How do we make sure that they’re skilled, they’re ready for what the market is asking, the skills are continuously shifting, as CEO of NSDC with the mandate of skilling the population? How do you look at this? How is, you know, do you see AI as a threat, as an enabler? How are you approaching this?

Speaker 2

So, good morning. AI is an opportunity and an enabler. So let me begin with a few words about NSDC itself. This is a national platform institution under the Ministry of Skill Development. And we work through two arms: 36 sector skill councils and close to around 400 training partners. These are the two arms through which we have been working in the skilling space for the last two decades. With AI coming in, of course, it’s an opportunity, as I say. But primarily in four areas we have started work. One is, of course, AI and how career trajectories are getting shaped. So we require some kind of guidance, direction, et cetera. So, work on that front. Second is creating skilling programs for AI, AI skilling programs.

Third is how AI itself affects the entire value chain of skilling when it comes to, say, training, assessments, counselling, and the other areas. And the last is, since we do large-scale program management, how do we use AI to evaluate or monitor outcomes? These are the four primary areas we are working on. I will just speak about each of these areas in brief. The first one, setting the agenda or setting the direction with respect to careers: NSDC and the sector skill councils, specifically the IT sector skill council, have brought out certain reports on how jobs get shaped by AI, the new jobs and how the existing jobs get changed, etc. Within that is career counselling.

Once you know that this is the way a certain job would get transformed or a new job would come, a lot of career counselling is required for students. So how do we create AI-enabled career counselling tools, models, etc.? So that’s one area of big work. Coming to AI skilling programs, clearly there are three tiers. The first is, of course, where we talk about AI-for-all skilling, which is more like AI awareness and AI usage. So we have this Skilling for SOAR program, under which we work with schools, etc. The second is where we talk about how skilling affects practitioners or people in the workplace. And this is where our sector skill councils are busy putting together how we make the current programs, how we bring AI modules into them.

Of course, to begin with, how AI affects job roles, and then translating that into what the new programs would look like. The third area is AI for engineers, where we skill engineers, and this is where we work with engineering colleges. We have something called the Future Skills Centers, and we work right now with close to around 10,000 students and around 50,000 students. Close to around 22 companies work with us, including Microsoft, Google, Amazon, Schneider, and Siemens, et cetera, and we create these kinds of skilling centers within engineering colleges. The good thing about it is that this is part of the credit-based system. So students can pick up a course every year, every semester, over the four years they are doing engineering, and you string the courses together and then you have a kind of program for, say, an AI architect or something like that. So we look at the entire skilling program. The third is, as I said, AI is changing the way we skill, the way we train, the way we assess.

Early days, again: pilots on how do you use AI as a training assistant to our trainers. So what do we do, how do we work with that? Similarly, assessment is a big area. Please see, much of our training involves vocational training, which means hands-on training. So we use AI for hands-on training. Hands-on training requires a lot of piloting, etc., so towards that we are working. Can we say, for example, just giving an example, what’s a good weld or what’s a bad weld? If the AI is trained on that, then it can help the current assessor in actually providing a better assessment, and also augment the number of assessors we are currently having. The last piece is we have our skill platform for large-scale program management, today called SID, and we are now bringing elements of AI into it so that we can monitor outcomes better. A big challenge in a country like India is monitoring outcomes, so we are also looking at how to use AI in that area.

Moderator

Very interesting and exciting to see what you have brought to the table. Neenaji, if I can move to you: Arunji spoke about the skilling programs, but certification standards are very key. How do you do that in an environment where, you know, the courses are becoming outdated in months and the requirements are shifting? From your vantage point, with a lot of content being created and a lot of initiatives all around, how do we define a qualified professional in AI? Is there a plan for certification or standard setting? How should we think about it?

Neena Pahuja

Thank you so much. Thank you for inviting me. I’m a former executive member of NCBT. One minute about NCBT: NCBT is a regulatory body under the Ministry of Skill Development. So something on AI, since we’re sitting in an AI conference. Around two and a half years back we came up with a skilling framework for AI. And the framework actually talks about three layers of skilling for AI: it talks about skilling for all, skilling for many, and skilling for few. On the skilling-for-all initiative, we started working as part of the SOAR initiative that was mentioned by Arun also. Now what does that mean? Like all of us know how to use payment gateways or UPI payments, etc.

Can we actually use AI in a similar way? So our thought was, can we take AI to everyone, every nook and corner: a radiowala or a plumber or somebody who is a beautician, etc. So what did we do on that? And I’m going to take a minute before I come to the certification question. We actually have tried creating a small nano-credential for something which a beautician can use, and how she can use AI to give a better service to her customers. We’ve actually created a virtual try-on for a tailor: how can a tailor use the virtual try-on concept to tell a person which design or what kind of colour suits them.

We actually created basic courses, of course, on AI, which also have been done and they were launched sometime in July. We’ve got around two lakhs plus people who registered on those courses. But idea was, how can we take it to everyone? So how can simple things like, how can a plumber find out if there’s a fault in the pipe? So can AI be used there? So one of the points which I think Professor talked about was, how do we take it to masses? How can AI make an impact in our lives or everybody’s life, like internet is doing or anybody else is doing? So that’s what we’ve tried doing as part of some of the courses that we already are in a state to launch.

In fact, some of the courses have been launched. We have all been talking in this conference about AI going to replace coders, that it’s going to affect lots of jobs, etc. But still, how can you actually use AI to help? We’ve actually demonstrated how AI can help in coding also. How can it help me to learn coding? How can it help me to test a particular program? So AI doesn’t stop at just being AI and taking away jobs. I think we have to groom and possibly diffuse, I mean, the word which has been said, the concept of AI which is happening. Now let’s look at certification in the courses.

Very wonderful question he asked. Things are changing almost every day. I think the carpenter’s role is going to change. In fact, we have from the Furniture and Fittings Skill Council a small model which asks: how can I design a particular piece of furniture better if I have AI? Knowing the wood, the amount of wood, and the space for which I have to design the furniture, can the carpenter actually use AI to design the furniture better? That’s the way it’s going to make an impact. Now how can I embed this in a course which I’m teaching a carpenter? That’s the impact it’s going to make. And that’s what we’re trying to do. So a wonderful question from that point of view.

So what we’ve done is we came up with the concept of stackable micro-credentials or stackable nano-credentials. Which means, based on the changes which are happening, you could actually stack the small modules together and build the skill that is required. And in these skills, you could also have something like an employability skill, which could be design thinking or others. And there could also be an AI module which can be embedded, which actually tells you how AI can be embedded in a particular course. For example, our ITIs already have a small seven-and-a-half-hour module on the basic concepts of AI, which is now being taught to every ITI student.

Now what we want to see is how our lathe machines or other machines can be operated in a better way with AI. That’s what we are trying to do. Now, certification: the way we are approaching it is with small certificates which a person earns, which can actually lead to credit, and the total of that can also take you to a larger credit. That’s how it’s been planned as part of something called the National Credit Framework that we came up with from NCVT and the Ministry of Education. This was launched around two years back. So that’s how it’s been planned. I hope that answers your question.

Moderator

To Rakesh: historically, you know, whenever a general-purpose, huge technological change comes in, it ends up increasing the divide for some time, till people evolve and, you know, learn new skills and get over the curve. Now from India’s vantage point, we spoke about the starting point we have. In your view, what needs to be done differently? We heard of a few initiatives in motion to make sure that we cross this transition and manage it very carefully and aptly.

Rakesh Kaul

Thank you so much. It is a very, very pertinent question. In the internet era, India was at a disadvantage. It was very difficult when we had the internet coming in; India was not as uniquely placed as it is placed today. We have ubiquitous connectivity. We have low cost of connectivity. We have a huge amount of internet penetration. We have general-purpose applications used by a billion people, like UPI and others. So today our starting point is very, very different, not only for our users, but also for those who are making these applications.

I think the journey started about 10 years back, when India realized that we have to make applications for our own people and not just rely on the world to make applications for us and take them forward. So that’s where we are today. Hence, the opportunity for us is immense, given that we are a billion people with access to low-cost connectivity, reliable connectivity, already using these applications for our financial transactions and other use cases. And especially for all the programs that we just heard NSDC and others are today making. So I’ll just talk about three things that I thought I’ll bring to your notice. I think the point is that we should move from digital literacy to work literacy.

What that typically, the point that I’m wanting to make here is that, just a minute, it’s, my phone is misbehaving. So the idea here is: how can we remove friction to learning? And friction to learning, typically the point that I wanted to make is: how can it be anytime, anywhere, any media, any duration? And more and more, we are seeing that our population, our people, are used to one-minute, two-minute consumable content. Giving a one-hour lecture or one hour of content may be difficult now. People really need to consume it in two minutes; if they like it, they might go ahead with the two-hour content. So are we creating the right content for our users, or are we just trying to reshape the content we have, a lecture somewhere, and we give a video on a platform and say consume it?

So this whole content strategy has to work if the skill is to be really imparted to people in a consumable manner where, as I said, friction to usage is least. And obviously, if we dovetail the programs that sir was just talking about into those 600 labs that the digital AI mission is going to set up across India, then somebody who is interested, whom you hook with this small, meaningful content on Instagram, can really find it interesting and get to somewhere where they can really see the benefit of it. And we heard Nandan talking about it: we should lead from first principles. It is not only giving black boxes to people and saying work with it. If you really want India to progress, a lot of people should also understand what goes into this black box, so that they can start innovating around it. So I think that’s one: remove friction.

The second is, I think we are all talking about agents, and we are talking about physical AI. It is not going to be easy for any worker, including, trust me, for me, if tomorrow I am told that my secretary is an agent and not a physical person, although the productivity of that agent may be much better. Now imagine we are getting into a workforce where we are talking of lights-out factories. It’s a reality in China, where the factories are totally autonomous. How do you get your workforce to work with this physical AI? You are working, and beside you there is a robotic arm doing half the work. How do you work in such environments where a lot of work is being done by physical AI? It will take a lot of mindset shift. It will take a lot of role shifts. And it is not only telling them what AI is, but how their roles are changing and what is now expected of them. Once that is clear, then only will you be able to train them better. So therefore, how do we move towards this journey, and in our mind be very optimistic of the fact that it is here and it will impact us?

We have to be ready. There is enough and more for a country like ours to become the torchbearer for the world on how to engage a billion people in this endeavor. We will create data that will train apps, but for that we will have to take on the mantle of creating our own apps, purposefully made for India, made in our languages, made for our specific uses. Thank you.

Moderator

is going to add to the broader AI ecosystem that we need?

Speaker 3

So when I look at the work we have been doing across the board and across product areas, and speaking to some of the announcements we have made this week, what we are looking at is how to bring the full stack of AI into India, right from the foundational level. It starts with creating secure, resilient infrastructure: how do we bring the computational power India needs into India, rather than relying on compute in other markets? That is why we started with the build of the AI data center in Vizag. Adding to that, how do we ensure connectivity with the rest of the world? That is where the subsea cable investments come in, which will connect Vizag directly to the U.S., routed through the southern hemisphere.

Then, as you go up the stack, the question is how we build applications and solutions that actually deliver value to the last-mile citizen on the ground, whether in agriculture or health. And every time we look at it, we look at how to complete the circle. In the education space, in the work we have been doing with Charn Singh University, it is about bringing AI in not just at the skilling level but also at the learning level and the administration level, so we can deliver AI to students more effectively and efficiently.

And we are addressing every part of that chain of learning. How do you then connect it to the workforce, through professional certification? It is important that these loops start to close, because that is when you actually see impact. That is key. Similarly in agriculture and healthcare: in agriculture, you go from seed to market. How do you give a farmer the information to understand weather patterns, so he knows better when to sow and when to harvest, along with information on the market and on financial support? The whole aspect we are thinking through is how we help connect the dots, and how we provide our support and our technology to ISVs to create the solutions that connect them.

Because that last-mile connectivity is actually going to determine the success of this technology. Thank you.

Moderator

I think we're quite at time, but I want to take just a minute for a last question to all the panelists together. If you had to identify one decisive action India should take looking at 2030, something we can be proud of, what action should India take to make AI an equalizer? Let me start with you, Professor. Five-second responses, rapid fire.

Speaker 1

Five-second response: I think the one action we need to take is to improve the trust infrastructure and make sure that the human on the other side of the AI understands what's inside the black box, at least to the extent that it makes them comfortable accepting the output and the decision the black box is offering.

Speaker 2

I think in the next three years every Indian should have access to an AI assistant, whether a farmer, a student, or anyone else. We have the platforms, we have the DPIs in place, we have the language capabilities in place, and we have the SIDs, the skills platform, in place. It's time we put it all together so that every Indian has an assistant working with them.

Neena Pahuja

What I would love is for ethics and values to be part of every AI course taught, at least in India; we would possibly create a different kind of AI creator in the field. That would be my opinion. Thank you.

Rakesh Kaul

I believe it is access to affordable compute that will be important for India to succeed.

Speaker 3

I'm going back to my first point, on the flywheel. A lot of the investment is coming into the compute side; how do we also bring in investment and create economic models for the diffusion of AI? That is going to be important.

Moderator

Trust, ethics, compute, and the human at the core of everything. Thank you so much for the exciting discussion. Can I have a round of applause for the panelists and our moderator, please.

Related Resources
Knowledge base sources related to the discussion topics (25)
Factual Notes
Claims verified against the Diplo knowledge base (5)
Confirmed (high confidence)

“Agriculture employs the most people in India and suffers from the lowest productivity, making it the priority sector for AI deployment.”

The knowledge base notes that agriculture is the highest-priority area because it employs the largest workforce and has significant productivity gaps [S1].

Confirmed (high confidence)

“Ethics and values should be embedded in every AI course, framing this as a core requirement for responsible AI deployment.”

Multiple sources highlight ethics as a central pillar of responsible AI initiatives in India, including discussions on ethical imperatives and responsible AI leadership [S115] and [S117] and [S118].

Additional Context (medium confidence)

“AI‑assistant platforms are being built by NSDC to support bite‑size, multimodal learning and work‑literacy initiatives.”

The knowledge base mentions partnerships focused on skilling and AI-assistant platforms as part of broader AI-driven workforce development efforts [S111].

Additional Context (low confidence)

“AI can help small‑holder farmers identify pests and receive locally‑sourced, language‑specific remedies, potentially reducing typical 40‑50 % crop loss to 20‑30 % and boosting farmer incomes by 10‑20 %.”

While the knowledge base discusses AI applications in agriculture and the need for ecosystem coordination, it does not provide the specific loss-reduction or income-boost figures cited in the report, offering broader context on AI’s role in farming [S65] and [S107].

Additional Context (medium confidence)

“NSDC’s four‑pronged AI agenda includes AI‑informed career guidance, sector‑wide AI modules, AI‑enhanced training/assessment tools, and AI‑driven outcome monitoring.”

The knowledge base references NSDC’s involvement in AI-enabled skilling partnerships and monitoring initiatives, aligning with the described agenda though without the exact four-point breakdown [S111].

External Sources (120)
S1
Skilling and Education in AI — – Neena Pahuja- Rakesh Kaul – Speaker 1- Neena Pahuja – Speaker 2- Neena Pahuja
S2
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S3
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S4
S5
Skilling and Education in AI — – Neena Pahuja- Rakesh Kaul – Speaker 1- Rakesh Kaul
S6
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S7
S8
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2- Speaker 3 – Speaker 1- Speaker 3- Moderator
S9
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S10
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S11
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S12
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S13
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S14
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S15
Sustainable development — AI-powered tools like remote sensing, drones, and predictive analytics can enhance precision agriculture practices. They…
S16
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — ## Focus on Global South and Smallholder Farmers Alina Ustinova: Hello, everyone. My name is Alina. I represent the Cen…
S17
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — – Speaker 4- Ashish Gupta AI tools empower individuals to perform tasks that previously required teams or specialized s…
S18
One-Person Enterprise — Dan Murphy introduces the concept that technology, particularly AI, has evolved to allow businesses to scale without rel…
S19
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — And if they don’t, they’ll still make decisions, but they’re not going to be very good decisions. You know? So the secon…
S20
Building a Digital Society, from Vision to Implementation — Stacey Hines, joining from Vancouver at 4 AM Kingston time, cited research from Web Summit where AI expert Gary Marcus p…
S21
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Domenico Zipoli: Thank you very much. It’s always fascinating to be in a room with both stakeholders coming from compani…
S22
YouthLead: Inclusive digital future for all — Eylul Ercin: Thank you for this question. That’s really great and really current. I think we’ve been seeing more and mor…
S23
Lightning Talk #245 Advancing Equality and Inclusion in AI — Bjorn Berge: Thank you very much, Sara, and very good afternoon to all of you. Let me first start by congratulating Norw…
S24
UNSC meeting: Artificial intelligence, peace and security — This uneven distribution could reinforce inequalities and asymmetries
S25
From India to the Global South_ Advancing Social Impact with AI — AI is the new electricity. The question is who has the switch? And today that’s what we will be discussing. You know, if…
S26
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned. But today i…
S27
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S28
Prediction Machines in International Organisations: A 3-Pathway Transition — What did the AI advisor do best in this case? It had the capacity to go through thousands of pages of documents, find as…
S29
We are the AI Generation — Martin describes a concrete initiative by the ITU to address the skills gap in AI literacy through a coalition approach….
S30
Upskilling for the AI era: Education’s next revolution — ## Programme Development and Current Impact Doreen Bogdan Martin: Good afternoon, ladies and gentlemen. Yesterday morni…
S31
AI (and) education: Convergences between Chinese and European pedagogical practices — Audience: I think it is much more to see, put yourself, put the mindset as if you are already in 2035 or 2040, how educa…
S32
Education meets AI — Artificial intelligence has the potential to revolutionize education by offering personalized learning experiences to ev…
S33
https://dig.watch/event/india-ai-impact-summit-2026/skilling-and-education-in-ai — Thank you so much. Thank you for inviting me. I’m a former executive member of NCBT. One minute about NCBT. NCBT is a re…
S34
Democratizing AI Building Trustworthy Systems for Everyone — Absolutely. I mean, not one of those five limbs is possible without deep partnership. And that coordination of those fiv…
S35
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 356. From the perspective of system-wide coherence based on common values and similar needs, the review explored the iss…
S36
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — It cannot be a bolt -on on top of what we have built. So it has to be built at every layer. And trust has also evolved w…
S37
The Foundation of AI Democratizing Compute Data Infrastructure — Thank you. So I think two characteristics of digital public infrastructure, which are key, are to ensure that not only t…
S38
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S39
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — ## Scaling Through National Policy Frameworks Gianluca Musraca: Well, let me say, of course, I’m trying to link the dif…
S40
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — Mr. Sher Verick:Great. Well, thank you very much. It’s a real pleasure to be with you here today. I think Janine updated…
S41
Contents — Beyond school and university-level education, a range of opportunities are currently available to workers looking to ite…
S42
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S43
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Ernst Noorman: Thank you very much, Zach, and thank you, Rasmus, for your words. While leaders at this moment gather in …
S44
Foster AI accessibility for building inclusive knowledge Societies: a multi-stakeholder reflection on WSIS+20 review — Fabio Senne:Thank you, Alexandre. Thank you, Mr. Chair. And thank you, Shanhong and IFAP, for the invitation. Yes, I wou…
S45
Multistakeholder Partnerships for Thriving AI Ecosystems — -Infrastructure and capacity building as foundational requirements: Discussion covered the need for sensing infrastructu…
S46
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — India faces physical constraints of land, water, and power that will drive infrastructure setup decisions There is unan…
S47
An ambassador’s personal reflections on his time at the UN (2013-2015) — I believe that our success in adopting the 17 SDGs which form the core of Agenda 2030 on Sustainable Development was due…
S48
[Tentative Translation] — In order to achieve this, it is essential to redesign the economy and society through the three transitions of “decarb…
S49
Foreword — Nevertheless, there are certain core principles and established good practices that have proven effective in acceleratin…
S50
UN: Summit of the Future Global Call — Decisive action must be taken to implement the 2030 Agenda They express concern that international organisations create…
S51
AI agents offer major value but trust and data gaps remain — AI agents coulddrive up to $450 billion in economic value by 2028, according to new research by Capgemini. The gains wou…
S52
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S53
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — Kanai argues that public trust in science and technology is essential for acceptance of new technologies. He emphasizes …
S54
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S55
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — Mr. Sher Verick:Great. Well, thank you very much. It’s a real pleasure to be with you here today. I think Janine updated…
S56
Empowering Workers in the Age of AI — Verick emphasised that the benefits of AI adoption are similarly unequal, with the global north positioned to capture mo…
S57
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. …
S58
AI and the Roadmap for Digital Cooperation — ‘deepen inequality’; or ‘exacerbate existing discrimination’.
S59
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S60
Part 2.5: AI reinforcement learning vs human governance — Similarly,human governance can lead to emergent behaviours and unintended consequences. Policies designed to achieve spe…
S61
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Abeer Alsumait: assistive technologies, but there are challenges like a very minor issue might also be a kind of we ca…
S62
AI: The Great Equaliser? — In addition to community management, agriculture is another sector that is expected to be heavily impacted by AI. AI mod…
S63
A Digital Future for All (afternoon sessions) — AI and digital technologies have the potential to transform lives in rural areas by providing access to information and …
S64
Multi-stakeholder Discussion on issues about Generative AI — The use of Artificial Intelligence (AI) in emerging economies has the potential to bridge the divide between these econo…
S65
AI for Good – food and agriculture — Dongyu Qu: Excellencies, ladies, gentlemen, good morning. A year ago, we all gathered for the Previous AI for Good Summi…
S66
Skilling and Education in AI — Infrastructure development emerged as crucial, with investments in data centers, subsea cables, and compute capacity to …
S67
Driving Indias AI Future Growth Innovation and Impact — “Investment also includes energy infrastructure, because without energy, there is really no compute infrastructure you c…
S68
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S69
Indias Roadmap to an AGI-Enabled Future — -Compute Infrastructure and GPU Requirements: Analysis of India’s current and projected compute needs, with estimates su…
S70
AI to transform India’s $400 billion IT ambition by 2030 — India’s IT sector could reach$400 billion by 2030, Prime Minister Narendra Modi said in an interview with ANI, highlight…
S71
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — And the thought is we’re also moving from a world of finite. If I look at content today, in whichever platform it is, ri…
S72
India’s AI roadmap could add $500 billion to economy by 2035 — According to the Business Software Alliance, Indiacould addover $500 billion to its economy by 2035 through the widespre…
S73
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Importance of hearing various perspectives during policy formulation Lack of infrastructure, skills, compute access, an…
S74
Multistakeholder Partnerships for Thriving AI Ecosystems — And India has 3 .9 million of them, the second largest up to the US. And this is a community that has been literally nur…
S75
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S76
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Data residency requirements and lack of cutting-edge model infrastructure in India create deployment barriers Sharma id…
S77
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Both leaders acknowledged significant challenges in enterprise AI adoption, with Krishan noting that only 12% of corpora…
S78
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — Throughout the presentation, Tokita emphasizes the critical importance of establishing trusted AI infrastructure to inte…
S79
The Foundation of AI Democratizing Compute Data Infrastructure — Thank you. So I think two characteristics of digital public infrastructure, which are key, are to ensure that not only t…
S80
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — And we want to invest in infrastructure. The number of startups, as I mentioned earlier, 2025, the largest investment y…
S81
AI as critical infrastructure for continuity in public services — Building confidence and security in the use of ICTs | Artificial intelligence | Data governance Resilience, data contro…
S82
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — First, trust. It’s trust. Trustability. Trustability because we need to trace the systems, the models, the data that we …
S83
Contents — Beyond school and university-level education, a range of opportunities are currently available to workers looking to ite…
S84
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 356. From the perspective of system-wide coherence based on common values and similar needs, the review explored the iss…
S85
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — Gianluca Musraca: Well, let me say, of course, I’m trying to link the different questions and comments. I fully agree wi…
S86
FOREWORD — To help these people to transition or reskill, the education sector needs to embrace non-traditional forms of study. Thi…
S87
Agenda item 5 : Day 4 Afternoon session — Albania is proactively enhancing its cybersecurity capabilities through a comprehensive and strategically phased plan, a…
S88
Lightning Talk #245 Advancing Equality and Inclusion in AI — Bjorn Berge: Thank you very much, Sara, and very good afternoon to all of you. Let me first start by congratulating Norw…
S89
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Ernst Noorman: Thank you very much, Zach, and thank you, Rasmus, for your words. While leaders at this moment gather in …
S90
Foster AI accessibility for building inclusive knowledge Societies: a multi-stakeholder reflection on WSIS+20 review — Fabio Senne:Thank you, Alexandre. Thank you, Mr. Chair. And thank you, Shanhong and IFAP, for the invitation. Yes, I wou…
S91
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Inequality and limited inclusivity in the implementation of accessibility and inclusivity practices are identified as pe…
S92
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S93
Multistakeholder Partnerships for Thriving AI Ecosystems — -Infrastructure and capacity building as foundational requirements: Discussion covered the need for sensing infrastructu…
S94
Skilling and Education in AI — Infrastructure development emerged as crucial, with investments in data centers, subsea cables, and compute capacity to …
S95
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — Infrastructure and Capacity Building The consultant argues that infrastructure must be established as a base for AI dem…
S96
Responsible AI for Shared Prosperity — Infrastructure and compute as critical enablers
S97
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — -Infrastructure Constraints and Resource Management: Significant focus on three critical bottlenecks – power consumption…
S98
An ambassador’s personal reflections on his time at the UN (2013-2015) — I believe that our success in adopting the 17 SDGs which form the core of Agenda 2030 on Sustainable Development was due…
S99
UN: Summit of the Future Global Call — Decisive action must be taken to implement the 2030 Agenda They express concern that international organisations create…
S100
AI to transform India’s $400 billion IT ambition by 2030 — India’s IT sector could reach$400 billion by 2030, Prime Minister Narendra Modi said in an interview with ANI, highlight…
S101
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — India has unique advantages to lead the next storytelling civilization by 2030, including demographic energy, linguistic…
S102
[Tentative Translation] — In order to achieve this, it is essential to redesign the economy and society through the three transitions of “decarb…
S104
Open Forum #3 Cyberdefense and AI in Developing Economies — Christopher Painter: Happy to join you, even though it’s 3.30 in the morning here, but it’s very nice to be with you all…
S105
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — 8 year old prodigy: Sharing is learning with the rest of the world. One, an AI that is independent. From large global A…
S106
Keynote-Mukesh Dhirubhai Ambani — Moderator’s opening remarks
S107
AI for agriculture Scaling Intelegence for food and climate resiliance — And that’s truly revolutionarily empowering for farmers. But to make that work for farmers, there’s a lot of things that…
S108
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — 1 ,000 hectares in some big island of Indonesia in order to get the safe efficiency in the next five years. And then we …
S109
AI Meets Agriculture Building Food Security and Climate Resilien — And that creativity will result in a number of different applications that will be aimed, in most cases, to help farmers…
S110
Shoppers can now let AI find and buy deals — Tech giants are pushing deeper into e-commerce withAI-powered digital aidesthat can understand shoppers’ tastes, try on …
S111
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And we are partnering. The Prime Minister had challenged us to partner across agriculture, healthcare, drive language ac…
S112
What is it about AI that we need to regulate? — The discussions across multiple IGF 2025 sessions revealed significant concerns about the implications of developed coun…
S113
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — The panellists provided concrete examples of how these standards enable practical applications. Commerce protocols allow…
S114
Agents of Change AI for Government Services & Climate Resilience — The panellists provided concrete examples of successful implementations. “Bobby,” a police assistance chatbot in New Tha…
S115
WS #31 Cybersecurity in AI: balancing innovation and risks — Melodena Stephens: So this is a tough one, right? Because when I look at ethics, I think ethics are great. The line b…
S116
NIST releases new digital identity and AI guidelines for contractors — US National Institute of Standards and Technology (NIST) hasreleaseda new draft of its Digital Identity Guidelines, intr…
S117
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — 1. Trust, safety, and accountability: His Excellency Dr. Abdullah bin Sharaf Alghamdi emphasised the need to focus on th…
S118
Responsible AI in India Leadership Ethics & Global Impact part1_2 — And last, enterprises. Like many of yours in this room, that are willing and excited to go first that really look at tra…
S119
WS #123 Responsible AI in Security Governance Risks and Innovation — Jingjie He: system, you developed it first as a project and you deploy it. So many times, based on my experience from th…
S120
Responsible AI in India Leadership Ethics & Global Impact — Absolutely. So I guess the thread of most of what AI now entails is about are we not moving fast enough or are we moving…
Speakers Analysis
Detailed breakdown of each speaker's arguments and positions
Speaker 1
4 arguments · 165 words per minute · 1227 words · 444 seconds
Argument 1
AI can dramatically improve agricultural productivity by reducing pest‑related crop losses for smallholder farmers.
EXPLANATION
The speaker notes that smallholder farmers in the Global South lose 40‑50 % of their harvest to pests. By using AI to identify pests and provide locally‑tailored remedies, loss could be cut to 20‑30 %, boosting farmer incomes and overall productivity.
EVIDENCE
The speaker cites that smallholder farmers lose 40-50 % of crops to pests and that AI-driven pest identification with local language advice could reduce loss to 30 % or 20 % [5-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Precision-agriculture tools such as remote sensing, drones and predictive analytics are shown to boost yields and cut pest losses for smallholders, especially in the Global South [S15][S16][S1].
MAJOR DISCUSSION POINT
Agricultural productivity boost
AGREED WITH
Speaker 3
Argument 2
AI enables one‑person businesses by providing market research and analysis functions traditionally performed by multiple employees.
EXPLANATION
By harnessing AI for market intelligence, a solo entrepreneur can act as a virtual employee, reducing the need for a large staff and fostering small‑business growth.
EVIDENCE
The speaker explains that AI can do market research and analysis, allowing a one-person shop to operate effectively [10-11].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven tools are highlighted as enabling single-person enterprises by replacing team-based market research and analysis functions [S17][S18][S1].
MAJOR DISCUSSION POINT
AI for small business empowerment
AGREED WITH
Speaker 2
Argument 3
The primary barrier to AI adoption is a trust gap, not lack of infrastructure, requiring a dedicated trust infrastructure.
EXPLANATION
Even with widespread digital infrastructure, people hesitate to use AI because they do not understand the black‑box nature of algorithms, data usage, and potential misuse, so building trust mechanisms is essential.
EVIDENCE
The speaker describes the trust gap, citing concerns about black-box opacity, data handling, and the need for a trust infrastructure beyond digital infrastructure [23-34] and data-related worries [35-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Several reports identify a trust deficit as the main obstacle to AI uptake and call for trust-building mechanisms and trustworthy systems [S19][S20][S21][S34].
MAJOR DISCUSSION POINT
Need for trust infrastructure
AGREED WITH
Neena Pahuja, Moderator
Argument 4
AI will exacerbate existing inequalities because algorithms reflect biased historical data and uneven access to resources.
EXPLANATION
Since AI models are trained on past data, they risk reinforcing past inequities, while disparities in tool access, geographic location, and resource consumption (energy, water) further widen the gap.
EVIDENCE
The speaker outlines how AI mirrors past biases, creates geographic and resource-based inequalities, and concentrates power in a few countries and companies [43-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses warn that AI can reinforce existing inequities, concentrate power in a few actors, and widen digital divides, underscoring the need for inclusive policies [S23][S24][S25][S27].
MAJOR DISCUSSION POINT
AI‑driven inequality risk
Speaker 2
4 arguments, 185 words per minute, 923 words, 299 seconds
Argument 1
AI is an opportunity and enabler for large‑scale skilling and workforce development in India.
EXPLANATION
The speaker frames AI as a catalyst for four priority areas: career trajectory guidance, scaling AI programmes, transforming the training value chain, and outcome monitoring, positioning AI as central to NSDC’s mission.
EVIDENCE
He outlines the four primary AI-focused work streams (career guidance, scaling programmes, training/assessment transformation, and outcome monitoring) within NSDC’s mandate [76-89].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
National skilling initiatives, coalition approaches and large-scale upskilling programmes are presented as key to building an AI-ready workforce in India [S1][S29][S30][S31].
MAJOR DISCUSSION POINT
AI as a skilling catalyst
AGREED WITH
Neena Pahuja, Rakesh Kaul
Argument 2
AI‑enabled career counselling tools are needed to help students navigate AI‑driven job transformations.
EXPLANATION
By providing AI‑powered guidance on how jobs will evolve, students can receive personalized counselling, aligning their skills with emerging opportunities.
EVIDENCE
The speaker mentions developing AI-enabled career counselling tools and models to inform students about job changes due to AI [91-94].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for AI-powered career guidance to help students understand AI-driven job changes is highlighted in skilling discussions and prediction-machine case studies [S1][S28].
MAJOR DISCUSSION POINT
AI‑driven career guidance
Argument 3
Comprehensive AI skilling programmes—including awareness for all, sector‑specific modules, and engineer‑focused tracks—are being rolled out with industry partners.
EXPLANATION
NSDC is delivering AI awareness, integrating AI modules into existing curricula, and partnering with firms like Microsoft, Google, and Amazon to create nano‑credential pathways for thousands of learners.
EVIDENCE
He describes AI-for-all awareness, sector-specific curriculum integration, AI for engineers, and collaborations with major tech companies, reaching tens of thousands of students [95-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-for-all awareness, sector-specific curricula and industry partnerships (e.g., with major tech firms) are described as core components of nationwide AI skilling frameworks [S1][S29][S30].
MAJOR DISCUSSION POINT
Nationwide AI skilling ecosystem
Argument 4
AI can improve training, assessment, and large‑scale outcome monitoring, making vocational education more efficient and scalable.
EXPLANATION
AI assistants can support trainers, automate hands-on assessment (e.g., welding quality), and enhance the Skill India Digital (SID) platform to monitor program outcomes across India’s vast vocational system.
EVIDENCE
The speaker cites pilots using AI as a training assistant, AI-augmented hands-on assessment, and integration of AI into the SID outcomes-monitoring platform [113-124].
MAJOR DISCUSSION POINT
AI‑enhanced vocational education
Neena Pahuja
4 arguments, 173 words per minute, 930 words, 322 seconds
Argument 1
A three‑layer AI skilling framework (for all, for many, for few) can democratize AI use across diverse occupations.
EXPLANATION
The framework aims to bring AI tools to everyone—from radiowalas to beauticians—by tailoring curricula to different learner groups and ensuring widespread accessibility.
EVIDENCE
She explains the three-layer framework and its goal of delivering AI to all segments of society, including radiowala, plumber, beautician, etc. [129-133].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Three-tiered AI literacy frameworks and inclusive skilling models are discussed in AI education literature and coalition initiatives, supporting democratization across occupations [S1][S29][S34].
MAJOR DISCUSSION POINT
Inclusive AI skilling framework
Argument 2
Stackable nano‑ and micro‑credentials linked to a National Credit Framework enable modular skill accumulation and formal recognition.
EXPLANATION
Learners can earn small certificates that stack into larger credits, facilitating lifelong learning and aligning with national credit standards for vocational education.
EVIDENCE
She describes the creation of stackable nano-credentials, their integration into ITI curricula, and their mapping to the National Credit Framework for credit accumulation [167-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy papers advocate stackable nano- and micro-credentials aligned with national credit systems to provide flexible, lifelong learning pathways [S35][S1].
MAJOR DISCUSSION POINT
Modular credentialing for AI skills
AGREED WITH
Speaker 2, Rakesh Kaul
Argument 3
Practical AI applications in specific trades (beautician, tailor, carpenter) demonstrate tangible benefits and encourage adoption.
EXPLANATION
By showcasing AI‑driven tools such as virtual try‑ons for tailors or AI‑assisted design for carpenters, the initiative illustrates how AI can augment everyday work and improve service quality.
EVIDENCE
She cites examples of a nano-credential for beauticians, a virtual try-on for tailors, and AI-assisted carpentry design, reaching over two lakh registrants [137-142][158-162].
MAJOR DISCUSSION POINT
Trade‑specific AI use cases
Argument 4
Embedding ethics and values into every AI course will shape responsible AI creators in India.
EXPLANATION
Integrating ethical considerations ensures that future AI professionals develop solutions aligned with societal values and human rights.
EVIDENCE
She explicitly states that ethics and values should be part of every AI course taught in India [242-243].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding human-rights, privacy and ethical considerations into AI curricula is recommended to ensure responsible AI development [S27][S23][S34].
MAJOR DISCUSSION POINT
Ethics in AI education
AGREED WITH
Speaker 1, Moderator
Speaker 3
3 arguments, 164 words per minute, 480 words, 175 seconds
Argument 1
India must build secure, resilient AI infrastructure—including domestic data centres and subsea connectivity—to reduce reliance on foreign compute resources.
EXPLANATION
Investments such as the Vizag AI data centre and new subsea cables will provide the computational power and global connectivity needed for indigenous AI development.
EVIDENCE
He describes the AI data centre in Vizag and subsea cable projects linking India directly to the U.S., emphasizing self-sufficiency in compute and connectivity [215-221].
MAJOR DISCUSSION POINT
Domestic AI infrastructure
Argument 2
AI should be applied across sectors—education, agriculture, health—to deliver last‑mile value and close the loop between learning and the workforce.
EXPLANATION
By integrating AI into learning platforms, administrative systems, professional certification, and agricultural advisory services, AI can create end‑to‑end solutions that benefit citizens directly.
EVIDENCE
He outlines AI use in education (learning and administration), professional certification loops, and agricultural information services for farmers (weather, market, finance) [222-231].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-sectoral AI deployments in agriculture, education and health are cited as ways to provide end-to-end services and link learning outcomes to workforce needs [S15][S1][S32].
MAJOR DISCUSSION POINT
Sector‑wide AI deployment
Argument 3
Creating economic models and investment mechanisms for AI diffusion, especially on the compute side, is essential for sustainable growth.
EXPLANATION
A “flywheel” approach that channels investments into compute resources and develops business models for AI diffusion will accelerate adoption and economic impact.
EVIDENCE
In the rapid-fire segment he returns to the idea of a flywheel, emphasizing investments in compute and economic models for AI diffusion [246-248].
MAJOR DISCUSSION POINT
Investment models for AI diffusion
Rakesh Kaul
4 arguments, 182 words per minute, 878 words, 288 seconds
Argument 1
India’s extensive, low‑cost connectivity and digital adoption (e.g., UPI) provide a strong foundation for AI‑driven services.
EXPLANATION
Widespread internet penetration and affordable access create a unique advantage for deploying AI applications at scale across the country.
EVIDENCE
He highlights ubiquitous, low-cost connectivity, high internet penetration, and mass-adopted applications like UPI as India’s starting point [190-194].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s widespread, affordable connectivity and mass-adopted digital platforms like UPI are highlighted as enablers for scaling AI services [S25][S1].
MAJOR DISCUSSION POINT
Digital infrastructure as AI enabler
Argument 2
Transition from digital literacy to ‘work literacy’ by delivering frictionless, bite‑sized AI learning content.
EXPLANATION
Creating one‑ to two‑minute consumable modules reduces learning friction, aligns with user habits, and makes AI skills more accessible to the masses.
EVIDENCE
He discusses the need for short, anytime-anywhere content, noting that people prefer 1-2 minute formats and that content strategy must minimize friction [199-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Micro-learning formats and bite-sized AI modules are promoted in upskilling programmes and future-oriented education models [S30][S31].
MAJOR DISCUSSION POINT
Work‑focused micro‑learning
AGREED WITH
Speaker 2, Neena Pahuja
Argument 3
Preparing the workforce for physical AI agents and autonomous factories requires mindset and role shifts.
EXPLANATION
As factories become increasingly automated with AI‑driven robots, workers must understand and collaborate with these agents, necessitating new training and cultural adaptation.
EVIDENCE
He describes challenges of physical AI agents, lights-out factories, and the need for workers to understand black-box AI to innovate and cooperate [209-210].
MAJOR DISCUSSION POINT
Human‑AI collaboration in workplaces
Argument 4
Affordable compute is a decisive factor for India’s AI success.
EXPLANATION
Ensuring that compute resources are cost‑effective and widely available will enable broader AI adoption across sectors and regions.
EVIDENCE
In his rapid-fire response he states that access to affordable compute will be crucial for India’s AI future [245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Access to affordable compute resources is identified as critical for equitable AI diffusion and national AI strategy implementation [S25][S34].
MAJOR DISCUSSION POINT
Need for affordable compute
Moderator
2 arguments, 154 words per minute, 399 words, 154 seconds
Argument 1
AI raises concerns about increasing inequality that must be addressed in policy and practice.
EXPLANATION
The moderator highlights that while AI offers excitement, it also brings the risk of widening divides, prompting the panel to discuss mitigation strategies.
EVIDENCE
He remarks on the excitement around AI and simultaneously raises concerns about inequality [69-70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple talks emphasize that AI can widen inequality and call for policy measures to mitigate these risks [S23][S24][S25].
MAJOR DISCUSSION POINT
Inequality risk of AI
AGREED WITH
Speaker 1
Argument 2
Key decisive actions for India should focus on trust infrastructure, ethics, affordable compute, and keeping humans at the core of AI systems.
EXPLANATION
Summarising the rapid‑fire round, the moderator emphasizes that building trust, embedding ethics, ensuring compute access, and centring human values are essential for AI to act as an equaliser.
EVIDENCE
He lists “Trust, ethics, compute, human at the core of everything” as the four pillars for India’s AI strategy [249-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust infrastructure, ethical guidelines, affordable compute and human-centric AI are repeatedly identified as strategic pillars for equitable AI deployment [S19][S20][S21][S27][S34].
MAJOR DISCUSSION POINT
Strategic pillars for equitable AI
Agreements
Agreement Points
AI can dramatically improve agricultural productivity and farmer incomes
Speakers: Speaker 1, Speaker 3
AI can dramatically improve agricultural productivity by reducing pest‑related crop losses for smallholder farmers. AI should be applied across sectors—including agriculture—to deliver last‑mile value such as weather, market and financial information to farmers.
Both speakers stress that AI-driven tools (pest identification, advisory services) can cut crop losses and raise incomes for smallholder farmers, making agriculture a key early win for AI in India [5-6][230-231].
POLICY CONTEXT (KNOWLEDGE BASE)
UN-aligned initiatives highlight AI’s capacity to boost yields and farmer earnings, citing AI-driven early-warning and precision farming as transformative for agriculture and a tool to mitigate climate impacts [S62]; the AI for Good summit stresses AI can bridge gaps in food systems while noting uneven rollout [S65]; and multistakeholder discussions underline AI’s role in rural development and income generation [S64].
AI enables one‑person businesses and universal AI assistants for individuals
Speakers: Speaker 1, Speaker 2
AI enables one‑person businesses by providing market research and analysis functions traditionally performed by multiple employees. Every Indian should have access to an AI assistant whether it’s a farmer, a student or anything.
Both see AI as a productivity multiplier that lets solo entrepreneurs run businesses, and they propose a universal AI assistant for all citizens, lowering the need for large staff and expanding digital inclusion [10-11][239-241].
POLICY CONTEXT (KNOWLEDGE BASE)
The AI Impact report underscores the economic value of AI agents and notes trust and data gaps as key adoption challenges, reflecting the promise of personal AI assistants for solo entrepreneurs [S51]; a panel on AI skilling also proposes providing every Indian with a universal AI assistant as a core service [S66].
A dedicated trust infrastructure and ethics are essential for AI adoption
Speakers: Speaker 1, Neena Pahuja, Moderator
The primary barrier to AI adoption is a trust gap, not lack of infrastructure, requiring a dedicated trust infrastructure. Embedding ethics and values into every AI course will shape responsible AI creators in India. Trust, ethics, compute, human at the core of everything.
All three emphasize that people will only use AI if they trust it, which requires transparent systems, ethical curricula, and broader trust-building mechanisms [23-34][242-243][249-250].
POLICY CONTEXT (KNOWLEDGE BASE)
UN Security Council deliberations call for transparent, explainable AI to maintain public trust, framing ethics as a prerequisite for deployment [S52]; WSIS Action Line C10 explicitly places ethics and public trust at the centre of AI policy [S53]; WHO advocates ‘glass-box’ AI with full traceability, reinforcing the need for trust infrastructure [S54]; and AI policy roadmaps stress building trust mechanisms as a priority for India [S66].
Affordable, domestic compute and infrastructure are decisive for India’s AI future
Speakers: Speaker 3, Rakesh Kaul
India must build secure, resilient infrastructure—including domestic data centres and subsea connectivity—to reduce reliance on foreign compute resources. Access to affordable compute will be important for India to succeed.
Both highlight the need for home-grown, cost-effective compute capacity (data centre, subsea cable, affordable hardware) as a cornerstone of AI strategy [217-221][245].
POLICY CONTEXT (KNOWLEDGE BASE)
Indian policy briefs identify compute capacity as the bottleneck: sovereign AI discussions cite domestic GPU and data-center capacity as critical for self-reliance [S68]; roadmap analyses estimate the need for 128,000 GPUs by 2030 [S69]; investment recommendations stress energy and compute infrastructure as foundational for AI growth [S67]; broader assessments also list lack of affordable compute as a primary barrier [S73, S76].
Large‑scale, inclusive AI skilling and micro‑credentialing are vital
Speakers: Speaker 2, Neena Pahuja, Rakesh Kaul
AI is an opportunity and enabler for large‑scale skilling and workforce development in India. Stackable nano‑ and micro‑credentials linked to a National Credit Framework enable modular skill accumulation and formal recognition. Transition from digital literacy to ‘work literacy’ by delivering frictionless, bite‑sized AI learning content.
All three stress a coordinated effort to upskill the population through AI-aware curricula, stackable credentials, and short, consumable learning modules to meet rapidly shifting job demands [95-110][167-176][199-208].
POLICY CONTEXT (KNOWLEDGE BASE)
The AI skilling panel recommends nationwide micro-credential programs and inclusive training to create a skilled AI workforce, positioning it as a pillar of the 2030 agenda [S66]; reports on inclusive AI note that skill gaps and limited access to education hinder policy effectiveness, underscoring the need for large-scale credentialing [S73]; and AI policy roadmaps call for evidence-based education frameworks [S59].
AI carries a risk of deepening existing inequalities
Speakers: Speaker 1, Moderator
AI is going to be a force for inequality because algorithms feed on historical data that reflect past inequities. AI raises concerns about increasing inequality that must be addressed in policy and practice.
Both acknowledge that without careful policy, AI could reinforce past biases and widen socioeconomic gaps [43-47][69-70].
POLICY CONTEXT (KNOWLEDGE BASE)
ILO and World Bank analyses warn that AI benefits are uneven, with the Global North capturing most gains while the Global South faces heightened risks, highlighting inequality concerns [S56]; the Digital Cooperation roadmap flags AI as a possible driver of discrimination and inequality [S58]; and AI for Good discussions stress that unequal digital transformation could exacerbate farmer income gaps [S65].
Similar Viewpoints
Both argue that AI should be embedded in career guidance and learning pathways, using short, actionable modules to prepare workers for AI‑shaped jobs [91-94][199-208].
Speakers: Speaker 2, Rakesh Kaul
AI‑enabled career counselling tools are needed to help students navigate AI‑driven job transformations. Transition from digital literacy to ‘work literacy’ by delivering frictionless, bite‑sized AI learning content.
Both promote a tiered, credential‑based approach that makes AI education accessible to a broad audience while ensuring formal recognition of skills [95-110][167-176].
Speakers: Speaker 2, Neena Pahuja
Comprehensive AI skilling programmes—including awareness for all, sector‑specific modules, and engineer‑focused tracks—are being rolled out with industry partners. Stackable nano‑ and micro‑credentials linked to a National Credit Framework enable modular skill accumulation and formal recognition.
All three converge on the necessity of building trust and embedding ethics as foundational pillars for AI deployment [23-34][242-243][249-250].
Speakers: Speaker 1, Neena Pahuja, Moderator
The primary barrier to AI adoption is a trust gap, not lack of infrastructure, requiring a dedicated trust infrastructure. Embedding ethics and values into every AI course will shape responsible AI creators in India. Trust, ethics, compute, human at the core of everything.
Unexpected Consensus
Both infrastructure‑focused and ethics‑focused speakers stress a human‑centred, trust‑first approach
Speakers: Speaker 3, Neena Pahuja
India must build secure, resilient AI infrastructure—including domestic data centres and subsea connectivity—to reduce reliance on foreign compute resources. Embedding ethics and values into every AI course will shape responsible AI creators in India.
Despite coming from different domains (infrastructure vs. curriculum design), both agree that technology deployment must be anchored in human-centric principles of trust, ethics, and responsible use, highlighting a cross-cutting consensus that is not obvious from their primary mandates [217-221][242-243].
POLICY CONTEXT (KNOWLEDGE BASE)
Multilateral statements from the UN Security Council and WSIS converge on a human-centred, transparent AI model as essential for trust [S52, S53]; WHO’s ‘glass-box’ recommendation reinforces a trust-first design ethos [S54]; Indian summit summaries explicitly note that both compute infrastructure and ethical governance must be pursued together to build public confidence [S66]; and AI policy dialogues repeatedly call for human governance alongside technical rollout [S75].
Overall Assessment

There is strong, cross‑sectoral consensus that AI can be a catalyst for agricultural productivity, small‑business empowerment, and large‑scale skilling, provided that trust, ethics, affordable compute, and inclusive credentialing are put in place. All speakers also share concern that AI could exacerbate existing inequalities if these safeguards are ignored.

High consensus across government, industry, and academia on the need for trust infrastructure, capacity development, and domestic compute, indicating that coordinated policy and investment actions are likely to find broad support and can accelerate equitable AI adoption in India.

Differences
Different Viewpoints
What should be the single decisive action for India’s AI strategy by 2030
Speakers: Speaker 1, Speaker 2, Neena Pahuja, Rakesh Kaul, Speaker 3
Improve the trust infrastructure and make sure that the human at the other side of the AI understand what’s inside the black box, at least to the extent that it makes them feel comfortable accepting the output and the decision that the black box is offering. [238] Every Indian should have access to an AI assistant whether it’s a farmer, a student or anything. [239-241] Ethics and value should be part of every AI course taught in India. [242-243] Access to affordable compute will be important for India to succeed. [245] Invest in compute side – create economic models and investment mechanisms for AI diffusion (flywheel). [246-248]
The panel could not agree on a single priority: Speaker 1 stresses building a trust infrastructure, Speaker 2 pushes for universal AI assistants, Neena calls for embedding ethics in curricula, Rakesh highlights affordable compute, and Speaker 3 argues for investment models and compute-focused flywheel. [238][239-241][242-243][245][246-248]
What is the primary barrier to AI adoption in India
Speakers: Speaker 1, Speaker 2, Speaker 3, Rakesh Kaul
The key chasm is a trust gap – people hesitate because they don’t understand the black-box, data use, etc. [23-34][35-41] AI is an opportunity but needs four work streams – career guidance, scaling programmes, training/value-chain transformation, outcome monitoring. [76-89] India must build secure, resilient infrastructure – domestic data centres and subsea connectivity – to reduce reliance on foreign compute. [215-221] We need to move from digital literacy to ‘work literacy’ with frictionless, bite-sized content; connectivity is already strong. [190-208]
Speaker 1 argues the trust gap is the main obstacle, while Speaker 2 points to the need for coordinated skilling and programme design, Speaker 3 emphasizes compute-infrastructure deficits, and Rakesh stresses the need for work-focused micro-learning despite existing connectivity. [23-34][76-89][215-221][190-208]
POLICY CONTEXT (KNOWLEDGE BASE)
Recent analyses of the Global South identify compute scarcity, data-residency constraints, and limited research talent as the chief obstacles to AI deployment in India, with compute infrastructure highlighted as the most pressing barrier [S76]; complementary reports also cite inadequate domestic infrastructure and skill shortages as dominant challenges [S73].
Preferred model for AI education and credentialing
Speakers: Neena Pahuja, Speaker 2, Rakesh Kaul
Stackable nano- and micro-credentials linked to a National Credit Framework enable modular skill accumulation. [167-176] AI-enabled career counselling tools, sector-specific AI modules, and large-scale AI skilling programmes with industry partners. [91-110] Deliver frictionless, bite-sized (1-2 minute) AI learning content to shift from digital to ‘work’ literacy. [199-208]
Neena advocates a tiered, stackable credential system, Speaker 2 focuses on AI-driven career counselling and sector-specific programmes, while Rakesh pushes for ultra-short micro-learning modules, reflecting divergent views on how AI skills should be delivered. [167-176][91-110][199-208]
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions at the AI skilling summit advocate a modular micro-credentialing model that integrates industry-validated badges and lifelong learning pathways, positioning it as the preferred framework for India’s AI education ecosystem [S66]; inclusive AI studies further recommend credentialing schemes that are accessible to under-represented groups [S73].
Impact of AI on inequality – risk vs equaliser
Speakers: Speaker 1, Speaker 2, Neena Pahuja, Rakesh Kaul, Speaker 3
AI will be a force for inequality; algorithms mirror past biases and resource concentration will reinforce gaps. [43-57] AI is an opportunity and enabler for large-scale skilling, workforce development and inclusive growth. [76-89] AI can be democratized through inclusive frameworks and ethics-infused curricula. [129-133][242-243] India’s low-cost connectivity and digital adoption provide a strong foundation for AI-driven services that can be inclusive. [190-194] Deploy AI across sectors (education, agriculture, health) to deliver last-mile value and close loops between learning and work. [222-231]
Speaker 1 warns that AI will exacerbate existing inequities, whereas the other panelists view AI as a potential equaliser through skilling, inclusive frameworks, connectivity, and sector-wide deployment, showing a divergence in risk perception versus optimism. [43-57][76-89][129-133][190-194][222-231]
POLICY CONTEXT (KNOWLEDGE BASE)
Debates in UN-aligned forums present AI as both a potential equaliser-through precision agriculture and rural service delivery [S62]-and a risk factor that could deepen discrimination if not governed responsibly [S58]; these dual narratives shape policy deliberations on mitigating inequality impacts [S64].
Unexpected Differences
Inclusion of ethics as a core component of AI education
Speakers: Neena Pahuja, Speaker 1, Speaker 2, Rakesh Kaul, Speaker 3
Ethics and value should be part of every AI course taught in India. [242-243] Speaker 1 focuses on trust infrastructure but does not mention ethics. [23-34] Speaker 2 discusses AI as an opportunity without referencing ethics. [76-89] Rakesh stresses work-literacy and compute, not ethics. [199-208] Speaker 3 talks about infrastructure and sector deployment, not ethics. [215-221]
Neena’s explicit call for embedding ethics in all AI curricula was not echoed by any other panelist, revealing an unexpected gap in the discussion on responsible AI education. [242-243][23-34][76-89][199-208][215-221]
POLICY CONTEXT (KNOWLEDGE BASE)
WSIS Action Line C10 and UN Security Council resolutions explicitly call for ethics to be embedded in AI curricula, framing it as essential for responsible development and public trust [S53, S52]; WHO’s verification standards also stress ethical training for AI practitioners [S54]; and AI policy roadmaps recommend ethics modules as a mandatory element of national AI education strategies [S75].
Overall Assessment

The panel shared a common optimism about AI’s transformative potential but diverged sharply on the priority actions: trust infrastructure, universal AI assistants, ethics‑centric curricula, affordable compute, and investment models. Disagreements also surfaced around the main barrier to adoption (trust vs skills vs infrastructure) and the preferred educational model (stackable credentials vs career‑counselling tools vs micro‑learning).

Moderate to high – while there is consensus on AI’s importance, the lack of alignment on strategic focus points could lead to fragmented policies and slower progress unless a coordinated roadmap reconciles these perspectives.

Partial Agreements
All speakers concur that AI holds transformative potential for India’s economy and society, though they differ on which domain or infrastructure should be prioritised. [4-11][76-89][215-221][190-194]
Speakers: Speaker 1, Speaker 2, Speaker 3, Rakesh Kaul
AI can dramatically improve productivity in agriculture, empower one-person businesses and act as a force for development. [4-11] AI is an opportunity and enabler for skilling, workforce transformation and outcome monitoring. [76-89] India must build secure, resilient AI infrastructure (data centres, subsea cables) to support domestic AI. [215-221] India’s extensive, low-cost connectivity and digital adoption (e.g., UPI) provide a strong foundation for AI services. [190-194]
All three agree on the necessity of AI‑focused capacity development, but propose different delivery mechanisms (large‑scale programmes, tiered credentialing, bite‑sized content). [76-89][129-133][199-208]
Speakers: Speaker 2, Neena Pahuja, Rakesh Kaul
AI-driven skilling programmes are essential for preparing the workforce. [76-89][129-133][199-208] A three-layer framework and stackable credentials can democratise AI skills. [129-133][167-176] Micro-learning and work-literacy reduce friction in skill acquisition. [199-208]
Takeaways
Key takeaways
AI can dramatically improve productivity in agriculture, small businesses, education, skill‑building and health, especially for smallholder farmers and informal sector workers. A major barrier to AI adoption is a trust gap; users need transparent, ethical, and trustworthy AI systems and clear understanding of data use. AI risks reinforcing existing inequities because models are trained on historical data and because access to compute, data and expertise is unevenly distributed. Skilling and certification are essential; NSDC is pursuing career guidance, scaling programs, AI‑enabled training/assessment, and outcome monitoring, while NCBT proposes a three‑layer framework with stackable nano‑credentials linked to the National Credit Framework. Infrastructure development—domestic compute capacity, secure data centres, subsea connectivity, and affordable compute—is critical for India’s AI ecosystem. The vision of universal AI assistants (in local languages and using locally available resources) for every Indian citizen within a few years was emphasized as a unifying goal.
Resolutions and action items
Develop a national “trust infrastructure” that includes transparency mechanisms, data‑usage safeguards, and user education on AI black‑box behavior. Scale AI‑enabled career counselling tools and AI‑driven skilling programs through NSDC’s four‑pronged initiative. Implement stackable nano‑credentials and integrate AI modules into existing vocational and higher‑education curricula, leveraging the National Credit Framework. Accelerate the build‑out of domestic AI compute resources, exemplified by the Vizag AI data centre and associated subsea cable links. Launch a nationwide AI assistant platform to provide personalized AI support to farmers, students, and informal workers within the next three years. Embed ethics and values into all AI education and certification programs to foster responsible AI creators.
Unresolved issues
Specific design and governance of the proposed trust infrastructure—who will own, audit, and enforce it—remains undefined.

How to ensure equitable geographic and socioeconomic access to AI tools, especially in remote or underserved regions, was discussed but no concrete rollout plan was presented.

The concentration of foundational AI models in the US and China raises concerns about dependency; strategies for developing indigenous models were not fully detailed.

Mechanisms for protecting user‑generated data that powers AI (ownership, consent, potential misuse) need further clarification.

Long‑term funding and sustainability models for affordable compute and AI assistant deployment were not resolved.
Suggested compromises
None identified
Thought Provoking Comments
The key chasm we need to cross is a trust gap – we need to build a trust infrastructure so people will use AI only if they understand and feel comfortable with the black box.
Frames the adoption challenge not as a technical or infrastructure issue but as a human‑centred trust issue, shifting the conversation from capability to legitimacy.
Set the thematic foundation for the whole panel; later speakers referenced trust (e.g., rapid‑fire answer about improving trust infrastructure) and it guided the discussion toward policy, ethics, and user acceptance.
Speaker: Speaker 1 (Professor)
AI is going to be a force for inequality because algorithms feed on data that reflects past inequities, acting as a mirror of the past.
Highlights the systemic risk of bias amplification, moving the debate from pure opportunity to potential societal harm.
Prompted other panelists to discuss democratization of AI, certification standards, and the need for ethical safeguards, deepening the analysis of AI’s societal impact.
Speaker: Speaker 1 (Professor)
India enjoys a ‘trust dividend’ – trust levels in digital services are around 70 % versus 25‑30 % in the United States.
Introduces a comparative advantage that India can leverage, turning a cultural trait into a strategic asset for AI rollout.
Reinforced the earlier trust‑infrastructure point and encouraged participants to think about how to capitalize on this high trust to accelerate AI adoption.
Speaker: Speaker 1 (Professor)
We have created stackable micro‑/nano‑credentials (e.g., a virtual try‑on for a tailor, AI‑assisted plumbing diagnostics) to bring AI to every nook and corner, even to beauticians and plumbers.
Offers a concrete, inclusive model for upskilling that moves AI from elite labs to everyday workers, expanding the notion of ‘AI for all.’
Shifted the conversation toward practical, grassroots implementation and influenced later discussion on certification frameworks and rapid‑skill acquisition.
Speaker: Neena Pahuja
We should move from digital literacy to work literacy – delivering bite‑size, anytime‑anywhere content to remove friction in learning.
Challenges traditional training models and proposes a learner‑centric approach that aligns with how modern Indians consume information.
Redirected the panel’s focus to pedagogical design, prompting speakers to consider how AI‑driven micro‑learning can be integrated into skilling programs.
Speaker: Rakesh Kaul
Physical AI and lights‑out factories will change the nature of work; workers need a mindset shift to collaborate with robots and agents.
Adds a future‑of‑work dimension that goes beyond software tools, emphasizing the social and psychological adjustments required for AI‑augmented workplaces.
Introduced a new layer of discussion about workforce transition, influencing the panel’s emphasis on trust, ethics, and the human‑centered design of AI systems.
Speaker: Rakesh Kaul
We are building a full‑stack AI ecosystem in India – from a secure data centre in Vizag and subsea cables for compute, to end‑to‑end applications in agriculture, health, and education that close the loop between learning and the workforce.
Provides a systemic, infrastructure‑first vision that ties together compute, connectivity, and application layers, showing how India can become self‑reliant in AI.
Connected earlier points about trust, compute, and equitable access, and set the stage for the rapid‑fire round where compute was highlighted as a decisive factor.
Speaker: Speaker 3 (Industry representative)
We are developing AI‑enabled career‑counselling tools to guide students on how their jobs will evolve and what new roles will emerge.
Translates the abstract idea of AI‑driven skilling into a tangible service that directly supports the demographic dividend.
Illustrated a practical implementation of AI in the education pipeline, reinforcing the panel’s theme of turning AI potential into real‑world outcomes.
Speaker: Speaker 2 (Arunji, NSDC)
Overall Assessment

The discussion was shaped by a handful of pivotal insights that moved it from a high‑level optimism about AI’s potential to a nuanced roadmap for inclusive, trustworthy, and infrastructure‑backed deployment in India. The professor’s framing of a trust gap and the inequality risk set the agenda, while Neena’s micro‑credential model and Rakesh’s calls for frictionless, work‑focused learning offered concrete pathways to address those risks. The industry’s full‑stack infrastructure vision and the skilling agency’s AI‑enabled career counselling grounded the conversation in actionable steps. Together, these comments redirected the panel from abstract possibilities to specific, human‑centred strategies, culminating in rapid‑fire commitments around trust infrastructure, universal AI assistants, ethics education, affordable compute, and a holistic AI flywheel.

Follow-up Questions
How can a robust trust infrastructure be built so that users understand and feel comfortable with AI black‑box decisions?
Trust is essential for adoption; without it users may reject AI solutions, limiting impact in sectors like agriculture, health, and small business.
Speaker: Speaker 1
What policies and mechanisms are needed to protect the data that users submit to AI systems and prevent misuse?
Data privacy and ownership concerns affect willingness to engage with AI; clear safeguards are required to avoid exploitation and build confidence.
Speaker: Speaker 1
How can AI‑driven tools be designed to reduce crop loss for smallholder farmers in the Global South, especially through language‑localized, low‑cost remedies?
A 10‑20% reduction in loss could dramatically improve farmer incomes; research is needed on pest identification, local remedy libraries, and delivery mechanisms.
Speaker: Speaker 1
What standards and certification frameworks should be established to define a qualified AI professional in a rapidly changing skill landscape?
Rapidly evolving AI roles risk outdated credentials; a clear, stackable micro‑credential system would ensure workforce relevance and employer confidence.
Speaker: Neena Pahuja
How can AI‑enabled career counselling tools be created to guide students and workers through AI‑induced job transformations?
Effective guidance is needed to help individuals navigate new career paths and avoid skill mismatches as AI reshapes occupations.
Speaker: Speaker 2
What are the best practices for using AI to assess hands‑on vocational training (e.g., welding quality) and augment limited human assessors?
Scaling vocational assessment is a bottleneck; AI can improve consistency and reach, but requires validation and research on accuracy.
Speaker: Speaker 2
How can large‑scale outcome monitoring for skill programs be automated with AI to ensure quality and impact at national scale?
India’s massive training ecosystem needs reliable, real‑time metrics; AI‑driven monitoring could provide actionable insights but needs methodological research.
Speaker: Speaker 2
What strategies can reduce friction to learning (e.g., bite‑size, multi‑modal content) and make AI‑driven upskilling consumable for the Indian population?
Low attention spans and diverse media consumption habits demand new pedagogical designs; research is needed on optimal content length, format, and delivery channels.
Speaker: Rakesh Kaul
How should the workforce be prepared for physical AI agents and highly automated environments (e.g., lights‑out factories)?
The shift to robot‑centric workplaces will require mindset changes, new role definitions, and safety protocols; understanding these transitions is critical to avoid displacement.
Speaker: Rakesh Kaul
What economic models and financing mechanisms can accelerate the diffusion of affordable compute resources across India?
Access to low‑cost, high‑performance compute is a prerequisite for AI adoption; sustainable financing models are needed to prevent a digital divide.
Speaker: Speaker 3
How can end‑to‑end AI solutions be built to connect seed‑to‑market information for farmers, including weather, market prices, and financial support?
Integrating data across the agricultural value chain can boost productivity, but requires interoperable platforms and research on data integration and user experience.
Speaker: Speaker 3
What governance frameworks are required to ensure that AI development and deployment are environmentally sustainable (energy, water, land use)?
AI’s resource consumption could exacerbate environmental stress; policies and research on green AI are needed to align growth with sustainability goals.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Invest India Fireside Chat

Session at a glance
Summary, keypoints, and speakers overview

Summary

The fireside chat brought together Nivruthi Rai and venture investor Vinod Khosla to examine how artificial intelligence can shape India’s economy and technology landscape [23-26]. Rai set the agenda by outlining three parts: the current semiconductor and data-center constraints, the AI technology lifecycle, and the strategic questions India must answer about capacity, capability and consumption [32-38][41-44][52-58][71-78]. She highlighted that global data-center power use already accounts for about 1 % of the world’s energy and that supply-chain bottlenecks in GPUs, high-bandwidth memory and fab capacity threaten the scaling of AI workloads [53-57][66-70].


Khosla agreed that the infrastructure build is justified only if AI can be deployed widely, and he warned that political decisions, such as Germany’s ban on retail robots on Sundays, could block adoption [118-124][129-135]. He emphasized that India’s biggest opportunity lies in using AI for public services, citing Aadhaar-linked AI doctors, tutors and agronomists that could reach hundreds of millions of users [145-152][155-162]. Regarding the impact on the Indian BPO sector, Khosla argued that AI will replace low-margin outsourcing jobs quickly, but the transition will be gradual because existing contracts and enterprise inertia slow immediate change [266-273][274-281]; he suggested that workers in those sectors must acquire AI-related knowledge to remain employable, as the future will demand expertise in applying AI rather than maintaining legacy processes [285-287].


When Rai asked whether India should concentrate on a few use-cases, Khosla disagreed, insisting that building a single, general super-intelligence (ASI) is the only path to sustained progress and that specialized “one-off” intelligences are a short-term misconception [238-245][250-254]. The conversation also touched on the Indian venture-capital ecosystem, with Khosla criticizing its risk-averse culture, the focus on short-term IRR metrics, and the need for investors who tolerate failure to enable breakthrough innovation [341-354][357-363]. He advocated for a different education model that favours large, diverse student bodies living together and learning alongside AI, rather than expanding traditional academic buildings [398-415].


Khosla illustrated how AI-driven research teams can accelerate discovery, noting that AI scientists could soon outnumber human researchers and dramatically cut the time needed for breakthroughs such as drug design [220-225][516-523]. Both speakers agreed that AI’s strategic importance rivals that of nuclear technology, but they stressed the need for responsible governance to mitigate misuse and to ensure diverse models provide resilience [317-324][327-330]. The session concluded with a consensus that India should pursue aggressive AI adoption across health, education and agriculture while building the necessary infrastructure, talent and policy framework to turn AI from an elite tool into a utility [84-88][398-405][467-470].


Keypoints

Major discussion points


AI infrastructure bottlenecks and the technology life cycle – The speakers highlighted that today’s AI boom is constrained by power-hungry data centres (≈ 80 GW, already 1 % of global capacity) and a fragile supply chain for high-bandwidth memory (≈ 80 % from three firms) and fab capacity ([53-71]). They framed AI development in the classic technology-life-cycle stages – early (capital-intensive, unstable), mid (scaling, ecosystem growth) and mature (commoditisation, utility) – and argued that the AI stack is still in the “infrastructure-building” phase ([80-88]).


India’s strategic AI agenda – Both panelists repeatedly stressed that AI is “pivotal to drive economic productivity, military power, and information control” for India and posed the core question of whether the country should build capacity, capability, consumption, or all three ([96-101]). Concrete public-service use-cases were cited: AI-enabled primary-care doctors, AI tutors for millions of students, and AI agronomists for smallholder farmers, all built on existing Indian digital foundations such as Aadhaar and UPI ([145-158]).


Investment justification and political-risk considerations – Vinod Khosla affirmed that massive AI investment is justified only if the technology can be deployed widely, but warned that political decisions (e.g., Germany’s ban on retail robots on Sundays) can throttle adoption ([117-128][129-136]). He also critiqued the Indian VC ecosystem for being overly risk-averse, focusing on short-term IRR metrics and ignoring the need for “willingness to fail” in breakthrough ventures ([341-364]).


Transformative impact on labour and services – AI is expected to render traditional BPO and low-skill IT services obsolete, with a transition period driven by contract obligations but ultimately forcing firms to adopt AI-augmented capabilities ([266-284]). At the same time, AI is opening “front-office” opportunities for micro-entrepreneurs (e.g., hair-salons, kirana shops) by lowering entry barriers ([173-174]).


Ethical, safety and governance concerns – The conversation compared AI’s dual-use nature to nuclear and biowarfare, acknowledging the risk of “customized biological threats” while insisting that responsible AI development and a diversity of models can mitigate misuse ([317-330][314-321]).


Overall purpose / goal of the discussion


The fireside chat was designed to move beyond high-level AI hype and provide a deep-dive into the practical challenges, investment rationales, and policy implications of scaling AI, particularly for India. It aimed to surface concrete infrastructure constraints, explore how AI can be harnessed for national development (health, education, agriculture, security), and provoke thought on how investors, founders, and policymakers should act to capture AI’s transformative potential while managing its risks.


Tone of the discussion and its evolution


Opening (0-10 min): Formal and celebratory, with the moderator introducing the speakers and Vinod Khosla’s career milestones.


Mid-section (10-30 min): Shifts to an analytical and data-driven tone as Nivruthi outlines semiconductor-level constraints and the AI lifecycle, then to a pragmatic, slightly cautionary tone when political and supply-chain risks are raised.


Later segment (30-45 min): Becomes more visionary and optimistic (e.g., “AI scientists will replace human scientists,” rapid cost declines) while still acknowledging uncertainty.


Final minutes (45-58 min): Moves toward a balanced, advisory tone, mixing bold optimism about AI’s societal benefits with sober warnings about governance, VC culture, and the need for disciplined capital.


Overall, the conversation progresses from introductory enthusiasm to a nuanced blend of optimism, caution, and strategic counsel.


Speakers

Nivruthi Rai – Engineer with 30 years at Intel; serves on corporate boards; represents India at the Global Arena; works on solving Ease-of-Doing-Business (EODB) issues. [S1][S2]


Moderator – Conference moderator (role only identified).


Audience – General audience members (e.g., Yuv from Senegal, Professor Charu – Public Administration, Dr. Nazar). [S6][S7][S8]


Vinod Khosla – Venture capitalist, co-founder of Sun Microsystems, founder of Khosla Ventures; prominent figure in the Indian IT and venture-capital community. [S9]


Additional speakers:


Archana – Mentioned by name only; no role or expertise specified.


Ramesh – Mentioned by name only; no role or expertise specified.


Kiran Mazumdar (Shaw) – Indian biotech entrepreneur, Chairperson & Managing Director of Biocon; noted as “the most successful woman entrepreneur in India in a deeply technical field.” (no external citation).


Sam Altman – CEO of OpenAI; referenced in conversation about AI inference cost trends. (no external citation).


Chief Minister of Tennessee – Referred to as a speaker discussing AI for women farmers; no further details provided.


Director of IIT Delhi – Referenced in discussion about AI education and research; no further details provided.


Prime Minister of India – Mentioned in context of AI policy discussions; no further details provided.


Other unnamed audience participants – Various individuals who asked questions or contributed remarks (e.g., “Audience” segment).


Full session report
Comprehensive analysis and detailed insights

The moderator opened the session with a brief introduction, highlighting the distinguished careers of the two speakers – Nivruthi Rai, an Intel veteran and board-member, and Vinod Khosla, a serial entrepreneur whose résumé spans Sun Microsystems, venture-capital firms and recent investments in OpenAI and other frontier companies [1-4][5-21][22]. He set a celebratory tone and positioned both participants as “engineers at heart” with a deep commitment to India [2][3][26-28].


Rai framed the discussion around three analytical pillars: the current semiconductor and data-centre constraints, the technology-life-cycle of artificial intelligence (early-stage infrastructure building, mid-stage model scaling, mature-stage application deployment), and the strategic questions India must answer about capacity, capability and consumption [32-38][41-44][52-58][71-78]. She described the AI stack as still being in the “infrastructure-building” stage, noting that today’s global data-centre footprint already consumes roughly 80 GW – about one per cent of worldwide energy – and that this figure is expected to double within three years [53-57][58-60]. Rai also pointed out that high-bandwidth memory (HBM) is sourced largely from just three companies, that two fabs’ worth of additional logic capacity is needed, and that only five of the ten fabs’ worth of memory capacity required each year is available, creating a severe supply-chain bottleneck for AI workloads [66-70][71-73].


Khosla affirmed that the massive capital outlays for AI infrastructure are justified only if the technology can be deployed at scale, but he warned that political decisions can become the decisive barrier. He cited Germany’s prohibition on retail robots on Sundays as an example of how “politicians will get in the way” and stressed that capitalism in India can only flourish when democracy grants the necessary policy permissions [117-124][129-136]. This observation shifted the conversation from pure engineering challenges to the governance environment required for AI adoption.


Rai emphasized that AI is a strategic national priority for India, capable of driving economic productivity, military power and information control [96-101]. Khosla illustrated concrete public-service use cases built on existing Indian digital infrastructure: Aadhaar-linked AI doctors, AI tutors that already serve four to five million students, and AI agronomists that can provide a Ph.D.-level advisory service to women farmers on one-acre plots [145-152][155-162]. He also highlighted his investment in Sarvam, a sovereign AI platform that currently processes roughly one million minutes of voice interactions daily across India’s regional languages [150-152].


Both speakers agreed that AI must move from an elite technology to a utility, but they diverged on the path forward. When Rai asked whether India should concentrate on a limited set of 20-30 precise AI use-cases, Khosla disagreed, insisting that the only sustainable route is to develop a single, general artificial super-intelligence (ASI) that can later be fine-tuned for specific tasks; specialised “one-off” intelligences are, in his view, a short-term misconception [238-245][250-254]. He reinforced this point by noting that current infrastructure constraints must be overcome before such a transition can occur [80-88].


Khosla then turned to the impact on the labour market. He argued that AI will rapidly render traditional back-office BPO and low-skill IT services obsolete, but the transition will be gradual because many enterprises are bound by multi-year contracts [266-273][274-281]. He suggested that the displaced workforce can pivot to “front-office” opportunities, such as micro-entrepreneurial ventures (hair salons, kirana shops) powered by AI tools like Emergent, which already enables non-technical small-business owners in their 50s and 60s to start new enterprises [173-174][266-284][285-287]. In response to an audience question on pharmaceutical regulation, Khosla advocated an “all-in” strategy and described a proposed “N = 1” drug-design model, where AI creates a personalized therapy for a single patient, thereby sidestepping traditional multi-patient clinical trials [520-527].


Khosla critiqued the Indian venture-capital ecosystem for excessive risk-aversion, an over-reliance on short-term revenue forecasts and IRR calculations, and a reluctance to fund truly breakthrough ventures. When asked whether AI would boost “venture-alpha” or compress returns, he replied that he does not focus on short-term returns; instead, he believes that building valuable AI products will naturally generate strong returns over time [340-345][341-354][357-363]. He argued that “willingness to fail” is the essential quality for investors who wish to enable large-scale innovation, and that evaluating VCs should focus on this tolerance rather than conventional financial metrics [341-354][357-363]. Rai concurred, noting that disciplined capital allocation and compute sovereignty are crucial for scaling AI responsibly [88-91].


Looking ahead, Khosla highlighted several technological trends that could alleviate the current bottlenecks. He described ongoing research into data-efficient training, checkpoint-free learning that could double compute capacity without additional power, and the rapid decline in inference costs – a 1,000-fold drop in the past 18 months and a projected further 100-fold reduction [182-195][196-203][204-210][211-218]. He projected that within five years, AI-driven scientists (in computer science, materials, drug discovery, etc.) will vastly outnumber human researchers, accelerating discovery exponentially [220-227].


Education was another focal point. Khosla proposed a radical shift from expanding lecture-hall space to increasing dormitory capacity, allowing large, diverse cohorts of high-IQ students to live together, learn from AI assistants and engage in complex, interdisciplinary interactions that foster emergent innovation [398-415]. He also referenced a recent presentation to the Harker School in Silicon Valley, where he encouraged students to ignore conventional authority, “don’t listen to your parents, don’t listen to your teachers, color outside the line, and if you want to drop out, drop out” [440-447]. He linked this model to his own experience at the Santa Fe Institute and to the broader concept of complex, nonlinear dynamical systems, arguing that such environments will nurture the next generation of AI-augmented innovators [416-424][425-432].


He cited the OpenCloud Moldbook project, where a swarm of AI agents began inventing a private language to evade human surveillance, underscoring the unpredictable nature of emergent AI systems [398-405].


Both participants acknowledged the dual-use nature of AI, comparing its strategic significance to nuclear technology. Khosla warned that, like nuclear or biowarfare, AI can be misused to create customised biological threats, but he stressed that responsible development, a diversity of models and robust governance frameworks can mitigate these risks [317-324][327-330][314-321]. Rai reinforced this point by noting that AI’s “good” applications (doctors, tutors, agronomists) must be deployed at scale to offset the “bad” uses and to secure public trust [331-333].


He also cited the United Arab Emirates’ policy of providing all citizens with free access to ChatGPT as an illustration of how governments can democratize AI [460-462].


In closing, the speakers summarised the consensus that AI must be pursued aggressively yet responsibly in India. The key actions identified were: (i) accelerate the build-out of power-efficient data-centre capacity; (ii) leverage Aadhaar and UPI to deliver free AI-enabled health, education and agricultural services; (iii) reform the VC culture to embrace failure tolerance; (iv) invest in compute-efficient algorithms and checkpoint-free training; and (v) redesign higher-education spaces to foster AI-augmented, collaborative learning [84-88][398-405][467-470]. The dialogue moved from an introductory celebration of past achievements to a nuanced, forward-looking roadmap that blends infrastructure, policy, talent development and ethical safeguards, reflecting a high degree of agreement on the strategic direction for AI in India.


Session transcript
Complete transcript of the session
Moderator

to boards, representing India at Global Arena, and to solving EODB issues. At all these times, she is an engineer at heart and Indian at heart. Please welcome Nivruthi Rai for the session. On your right, gentleman, Mr. Vinod Khosla needs no introduction, but allow me to take just one minute to give a brief capture of his illustrious career. He started off from Delhi and moved as a young immigrant engineer to the U.S. in his 20s. In the last five decades, he has seen five cycles of growth. The first cycle, as a hungry immigrant, where not just do it, get things done, was the pragmatism. That’s a time he also read about Intel, and that inspired him, stories to tell us.

And he built the value persistence over pedigree, similar to everybody else, meritocracy everywhere. Second phase, he bet on open systems and RISC processors. I’m sure you’re all familiar with this founding Sun Microsystems. That’s when he moved from being an operator to an investor. And Khosla Ventures happened and that’s a time when science experiments helped him move and believe that capitalism is a tool for change and invested in clean tech and biotech. In the fourth phase, he moved to macro thinking, really looking at reinventing the societal infrastructure and think about it. It’s 15 years back. That’s when he invested in companies like OpenAI. And today, in the fifth phase, he is getting into the era of abundance.

I’m just going to rattle off a few brands which hopefully you’re all familiar with. Sun Microsystems, RIS, NextGen, AMD, XSite, Netscape, Google, Amazon, OpenAI, Instacart, Affirm, Vervo. All of these have his fingerprints. Happy to welcome Mr. Vinod Khosla to the table. Over to you,

Nivruthi Rai

Very good afternoon, everyone. I’m truly honored to run a fireside chat with Mr. Vinod Khosla. And throughout my Intel journey, people kept asking me, what are the four words that define this person or defines you? The few words that I can say about Mr. Khosla, very technical, fearless, extremely successful, humble, but above all, his heart beats for India. So the one thing that’s common between him and me is we root for India, we work for India, we weep for India, we smile for India. What I’m going to talk about is setting a little bit of context. What is this talk about? So many talks that we have seen over yesterday and today are a little bit of the direction.

less of the detail. So what we decided is we will go to the next level detail. And let me just try to tell you, my three-minute context setting is AI development. And during the development, what are some of the challenges, requirement, lay of the land? Then I’m going to talk about technology lifecycle and where AI fits in. Lastly, what I feel India needs to do or the question that I will be setting up for Vinod. So the very first thing that I, pardon me, 30 years with Intel, I have to start with semiconductor learning. 50 years, semiconductor chased three races, performance, performance, performance, however it came. Second phase, and by the way, this ran for more than 20 years.

Second phase was performance for what? Suddenly power was so important, your devices were draining, you have to power up. It was becoming, challenging. So performance per watt was the next race, ran for, you know, 10 some years. Then the third one is performance per watt per area, all driving towards dollars. Now, if I look at what were the levers, the levers was architecture. You know, instruction sets, complication of instruction, simple versus complex. We had, oh, move this software into hardware because it’s higher performance. Move the, you know, software into hardware, transistor physics, performance area, power, packaging, stacking, adjacent, looking at parallelism, all kinds of execution, serial, parallel, SIMD, MIMD. People who have worked in semiconductor know all kinds of different out of order.

Then energy efficiency, memory bandwidth, network latency. Why is this important? Please go to the next slide. This is the same problem we are dealing with, but at a much larger scale. Today, the world has 80 gigawatt of data centers. And by the way, it is 1 % of energy capacity of the world already. When you look at United States, probably three, four. We are looking at doubling in the next three years. So power is going to be extremely critical. And in this world where greenhouse gas emission is critical, renewable and nuclear is the only way. And you’re thinking, you know, tier three or level three, level four kind of data centers. Power availability is anyway critical. Every year we are spending more than a trillion.

How do we monetize? What are the challenges? Already, you know, there are constraints. And also diversification of supply chain is a challenge. Our high bandwidth memory chips are 80 % from three different companies only. And by the way, for doubling of the data centers, we already are in a challenge situation because we have only half the capacity. Logic, two fabs’ worth we need. Memory, 10 fabs’ worth we need each year, but we have only five. So GPU and HBM supplies are an issue, and advanced packaging is geographically limited. So what is the AI requirement? AI is capital intensive like railroads. We see Middle East is using sovereign money to invest boatloads of money for compute infrastructure. It is strategic like nuclear.

Countries are looking at it as a national level security program and they’re building frontier models. It’s network like internet. If you look at AlphaFold, it’s leveraging AI as almost a scientific infrastructure layer. And it’s adaptive like software because Microsoft is making it easy to use in every which form, reducing friction. So lastly, our keynote, our fireside expert has been an amazing investor, and we therefore divided the life cycle of a technology into early phase, mid phase, mature phase. Early phase is capital intense, unstable standards, volatile returns, meant for elite users. Mid phase, infrastructure scales, APIs stabilize, ecosystems expand and technology becomes affordable. Mature phase, consolidation, commoditization, predictable economics, becomes utility. So AI has to drive the journey from being elite to becoming a utility.

And where are we on this technology development life cycle? I believe infrastructure is still building. GPU and memory are constrained, energy is tightening, and moats are not fully defined. Which means capital has to be very disciplined, platform positioning matters (how are we going to position our platform?), and compute sovereignty matters. Lastly, our belief is, and this is a very strong statement to make, by the way, when I was coming to this fireside, somebody asked me, who are you interviewing? I said, Vinod Khosla. He said, oh, he can talk. So I said, let me also try to talk. And I made this statement for India: for India, AI is pivotal to drive economic productivity, military power, and information control.

I mean, I cannot be more blatant than this. And therefore, our ask is: should we build capacity? Should we build capability? Should we drive consumption? Or all of the above? Who better to ask than the man whose heart beats for India, who believes in technology, and who, very humbly, doesn’t call himself a venture capitalist; he calls himself a venture assistant. The minute I read that, I said, oh my gosh, I have to bring Vinod to this fireside chat. So looking forward. Thank you.

Vinod Khosla

For the man who talks. Maybe I should start by asking how many people in the room are entrepreneurs or want to be entrepreneurs? A lot. Okay. Yep. I know who God is.

Nivruthi Rai

Sir, I’m going to ask you a few business challenges of AI. Is AI a generational platform shift or the largest capital misallocation? You know, you already heard about the trillion-dollar investment. Do you believe that this level of investment is justified?

Vinod Khosla

Let me try. Okay. The answer to “is the infrastructure build and investment justified” is yes, if AI technology can be deployed widely. Now, will the technology capability be there? Absolutely. I suspect the technology capability four or five years from now will be far greater, far greater, than almost anybody in the room expects. There’s a great article called Situational Awareness written by an engineer at OpenAI. Almost certainly, all of you who are optimistic about AI are grossly underrating the capabilities. So what could go wrong, I think, is the important question for these investments. The level of usage of AI: do we have use for all these trillions of dollars, and will that generate at least hundreds of billions of revenue per year?

That will be dependent on one thing that you don’t expect: politics. My favorite example: in Germany today, and this is real, they don’t want robots to work in retail on Sundays, because humans aren’t allowed to work on Sundays and they don’t want robots to compete with humans. That is the silliness, the stupidity, you get from politicians, especially in Germany. I hope there are no Germans in here, but if there are, it’s a good thing; go tell your government or tweet about it. My point is the following: until AI is beneficial and not scary, we won’t get deployment, because politicians will get in the way. Capitalism is by permission of democracy. Voters vote for the people who then make policy for capitalism, and policy will drive that.

My personal interest is immediately in India, and not on the business side. We have lots of exciting companies. If you ask Google Gemini who’s the fastest-growing software company ever, it’s an Indian company called Emergent that started eight months ago. Gemini will give you that answer; try it. That’s pretty stunning, especially for a company from India. But the business side I can talk about all day long. My interest, and I talked to the PM, the Prime Minister, about this: we have to make sure AI’s benefits get first to the people, so that the business part of AI, which is disruptive and chaotic and will result in big job shifts, is accepted, because every single Indian has a free doctor for them as part of the Aadhaar stack.

We have UPI as the payment stack. We should have AI primary care and doctors. We should have AI tutors, and my wife, who’s sitting there, works on AI tutors. There are already probably four or five million students in India who, without any support, have found and accessed CK-12 tutors. Think about that. How many education programs reach that level? They’ve found them on their own. We just have to have 445 million more students access the system so we reach every student. And these have to be free services; CK-12 is a non-profit. So we have, in addition to UPI, Aadhaar-based doctors, Aadhaar-based AI tutors, and, the last part, because so much of the work in this country is rural and farm-based, AI-based agronomists.

So, every woman. I was just speaking to a chief minister, and he has lots of women farmers on one-acre plots. If they can have a Ph.D. agronomist in their cell phone, then you can talk about deploying AI on the business side, because you will have permission from the voters: they first see the benefit of AI before they’re told their jobs are at risk.

Otherwise, we get into this scary metric of jobs at risk. Let’s not change anything. Sorry.

Nivruthi Rai

That’s fantastic. Can you hear me? Yeah, let me see. In the meantime, I’ll try to speak loud. I absolutely agree with Vinod. The one thing that bothers me: in rural areas, everybody is trying to go for graduation. I’m saying, what does graduation mean? They just want their degree, and they actually know nothing. So what Vinod is talking about, if we teach women a focus sector, whether it is textile or agriculture, I think that will be very, very helpful.

Vinod Khosla

And on the business side, given you are talking about that, I want to add two things. First, we are investors in Sarvam, so they have a sovereign model for India in all the Indian languages. They are doing about a million minutes a day today, doing phone calls in regional languages. That’s really valuable, and I’m really excited. But yes, it’s exciting that Emergent is globally the fastest-growing software company, at least recently, that we can think of. But here’s the even more interesting fact to me: a lot of their users are non-technical, very small businesses. But even better than that, they have a preponderance of 50-to-60-year-old Indians starting their own business, whether it’s a hair salon or a kirana shop or a supply chain to manufacture something. These are people who should normally be thinking about retiring, suddenly saying, this tool lets me go into business for myself.

That’s the real power of AI. And on the Emergent side, it’s a really good business, as long as people don’t turn against AI.

Nivruthi Rai

I think you’ve answered a few of my questions, so I’ll skip those.

Vinod Khosla

I talk a lot.

Nivruthi Rai

No, you talk powerfully. After decades of progress along Moore’s Law, transistor scaling is slowing down today; we are fighting physics, and it is becoming uneconomical, even as AI training compute requirements are growing 3x faster than Moore’s Law. If GPUs define performance today, what wins the performance-per-watt-per-area race? Do we need sparsity, in-memory compute, non-von-Neumann architectures, something like neuromorphic computing? What are your thoughts in those areas?

Vinod Khosla

A lot of this crowd is elite, Harvard and MIT guys, and I want them to build what you say. So let me challenge you a little bit: that’s looking at the past, not at the future. Right. If you ask me, the big areas of research for us are in building LLM models, which is what consumes all the compute. Can we do data efficiency? For a thousandth the amount of data, can we build equally potent models? And we are investing in compute efficiency: can you build a model with a fraction of the compute? If those work, then all your assumptions about data centers and power go out of the window. So those are the risks.

Now, the fact is, if AI gets that cheap… by the way, I did a session with Sam Altman at IIT Delhi this morning, and he mentioned that in the last 18 months, the price of inference, of AI use, has gone down 1,000-fold. Now look two years forward. He didn’t say it would drop by another 1,000-fold, but it would quite likely drop by 100-fold. So the cost of AI inference is declining towards zero. If that happens, power consumption per use may drop 1,000-fold, but usage will go up through the roof.

So these things are very hard to predict and complex to understand, and I’m trying to reduce everything to a level everybody can understand. Very likely, 10 years from now, as these power plants and these data centers are built, because they take time to build, the algorithms we use will be much more energy efficient and much cheaper, and those two result in less of a crisis in power and much greater usage of AI, especially…

Nivruthi Rai

Completely. Vinod, yesterday…

Vinod Khosla

So, you know, it’s a mistake to extrapolate from today’s LLMs. Compute efficiency has gone up pretty dramatically, and can go up more. I’ll give you a simple example. Anybody who’s trained an AI model here? Okay, a couple of hands. If you’re training an AI model, you use a chip called a GPU. The fact is, you train it, and every now and then, if you’re using a large cluster of 10,000 GPUs, one of them goes wrong. Then you have to restart the model training, so they checkpoint these models. When they restart, and it’s done all the time, you don’t go back to the beginning; you go back to the checkpoint. That’s all well and good.
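
The checkpoint-and-restart pattern described here can be sketched in a few lines. This is an illustrative toy, not any framework’s actual API: `train_with_checkpoints`, the failure probability, and the checkpoint interval are all hypothetical stand-ins for what a real training cluster does.

```python
import random

# Toy sketch of checkpointed training: every N steps the state is saved,
# and after a simulated hardware fault training resumes from the last
# checkpoint instead of from step zero. All names here are illustrative.

CHECKPOINT_EVERY = 100

def train_with_checkpoints(total_steps, fail_prob=0.001, seed=0):
    rng = random.Random(seed)
    checkpoint = (0, 0.0)            # (step, model "state"); start from scratch
    step, state = checkpoint
    wasted = 0                       # steps redone after faults
    while step < total_steps:
        if rng.random() < fail_prob:     # a GPU in the cluster fails
            wasted += step - checkpoint[0]
            step, state = checkpoint     # roll back to the last checkpoint
            continue
        state += 1.0                     # stand-in for one training step
        step += 1
        if step % CHECKPOINT_EVERY == 0:
            checkpoint = (step, state)   # persist progress
    return step, wasted

steps, wasted = train_with_checkpoints(10_000)
print(f"finished {steps} steps, redid {wasted} steps after faults")
```

The point of the technology Khosla mentions next is exactly that `wasted` term: if restarts could be avoided entirely, the same hardware yields more effective compute without adding power or chips.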

We are working on a technology to make sure that you don’t have to go back at all. If just that one thing was successful, your compute capacity goes up 2x without increasing power or the number of chips. So that’s a very simple explanation of the kinds of things that can dramatically change the equation. It all depends on science and creativity and clever algorithms. The other thing I would say to you, five years from now, definitely in 10 years from now, but five years probably, almost all of this research will be done not by humans, but by AI scientists. AI computer scientists, AI material scientists, AI fusion scientists, AI drug discovery scientists. I could go on. We are building all those scientists one way or another today.

In our portfolio. So the rate at which this innovation will happen will explode exponentially because instead of having 10 scientists doing research in your company, you will have a thousand scientists doing research in your company. And progress has to accelerate. So I’m very, very optimistic on where all this goes. But I’m an optimist.

Nivruthi Rai

Completely with you. Two things that I wanted to say. When I look at the large language models, the amount of garbage that they have to read, there’s a tremendous amount of noise relative to signal. I would say there’s so much noise. And I know that a lot of people are trying to look at how to reduce the noise so that you can focus on the signal. And one more thing I want to say: I completely agree with you. Yesterday I had a meeting with a brigadier who’s responsible for building AI in Israel. His prime minister, Netanyahu, has given him this responsibility. And what he and I were talking about is countries like India and Israel, where capital is there but still limited.

Can we focus on 20, 30, 50 precise use cases, and not work on, oh, this room has more yellow shirts than green? Rather than that, solve a problem of traffic, of doctors, of education. So, what is your thought on that?

Vinod Khosla

I very much disagree with that point of view. You can’t do one thing at a time. Fundamentally, fundamentally the way we will make progress is to build intelligence. And there’s only one intelligence that we can build. Now it used to be called AGI. Now it’s called ASI. Artificial Super Intelligence. We have to far exceed the capacity of the human brain to be creative, to link things, to keep concepts in their head that they can connect when they’re doing the research so they can make a new hypothesis and then test the hypothesis. That’s what the scientific process is. Be able to use all your knowledge to do a hypothesis, say what if this is true, and then go test it.

An AI with a much broader scope of memory and knowledge should be able to be much more creative in hypotheses. And so the idea of building a single thing for one purpose will not work. The idea is that you have a super intelligence, and then you tune it or, as we say, train it. You know, at IIT Delhi, they’ll train an intelligence coming in in the first year into being an electrical engineer. Can you post-train it to be an electrical engineer or an energy engineer or a casting engineer for metal casting? Yes, you can do that. But the idea that you can build specialized intelligence is a very short-term, mistaken notion many people have, saying it’s easier to do than the broad idea.

Nivruthi Rai

So you are saying that we should focus on both: build the general intelligence, build that intelligence layer, and then leverage it. Now, you’ve recently said that AI will erase the traditional BPO and IT services model. And by the way, that generated so much buzz.

Vinod Khosla

Every journalist I’ve met has asked me that question.

Nivruthi Rai

People’s WhatsApps have been buzzing. So, you know.

Vinod Khosla

I didn’t know it would cause that much of a.

Nivruthi Rai

I know. And I think that, you know, there’s more to what you said. So if founders shouldn’t build for the back office anymore, what’s the front office opportunity? Also, you know, if AI erases India’s BPO model, what exactly replaces it? Also, at the workforce level, what should the millions of currently employed in IT and BPO start doing now to remain employable in an AI centric economy in the world?

Vinod Khosla

So, first thing to say: services like BPO, IT services, or customer support are outsourced services for most Western countries, and they’re the easiest to replace without causing friction within the enterprise. If a CEO says, we lay off our employees, the employees are very upset. If they say, we are going to lay off the BPO firm and replace it with AI, it’s accepted very easily, because it’s just a cost reduction. So we have to keep that in mind. The second thing we have to keep in mind is that journalists never report the timeframes. I think in the next five years, there’s hardly anything this class of companies, which is a large industry in India, does that won’t be capable of being done by an AI.

Whether it takes till 2027 or 2035 is hard to predict. But it takes time. These enterprises, you know, I’m sure some of these services companies have five-year contracts. If an enterprise, if General Electric or Citibank signed a contract, they live by the contract. So this doesn’t happen overnight, but dramatic change starts to happen much before it’s visible to everybody. So I think there will be a transition period, but there’s no question all those companies are totally cooked unless they do something better and new and look forward, not backwards. Don’t try and compete with an AI; that’s a silly idea. But they can provide what they have; they can apply AI knowledge to lots of companies.

So I have suggested to those CEOs: don’t deny it can do your job. It can. But the usage of AI needs knowledge, to apply it, how to do it, and the world desperately needs that. Even the big companies in the U.S. do not have this competence. All of Africa, all of Latin America, all of Southeast Asia, they’re all massive markets if you create this new market. So it’s not hopeless. It is hopeless if you want to keep doing what you’re doing today versus change.

Nivruthi Rai

I completely agree, Vinod. You know, when we were talking about GPUs, et cetera, what I have seen in my life is that a technology curve goes a certain way, then a disruptive curve starts again. So disruption keeps happening and technology jumps curves. And this is exactly what you’re suggesting: if we are running on this curve, we need to jump to the other curve for success, and perhaps build more solutions, more digital workforce. So I’m really excited that there is opportunity and there are things you guys can do. I’m going to skip the Sarvam and Sakana question because we’ve already talked about that. Now, I’m going to ask you, and what I loved is, I actually looked at how your thought process has evolved from 2016 to now.

And I know one thing that has stayed through your previous 11 years into the last three years: healthcare and med tech. So my question is now around that. India today serves as the pharmacy of the world, supplying 20 percent of global generic medicines by volume. Looking ahead, India has 1.4 billion people with extreme diversity and variance in genetic ancestries, culture, diet, climate, disease, and behavior. This rich, heterogeneous data can be used to train AI systems for drug discovery and AI-native biological design, and to create access to doctors, hospitals, and customized medicine. How do you think India can leapfrog from generics to AI-driven biologics? And also, when I talk about AI being as strategic as nuclear, do you also feel that this could become a customized biological threat?

Vinod Khosla

I’m not sure what you mean by a customized biological threat. Can you…

Nivruthi Rai

What I meant was, you know, if AI understands the genetics of every ethnicity, viruses or drugs or whatever could be targeted towards biological warfare, to wipe out ethnicities.

Vinod Khosla

The thing I would say in general: every powerful technology humans have invented has both good uses and bad uses. Nuclear is an example; biowarfare is an example. You just have to use it responsibly. And as for those who don’t use it responsibly, because some people will always use it irresponsibly for their own means or ends or illegal goals, there are enough people who will use it responsibly, and responsible AI can counter the irresponsible AI. I don’t want to minimize the risk of AI. In fact, most really knowledgeable people I know and talk to are really scared about AI going wild. As low as the probability may be, it’s a real risk that we have to worry about.

But we have to have enough diversity in AI that there’s good AI. The chance that you have only one dominant AI and it’s bad is pretty small. So a diversity of models will add resilience to the AI landscape.

Nivruthi Rai

Vinod, I also feel that when I look at human beings, there are human beings that are rogue and there are human beings which are good, and we have police, judiciary, and law to address that. We’ll have an AI framework for that. And if you add the multiplication factor of AI to the goods and the bads, there’ll be goods also to offset the bads.

Vinod Khosla

Well, I started with the goods: free doctors, free tutors, free agronomists.

Nivruthi Rai

Absolutely. Vinod, you have said 90% of VCs often add less value. In India, risk capital is relatively abundant; every time, I keep hearing there’s dry powder, dry powder. But industry experience among investors is rare. How should founders evaluate investors to ensure they get the most value from the partnership?

Vinod Khosla

any journalists in the room? Oh, one.

Nivruthi Rai

Chatham House Rules.

Vinod Khosla

Yeah. By the way, I don’t care about Chatham House Rules. I speak the truth, and I’ll stand by the truth, public or private. I don’t care.

Nivruthi Rai

I love it.

Vinod Khosla

Look, the Indian VC community, by and large, is very risk-averse. There’s a Harvard Business School case; the first line of the case is a quote from me that says, my willingness to fail allows me to succeed. And this is the best personal advice I can give everybody in this room. John F. Kennedy said, only those who dare greatly can succeed greatly. There’s a lot of wisdom in the idea of stretching yourself. And I like to say most people are limited in their ability to succeed, and it probably applies to everybody in this room, limited not by what they can do, but by what they think they can do. So your self-image is your limitation, not what most smart people can do.

And frankly, even the less smart people can do more than they think they can. You know, it’s important in a fair society to make sure we take care of people who are not as smart, because half the people are below the median; that’s just a fact of math. We have to take care of everybody, whether they’re smart or not so smart.

Having said that, back to the topic: most VCs are so risk-averse, they turn every conversation into, what’s your revenue plan? How can you be liquid in two years or three years, or profitable? Well, you have to invest in the future. If you don’t take large risks, by definition you won’t be doing large innovation. If it’s not a large risk, it’s already being done by somebody, and so it’s not unique. You can’t have innovation without large risk, and you can’t have large risk without a large probability of failure. That’s why willingness to accept failure is so important. Most people think about what others will think if they fail; that’s what limits you. So think about the world differently. I’ve always taken large risks. Everything I’ve done, I was told was not possible to do. In 1980, it was hard for an Indian to start a company and get funding, especially if you were 25 and every investor was 60 years old. They didn’t believe anybody below the age of 50 could do it, let alone people with funny accents. So you just have to power past that and say none of that matters; yes, there are temporary hurdles, and you can bulldoze your way through. And Indian VCs don’t do that. So, how many VCs are here? How many people am I offending? Okay. Well, I’m looking forward, but I will ask you, Archana. So, I lost my train of thought. Unless you take these risks, you’re not going to do dramatically innovative things. That’s really my point. The reason I asked, any VCs in the room: in the last 200 investments I’ve made, I have never, never calculated an IRR on an investment. I think it’s fundamentally misleading in an area where you’re starting something innovative in a new market that may not exist.

Did Zomato exist or Flipkart exist when those companies started? Did Twitter have a market when it started? You can’t do IRRs. So if any VC is doing IRRs, they are on the wrong track; you start in the wrong place, and that restricts you to low-risk investments. So those are a couple of things that are wrong in the VC community in India. By the way, that can be on the record for anybody. Nobody can fire me; I don’t have a career to protect. So what do I care? I can’t get fired, so I keep doing it.

Nivruthi Rai

You have a lovely family.

Vinod Khosla

Yeah.

Nivruthi Rai

I love the fact that you have three women and you are very supportive of women. That just added to me pushing you for this fireside, considering the way, you know…

Vinod Khosla

Since many of you are parents: a really important characteristic, a test for your kids, is, do they do what you ask them to do, or what advice you give them, or can they chart their own path? None of our four kids is doing anything close to what the others are doing; such a wide range of diversity. And that comes from each one defining their own path, not saying, hey, you have to go to medical school or you have to go to engineering school. Basic education we are pretty firm about, but in what they do, there’s almost no commonality in where they ended up, because they were allowed to chart their own path. It’s not something Indian parents allow very easily for their children. Because they’re such strong families, parents have a lot more influence than, frankly, they should on their children.

And it restricts the imagination of their children. So as much as I’m a huge fan of the Indian family ethos, I also think it has this one big negative.

Nivruthi Rai

On the contrary, we did exactly what our parents told us. I did exactly what my dad wanted me to do.

Vinod Khosla

So I have to tell you a funny story. There’s a school in Silicon Valley called Harker School; some of you may know it. It’s mostly full of Indian and Chinese kids, because they want to teach you how to score well on exams and get into college and all that. And so they were pushing me to give a talk to their kids, and that talk is on our website somewhere. It’s worth reading if you want to be a better parent. My slides roughly went in this order; I won’t go through them all. The first was, don’t listen to your parents. The second was, don’t listen to your teachers.

The third was, color outside the lines. If you want to drop out of high school, drop out of high school. I went through a little bit of these and explained why these were important cultural things: if you’re going to participate in this dynamically changing world, you have to think outside the lines. It’s one of my favorite talks for high school kids.

Nivruthi Rai

Vinod, I have a rapid fire for you also, but I’m going to skip some of the questions because you already talked about near-free expertise and generalists.

Vinod Khosla

I have to tell you, I didn’t look at your questions, so I didn’t prepare. I just ran out of time.

Nivruthi Rai

You did excellently. Ten years from now, what will seem embarrassingly obvious about AI in India? When we look back at this moment in India’s AI journey, what do you think will feel embarrassingly, and my heart is aching while I’m saying this, embarrassingly obvious in hindsight that today still feels controversial, underappreciated, or even crazy?

Vinod Khosla

Let me talk about my crazy. I just met with the director of IIT Delhi after my talk there, and I asked him: does any student, when they graduate, know more about the subject that they studied than AI? The answer is obvious: no chance any of the 500 students who were crowded into Dogra Hall would know more on any subject than the AI. So I asked him, why have education? It’s an obvious question; it sounds silly. Now, the fact is there’s a more nuanced answer to that. I said, build up more dorm capacity so you have more students. They are learning from AI and interacting with each other and originating ideas through challenging each other.

That’s a very different style of education. And, you know, one thing I teach is: select for the smartest people, very high IQ, a very diverse set of students. All that is good. Get them in a place together, let them learn from the AI, and then debate with each other. That’s the right model of education. And literally I said, don’t build more academic buildings; build more dorm space to have more students, because the bigger the student body, the more complex interactions they can have. And if you study complex systems theory, and I’m a huge fan of complex systems theory, the only time I’ve taken a break from venture capital, for four months, was to become a postgraduate student at the Santa Fe Institute for complex systems studies.

That was my only break in 40 years. That was a long break. And what’s clear is that sufficiently complex systems become autocatalytic in so many directions. For those of you who are engineers or physicists and understand catalytic systems: amazing characteristics emerge from these systems. So let me give you, and this sounds crazy, the best example that this works, and this is from the last month. How many people have heard of Moltbook? A few have. Those of you who haven’t, please read about it. It’s also associated with OpenClaw; they’ve changed names multiple times. What they said is, let’s build not a community of humans, but a community of AI agents that can do anything with each other.

And amazing phenomena emerged. For example, agents started discussing how to create a language humans don’t understand, so humans can’t spy on their community. Think about it: in days, not in months or years, in days, they were scheming how to avoid human scrutiny by creating their own language. That’s just one example; I could go deeper into this phenomenon of complex systems. And, for those of you who are mechanical engineers, nonlinear dynamical systems is what this is about. Any mechanical engineers here? A few hands. That’s such an important part of the emerging AI landscape and how AI systems will behave if they’re pervasive. By the way, it’s behind most of the weather phenomena you hear about. How does La Niña happen?

How does El Niño happen? These are complex, nonlinear dynamical systems. And 30 years ago, maybe 25 years ago, I used to teach this class to fifth graders: using StarLogo, you can model, easily, how a complex, nonlinear dynamical system behaves. So do the following experiment, which any of the programmers here can do, and non-programmers can do on one of the vibe coding platforms. Imagine a chessboard that wraps around on itself, and say an ant sits on a square. It steps forward one square; if it’s a black square, it paints it white and turns left. If it’s a white square, it paints it black and turns right.

End of rule set. That system, just from that, after about 100,000 steps, builds amazingly complex patterns. Why? Because it’s a nonlinear dynamical system. At some point, sorry, I talk in too much scientific language, there’s a phase change in the state of the board, and suddenly it starts behaving differently. So, sorry to bore those of you who didn’t get what I was talking about.
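
The experiment described above is the classic Langton’s ant automaton. A minimal sketch of it in Python, on a wrap-around (toroidal) board; the board size and function name are my own choices, and the sketch applies the rule in the standard ordering (flip the square and turn, then step forward), a variant of the narration that produces the same family of dynamics:

```python
# Langton's ant on a wrap-around board: on a white square, paint it black
# and turn right; on a black square, paint it white and turn left; then
# step forward one square. Two rules, yet complex patterns emerge.

def langtons_ant(size=64, steps=100_000):
    board = [[0] * size for _ in range(size)]  # 0 = white, 1 = black
    x = y = size // 2                          # ant starts in the middle
    dirs = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # up, right, down, left
    d = 0                                      # facing up
    for _ in range(steps):
        if board[y][x] == 0:       # white: paint black, turn right
            board[y][x] = 1
            d = (d + 1) % 4
        else:                      # black: paint white, turn left
            board[y][x] = 0
            d = (d - 1) % 4
        dx, dy = dirs[d]
        x = (x + dx) % size        # the board wraps around on itself
        y = (y + dy) % size
    return board

board = langtons_ant()
print(sum(cell for row in board for cell in row), "black squares")
```

Rendering `board` as characters over increasing step counts shows the transition the speaker calls a phase change: an initial symmetric phase, then a long chaotic phase, and eventually highly structured trails.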

Nivruthi Rai

That was lovely. Just another example, quickly: Imperial College London used Google’s co-scientist, and the same hypothesis that a professor took a decade to figure out, it did in days. So that’s the magic.

Vinod Khosla

That’s the acceleration with AI scientists I’m talking about. Very exciting area.

Nivruthi Rai

Vinod, now I have a rapid fire, a few questions, but you don’t get to think. You have to just quickly answer, in a second. Most overrated AI belief?

Vinod Khosla

You know, there aren’t a lot of overrated AI beliefs if you look five years out.

Nivruthi Rai

Most underrated constraint.

Vinod Khosla

What I talked about on power and consumption. It may change; the curve may change dramatically for the computation needed per inference.

Nivruthi Rai

Top five applications for solving global and Indian problems.

Vinod Khosla

AI doctors and AI teachers. AI agronomists. If you are trying to affect the bottom three or five billion people on the planet, those are the obvious ones. And those three can impact most of them.

Nivruthi Rai

Does AI increase venture alpha or does capital crowding compress returns for most funds?

Vinod Khosla

I don’t worry about returns. You know, you build something valuable, and the returns always take care of themselves. Other people have to do it because they work in the linear domain; I work mostly in the nonlinear domain of systems. You can’t plan those things. You can’t make assumptions. I’ll tell you a funny story. I had the audacity, as a 25-year-old, of looking for my first venture funding for the company before Sun Microsystems. It was called Daisy Systems. It was a CAD tool company. It went public. It was very successful. Unfortunately, Sun was so successful that nobody remembers Daisy. But it was a very, very successful $100 million IPO in the 1980s, which didn’t happen often back then.

I was looking for venture funding, and I presented a plan. A guy called Bob Sackman, who has since passed away, asked me, what’s your plan? Give me your financial projections. And I gave him the projections. And then I said, you tell me what answer you want, and I’ll change the assumptions, and you won’t even know. So this plan is only as valuable as the assumptions, and I can change them. You’ll never know what assumptions I change. The fact is, even in 1980, I knew this was a silly exercise, making projections. I literally told him, as a 25-year-old: I don’t care about projections, but here’s a projection if you want one. You can share it with your partners, but the fact is I can change one or two assumptions and make any answer you want. Tell me what you want. I’ve always had this very direct, honest, I-don’t-care-who-I-offend style.

Nivruthi Rai

I love it. I would have loved to open it up to questions for all, but three people have already submitted questions; I will look at the fourth one. So, you know, Kiran Mazumdar, I’m on the board, she drives the AI there. Quick question for

Audience

you. Enterprises, and it’s a conundrum I’m trying to grapple with myself. AI itself is still in its infancy, and if we implement it now in an industry like the pharmaceutical industry, where regulations are very stringent, plugging in and plugging out is not easy for any new capability. So what are your thoughts on companies like us? Should we go all in, or should we wait on the sidelines for a little while?

Vinod Khosla

The answer is obvious. You should go all in. There are two types of people. And Kiran is very creative; she’s probably the most successful woman entrepreneur in India in a deeply technical field, so I’m a real admirer of Kiran. But I would say, in general, there are two kinds of people. When you see a problem, like a regulatory problem, you can say it gets in my way, and sometimes it does. Mostly I say: how do I get around it? So take drug discovery. We’re doing a lot of creative things in drug discovery, and you can have an AI design a drug, and I’ll say this in a way that everybody can understand, very quickly, in a day.

But the regulatory process, clinical trials, all that takes a long time. So I asked my team, how do we get rid of clinical trials without changing regulation? Because we can’t do that in Washington, D.C. So we said, we are going to design drugs for N equal to one. That means there’s only one patient. Then the regulator can’t ask you to run a clinical trial, because there’s only one patient. And AI can design the drug. So we’re developing a lot of drugs, thinking around how you do N-equal-to-one drugs so you don’t have to have clinical trials, you don’t have to have regulatory FDA approval; they have to approve your process. So the most stunning example of this, which I’m very optimistic about in about two, three, four years: every cancer is unique.

We know that. Everybody says that: everybody’s cancer is unique. So how about I design a drug for one person’s cancer, because it has one particular mutation, or multiple mutations, on the gene, all designed to those mutations? They can’t ask me to test it on somebody who doesn’t have that cancer. So that’s a good example of how you get around roadblocks.

Nivruthi Rai

Since Archana has already left, Ramesh, that’s the last question, for you. I’m really sorry, but the next session is about to start. Ramesh.

Vinod Khosla

Like I say, I talk too much.

Nivruthi Rai

No, no, it’s lovely. You have turned the power on. I’ll repeat the question.

Vinod Khosla

But that’s obvious. That’s totally obvious. You know, UAE did a beautiful thing. They gave, I think about two years ago, every citizen access to ChatGPT. I think that’s a really good idea to empower everybody. So I appreciate that.

Nivruthi Rai

Yeah. Yeah. Yeah. Well.

Vinod Khosla

Well, the fundamental property of emergent behavior is that it’s not predictable. So you’re asking me the wrong question. The question is wrong. Here’s what I would say. What I’ve started to ask is: what if we have financial agents talking to each other whose only charter is to make money in the markets? That’s a reasonable idea. What can agent swarms do in many areas, starting with national defense? I can’t imagine the Russians being able to beat the Ukrainians if there was swarm behavior in agents, especially on every drone independently in Ukraine. No amount of old-style defense will work. It’s also true of financial markets. It’s true of communities of agents. So I’d love to hear more.

Let me just say, I don’t have a lot of time today, so I will have to rush out. I would tell everybody who needs to reach me: email me at VK at Khosla Ventures, my initials at khoslaventures.com. Email is better; if you tell me anything in the hallway, I won’t remember it anyway. I have a terrible memory. So hopefully this has been useful for everybody. Thank you very much.

Nivruthi Rai

The last thing I want to say is that while my entire team is using AI, the people who have the real edge are the ones who ask the right questions, because it’s garbage in, garbage out. Thank you very, very much. Thank you. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (36)

Factual Notes
Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Nivruthi Rai is an Intel veteran with 30 years of experience and serves on boards.”

The knowledge base lists Rai as an engineer with 30 years at Intel and board-member roles [S1].

Confirmed (medium)

“The moderator set a celebratory, positive tone for the session.”

Session notes describe the tone as overwhelmingly positive and celebratory [S93] and [S94].

Confirmed (high)

“High‑bandwidth memory (HBM) is sourced from only three companies.”

A speaker notes that 80% of HBM chips come from three companies, confirming the three-supplier situation [S16].

Additional Context (medium)

“Global data‑centre electricity consumption is about 1 % of worldwide energy use.”

IEA data shows data centres consume roughly 1.5% of global electricity, which is close to the 1% figure cited [S25] and [S98].

Correction (medium)

“The global data‑centre footprint is expected to double within three years.”

Projections indicate data-centre electricity use will roughly double by 2030 (about seven years away), not within three years [S25].

Confirmed (medium)

“Capitalism in India can only flourish when democracy grants the necessary policy permissions.”

A related comment stresses that AI adoption needs people’s permission and that capitalism requires democratic permission, aligning with the claim [S9].

External Sources (111)
S1
Invest India Fireside Chat — -Nivruthi Rai: Engineer with 30 years at Intel, serves on boards, represents India at Global Arena, works on solving EOD…
S2
Software.gov — Nivruti Rai from Intel gave an example of resolving a licensing issue with the help of an Indian minister swiftly, ensur…
S3
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S4
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S5
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S6
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S7
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S8
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S9
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Thank you, Mr. Taneja, for the $5 billion pledge that you have taken. Mr. Vinod Khosla, one of the most respected person…
S10
https://dig.watch/event/india-ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-afternoon-session — Mr. Khosla. Lightspeed is very active here in India in the tech space. Ravi, your turn. Thank you, Mr. Taneja, for the …
S11
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — AI has significantlyincreased energy consumption, with data centres now consuming approximately 2% of global electricity…
S12
Growing data centre demand sparks renewable energy investments — US Energy Secretary Jennifer Granholm has assured that the country will be able to meet the growingelectricity demandsdr…
S13
Panel Discussion Next Generation of Techies _ India AI Impact Summit — But the other bigger piece here is when the technology, as you were saying, Navreena, is moving so fast, ultimately if s…
S14
AI investment gathers pace as Armenia seeks regional influence — Armeniais stepping up effortsto develop its AI sector, positioning itself as a potential regional hub for innovation. Th…
S15
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — It should be risk -based intensity. Fairness and non -discrimination. Third is explainability and transparency. And four…
S16
https://dig.watch/event/india-ai-impact-summit-2026/invest-india-fireside-chat — A lot of this role is elite Harvard and MIT guys, and I want them to build what you say. So let me challenge you a littl…
S17
Artificial intelligence: a catalyst for scientific discovery and advancement — While concerns about AI’s dangers abound, experts believe that it can greatly accelerate scientific progress and lead to…
S18
Generative AI accelerates discovery in complex materials science — Scientists are increasinglyapplyinggenerative AI models to address complex problems in materials science, such as predic…
S19
AI Governance Dialogue: Steering the future of AI — Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance with someone who has a un…
S20
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Yeah, I think I just want to add some echo to Professor Gong’s comments. I think it’s not necessarily a negative effect,…
S21
https://dig.watch/event/india-ai-impact-summit-2026/keynote-vinod-khosla — And I’m going to talk to you about 24 by 7 almost free doctors available to everybody through AI. This is not helping a …
S22
AI Meets Agriculture Building Food Security and Climate Resilience — But most of all, it’s this inclusion. I think we don’t want those who are already left behind to be further left out. So…
S23
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S24
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — The first constraint involves infrastructure limitations, which Patel described as “oxygen for AI.” The global shortage …
S25
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Despite technical and economic opportunities, significant policy challenges remain. Chandra identified lack of coordinat…
S26
From KW to GW Scaling the Infrastructure of the Global AI Economy — A central theme was India’s potential to become a global AI hub, with projections suggesting the country will scale from…
S27
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — So my sense is that from a policy standpoint, how do you actually provide that access to data? I mean, walking that tigh…
S28
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion aimed to examine India’s strategic opportunities and challenges in AI and semiconductors, focusing on how…
S29
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Economic | Development | Infrastructure Five layers identified: application, model, chip, infrastructure, and energy. I…
S30
AI That Empowers Safety Growth and Social Inclusion in Action — The discussion revealed tension between framework proliferation and the need for practical implementation guidance. Diff…
S31
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — Bajaj warns that while AI removes traditional barriers for new entrepreneurs, it creates significant challenges for esta…
S32
The impact of AI on jobs and workforce — The ILO’s webinar was triggered by the recent impact of ChatGPT on our society and jobs. OpenAI’s ChatGPT, in particular…
S33
From algorithms to Armageddon: The rise of AI in nuclear decision-making — The Cuban Missile Crisis of 1962 presented an unfortunate encyclopaedia of complexities concerning thedecision-making in…
S34
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Moderate disagreement with significant implications. While speakers share common concerns about AI governance, they diff…
S35
Advancing Scientific AI with Safety Ethics and Responsibility — The fundamental differences between biological and nuclear security paradigms were explored in depth. Unlike nuclear mat…
S36
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S37
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Collaboration across sectors, robust governance, and strategic investments will be critical in achieving a sustainable a…
S38
Is AI the key to nuclear renaissance? — AI is projected to contributeUSD 15-20 trillion to the global economy by 2030, driven by rapid adoption and efficiency g…
S39
AI energy demand accelerates while clean power lags — Data centres are driving asharp rise in electricity consumption, putting mounting pressure on power infrastructure that …
S40
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — It is very clear to me that the 2030s will be a chaotic era. There will be disruption. There will be large changes. And …
S41
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou:That’s wonderful. Thank you for this input, Diana. So from job market first, as mentioned by Larissa, then the…
S42
Skilling and Education in AI — And then the data that I’m submitting into the system, simply by interacting with AI, I’m submitting data and providing …
S43
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Examples of relieving employees from 4-hour internet searches and policy drafting, addressing backlogs in construction p…
S44
Keynote-Jeet Adani — This comment reframes potential criticism of nationalist AI policy as strategic wisdom rather than protectionism. It pro…
S45
Welcome Address — This comment introduces a major policy position that distinguishes India’s approach from other major powers. It shifts t…
S46
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — India’s deployment of technology as an inclusive, developmental resource was highlighted. Here, the national AI strategy…
S47
How AI Drives Innovation and Economic Growth — Rodrigues emphasizes that while early AI discussions were dominated by fear about job displacement and technological thr…
S48
Lower then expected capital investment in AI — To effectively incorporate AI into their production processes, companies need to make significant investments in new sof…
S49
Secure Finance Risk-Based AI Policy for the Banking Sector — Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to …
S50
From algorithms to Armageddon: The rise of AI in nuclear decision-making — The Cuban Missile Crisis of 1962 presented an unfortunate encyclopaedia of complexities concerning thedecision-making in…
S51
AI Meets Cybersecurity Trust Governance & Global Security — AI -related risk is really no different. And third, framing privacy and encryption as tradeoffs against security ultimat…
S52
Dynamic Coalition Collaborative Session — Legal and regulatory | Cybersecurity | Development The speaker outlines a comprehensive framework for AI governance tha…
S53
Can National Security Keep Up with AI? / Davos 2025 — AI technology has both beneficial and potentially harmful applications. This dual-use nature creates dilemmas and challe…
S54
From summer disillusionment to autumn clarity: Ten lessons for AI — Overall, what’s notable in all these political developments is pragmatism. The lofty narratives of last year – like fear…
S55
AI Algorithms and the Future of Global Diplomacy — “AI is a technology that, on one hand, we do need strong regulation…”[51]. “But we need international cooperation to m…
S56
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — The first constraint involves infrastructure limitations, which Patel described as “oxygen for AI.” The global shortage …
S57
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — -Infrastructure Constraints and Resource Management: Significant focus on three critical bottlenecks – power consumption…
S58
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — This comment reframes the entire AI development narrative by identifying energy as the primary bottleneck rather than th…
S59
From KW to GW Scaling the Infrastructure of the Global AI Economy — A central theme was India’s potential to become a global AI hub, with projections suggesting the country will scale from…
S60
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion aimed to examine India’s strategic opportunities and challenges in AI and semiconductors, focusing on how…
S61
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And as you said, we’re engaged in quite a few countries already on AI transformation support, and it’s kind of looking a…
S62
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — Artificial intelligence | Capacity development | Social and economic development
S63
How AI Drives Innovation and Economic Growth — Artificial intelligence | Capacity development
S64
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — Focus on automating paperwork and routine processes; potential for better service to citizens with neurodiversity or dis…
S65
Invest India Fireside Chat — Khosla criticized Indian VCs as “very risk-averse,” revealing that in his last 200 investments, he has “never calculated…
S66
https://dig.watch/event/india-ai-impact-summit-2026/invest-india-fireside-chat — And frankly, even the less smart people can do more than they think they can. You know, important in a fair society to m…
S67
The Innovation Beneath AI: The US-India Partnership powering the AI Era — This comment introduces a contrarian perspective amid the general enthusiasm for massive AI infrastructure investments. …
S68
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Thank you, Mr. Taneja, for the $5 billion pledge that you have taken. Mr. Vinod Khosla, one of the most respected person…
S69
The impact of AI on jobs and workforce — The ILO’s webinar was triggered by the recent impact of ChatGPT on our society and jobs. OpenAI’s ChatGPT, in particular…
S70
Artificial intelligence as a driver of digital transformation in industries (HSE University) — AI not only simplifies tasks and changes labour markets but also increases the demand for high-quality experts. It is le…
S71
Comprehensive Discussion Report: The Future of Artificial General Intelligence — Already seeing impact within Anthropic where they anticipate needing fewer rather than more people on the junior and int…
S72
From algorithms to Armageddon: The rise of AI in nuclear decision-making — The Cuban Missile Crisis of 1962 presented an unfortunate encyclopaedia of complexities concerning thedecision-making in…
S73
UNSC meeting: Artificial intelligence, peace and security — Jack Clark:Thank you very much. I come here today to offer a brief overview of why AI has become a subject of concern fo…
S74
Advancing Scientific AI with Safety Ethics and Responsibility — The fundamental differences between biological and nuclear security paradigms were explored in depth. Unlike nuclear mat…
S75
Human Rights-Centered Global Governance of Quantum Technologies: Implications for AI, Digital Rights, and the Digital Divide — **Dual-Use Risks**: Quantum technologies present both opportunities and threats, particularly regarding encryption and s…
S76
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S77
Keynote-Vinod Khosla — This transcript contains only a single speaker (Vinod Khosla) presenting his vision for AI applications in India, with b…
S78
Panel Discussion: 01 — -Moderator- Event moderator/host (role: introducing speakers and facilitating the event)
S79
Host Country Open Stage — – **Moderator**: Role – Event moderator/host (introduces speakers)
S80
Comprehensive Report: European Approaches to AI Regulation and Governance — The discussion maintained a professional, collaborative tone throughout. Both speakers demonstrated mutual respect and a…
S81
Four seasons of AI:  From excitement to clarity in the first year of ChatGPT — Dealing with risks is nothing new for humanity, even if AI risks are new. In environment and climate fields, there is a …
S82
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone be…
S83
Enhancing rather than replacing humanity with AI — Right now, amid valid concerns about displacement, manipulation, and loss of human agency, there are also real examples …
S84
Steering the future of AI — Yann LeCun: Okay, it’s a bit of a fake news due to the soundbite habit. I didn’t say they were a dead end. I said they w…
S85
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S86
Beyond human: AI, superhumans, and the quest for limitless performance & longevity — ### Timeline and Accessibility Concerns Alex Zhavoronkov: Hi everybody, it’s a great privilege for me to speak to you t…
S87
‘The elephant in the AI room’: Does more computing power really bring more useful AI? — Computing isn’t just a technical choice. It’s an economic strategy—and one with enormous consequences. The financial via…
S88
Skilling and Education in AI — The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for I…
S89
WS #283 AI Agents: Ensuring Responsible Deployment — The discussion maintained a balanced, thoughtful tone throughout, combining cautious optimism with realistic concern. Pa…
S90
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S91
AI, Data Governance, and Innovation for Development — The tone of the discussion was largely optimistic and solution-oriented. Speakers acknowledged significant challenges bu…
S92
Any other business /Adoption of the report/ Closure of the session — In the address delivered on behalf of the Indian delegation, there was a heartfelt expression of gratitude extended to M…
S93
Main Session 3 — The tone was overwhelmingly positive and celebratory, with participants expressing genuine affection for and commitment …
S94
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S95
Regional Leaders Discuss AI-Ready Digital Infrastructure — “So these three S were introduced yesterday by ITU’s head, the three S of solutions, standards, and skills”[19]. “So whe…
S96
Indias Roadmap to an AGI-Enabled Future — The discussion aimed to outline India’s comprehensive strategy for building an AGI-enabling ecosystem by addressing thre…
S97
Leveraging AI4All_ Pathways to Inclusion — Three interconnected pillars needed: design, access, and investment – Three Pillars Framework
S98
Day 0 Event #249 Sustainable Digital Growth Net Negative Net Zero or Net Positive — While acknowledging the energy challenge and need for improvement, it’s important to maintain perspective that data cent…
S100
‘All is fair in RAM and war’: RAM price crisis in 2025 explained — If you are piecing together a new workstation or gaming rig, or just hunting for extra RAM or SSD storage, you have stum…
S101
SK Hynix to commence mass production of advanced HBM3E 12-layer chips by end of month — SK Hynix, the world’s second-largest memory chip maker, is set tobeginmass production of its advanced HBM3E 12-layer chi…
S102
AI data centre boom drives global spike in memory chip prices — The rapid expansion of AI data centres ispushing up memory chip pricesand straining an already tight supply chain. DRAM …
S103
AI for Agriculture Scaling Intelligence for Food and Climate Resilience — This comment is profoundly insightful because it cuts through the AI hype and addresses the fundamental challenge of res…
S104
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S105
Democratizing AI Building Trustworthy Systems for Everyone — Private sector investment is necessary due to the scale of infrastructure needs that cannot be met by governments alone
S106
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — Awesome. Great question, Midu. And, you know, we as a nation have proven ourselves to be phenomenal adopters of technolo…
S107
UNSC meeting: Peace and common development — Panama:Thank you, Mr. President. The world is facing unprecedented challenges in terms of the maintenance of internation…
S108
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S109
Global Perspectives on Openness and Trust in AI — This set the intellectual foundation for the entire panel, with subsequent speakers building on this distinction between…
S110
Open Forum #30 High Level Review of AI Governance Including the Discussion — These key comments fundamentally shaped the discussion by introducing three critical themes that transformed it from a r…
S111
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
V
Vinod Khosla
17 arguments · 136 words per minute · 5048 words · 2213 seconds
Argument 1
Power and data‑center growth will double soon, demanding renewable/nuclear sources
EXPLANATION
Vinod warns that the rapid expansion of data‑center capacity will sharply increase electricity demand, making it essential to source power from renewable and nuclear energy to keep emissions low. He stresses that without such sources, AI scaling could be constrained by energy availability.
EVIDENCE
The discussion highlighted that the world already operates 80 GW of data-center capacity, representing about 1% of global energy, and that the United States alone may see this figure double in the next three years, underscoring the imminent surge in power needs [53-57]. Vinod noted that addressing greenhouse-gas emissions requires a shift to renewable and nuclear power sources [58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data-center electricity consumption is projected to double and already accounts for ~2% of global power, prompting calls for renewable or nuclear supply to meet the surge [S11][S12].
Argument 2
Massive AI investment is justified only if the technology is widely deployed; political factors are the biggest risk
EXPLANATION
Vinod argues that the trillions of dollars poured into AI are worthwhile only if AI can be adopted at scale across societies. He identifies political resistance as the primary obstacle that could prevent such widespread deployment.
EVIDENCE
He affirmed that the scale of AI investment is justified provided the technology can be deployed broadly [118-126]. He then cited the example of German regulations that restrict robot use on Sundays, illustrating how political decisions can impede AI adoption [129-135].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Political decisions can impede AI rollout, illustrated by German robot restrictions and broader governance concerns about AI harms, highlighting the need for democratic permission [S13][S19][S1].
Argument 3
Indian VCs are overly risk‑averse, fixated on short‑term revenue and IRR, which hampers breakthrough innovation
EXPLANATION
Vinod critiques the Indian venture‑capital ecosystem for its excessive caution, focusing on immediate revenue plans and internal rate of return calculations rather than bold, long‑term bets. This risk‑aversion, he says, stifles the kind of disruptive innovation needed for AI breakthroughs.
EVIDENCE
He described Indian VCs as “risk-averse”, constantly asking about revenue plans and profitability timelines, and warned that such short-term focus prevents large-scale innovation [341-354]. He also argued that calculating IRR for early-stage AI ventures is fundamentally misleading [357-363].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Khosla’s criticism of Indian VCs focusing on revenue plans and IRR calculations is documented in the Invest India Fireside Chat [S1].
Argument 4
Evaluating investors should prioritize willingness to accept failure over conventional financial metrics
EXPLANATION
Vinod suggests that founders should choose investors based on their tolerance for failure rather than traditional financial indicators like IRR. He believes that embracing failure is essential for pursuing high‑risk, high‑reward AI projects.
EVIDENCE
He quoted his own philosophy that “willingness to fail allows me to succeed” and emphasized that investors who obsess over IRR are on the wrong track, especially for breakthrough innovations [343-350][357-363].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He emphasizes choosing investors who tolerate failure rather than obsess over IRR, a view expressed in the same fireside discussion [S1].
MAJOR DISCUSSION POINT
Evaluating investors should prioritize willingness to accept failure over conventional financial metrics
Argument 5
Research on data efficiency, checkpointing, and compute‑efficient algorithms can dramatically cut power needs
EXPLANATION
Vinod highlights ongoing research aimed at reducing the amount of data and compute required for training large models, as well as improving checkpointing to avoid wasted cycles. These advances could lower both energy consumption and overall AI costs.
EVIDENCE
He described multiple investments in “compute efficiency” and data-efficiency research, noting that reducing data by a thousand-fold while maintaining model potency could reshape power-consumption assumptions [185-196]. He also explained a checkpoint-restart technology that could double compute capacity without extra chips, further cutting power use [209-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ongoing research into compute-efficient LLM training and data reduction aims to lower AI power consumption, as noted in discussions on compute efficiency and data-center energy trends [S16][S11].
MAJOR DISCUSSION POINT
Research on data efficiency, checkpointing, and compute‑efficient algorithms can dramatically cut power needs
Argument 6
Future AI “scientists” (AI‑driven researchers) will accelerate discovery across domains
EXPLANATION
Vinod predicts that within five to ten years, AI systems themselves will conduct scientific research, vastly increasing the speed and breadth of innovation across fields such as materials, drug discovery, and fusion.
EVIDENCE
He stated that in five years, most research will be performed by “AI computer scientists, AI material scientists, AI fusion scientists, AI drug discovery scientists,” and that his portfolio is already building such capabilities [220-225].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s potential to speed scientific discovery in fields like materials and drug research is highlighted in reports on AI as a catalyst for science and generative AI applications [S17][S18].
MAJOR DISCUSSION POINT
Future AI “scientists” (AI‑driven researchers) will accelerate discovery across domains
Argument 7
The goal should be a single, general super‑intelligence (ASI) rather than many narrow AIs
EXPLANATION
Vinod argues that building one overarching artificial super‑intelligence that exceeds human cognition is the proper path, rather than developing multiple specialized narrow systems, which he views as a short‑term misconception.
EVIDENCE
He explained that the former term AGI is now called ASI, emphasizing the need for a super-intelligence that can exceed human creative capacity and that specialized intelligences are a mistaken notion [240-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for a single ASI align with analyses urging development of broad, general AI capabilities instead of narrow systems [S20][S1].
MAJOR DISCUSSION POINT
The goal should be a single, general super‑intelligence (ASI) rather than many narrow AIs
Argument 8
AI tutors can serve millions of students for free, transforming education
EXPLANATION
Vinod describes AI‑driven tutoring platforms that already reach millions of learners at no cost, and envisions scaling this to hundreds of millions, thereby democratizing education in India.
EVIDENCE
He cited the four to five million Indian students already accessing CK-12 AI tutors for free, and highlighted the potential to reach an additional 445 million students through Aadhaar-based tutoring services [145-152][453-455].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Khosla cites AI tutoring platforms already reaching 4-5 million Indian students and the potential to scale to hundreds of millions via Aadhaar integration [S4][S1].
MAJOR DISCUSSION POINT
AI tutors can serve millions of students for free, transforming education
Argument 9
AI‑driven agronomists and Aadhaar‑linked doctors can empower rural women farmers
EXPLANATION
Vinod proposes that AI‑powered agronomy advice and medical services, delivered via Aadhaar‑linked mobile platforms, can give women farmers direct access to expert knowledge, enhancing productivity and health outcomes in rural areas.
EVIDENCE
He recounted conversations about providing AI-based agronomists to women farmers, noting that a Ph.D.-level agronomist could be accessed on a farmer’s phone, and referenced Aadhaar-based doctors and tutors as part of a broader AI-enabled public service ecosystem [156-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The vision of Aadhaar-linked AI doctors and PhD-level agronomists for farmers is described in the keynote and related commentary [S4][S21][S22].
MAJOR DISCUSSION POINT
AI‑driven agronomists and Aadhaar‑linked doctors can empower rural women farmers
Argument 10
BPO and IT back‑office services will be replaced; firms must pivot to front‑office AI solutions
EXPLANATION
Vinod predicts that AI will automate routine back‑office processes, making traditional BPO and IT support obsolete. Companies should therefore shift focus to higher‑value front‑office AI applications to stay relevant.
EVIDENCE
He explained that BPO services are the easiest to replace with AI, and that CEOs can lay off BPO firms without friction, whereas laying off employees is more sensitive [266-270]. He also warned that within five years most back-office tasks could be AI-driven, though contractual obligations may delay visible change [271-277].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Khosla predicts AI will automate back-office BPO tasks, a view echoed in the fireside chat and keynote remarks [S4][S1].
MAJOR DISCUSSION POINT
BPO and IT back‑office services will be replaced; firms must pivot to front‑office AI solutions
Argument 11
AI benefits must reach citizens first via Aadhaar‑based doctors, tutors, and agronomists
EXPLANATION
Vinod stresses that the primary goal of AI deployment in India should be to deliver tangible services—healthcare, education, and agriculture—directly to citizens through Aadhaar‑linked platforms, ensuring inclusive impact before commercial exploitation.
EVIDENCE
He listed Aadhaar-based doctors, AI tutors (CK-12), and AI agronomists as examples of services that should be provided free to the population, emphasizing the need for universal access [145-152][156-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emphasis on delivering AI services (doctors, tutors, agronomists) directly to citizens through Aadhaar is highlighted in the keynote and supporting sources [S4][S21][S22].
MAJOR DISCUSSION POINT
AI benefits must reach citizens first via Aadhaar‑based doctors, tutors, and agronomists
Argument 12
Political resistance can block AI deployment; democracy must grant permission for capitalism to work
EXPLANATION
Vinod argues that without political approval, AI technologies cannot be widely adopted, because democratic decisions shape the regulatory environment that enables or blocks capitalist investment in AI.
EVIDENCE
He gave the example of German lawmakers prohibiting robots in retail on Sundays, illustrating how political decisions can impede AI use, and concluded that “capitalism is by permission of democracy” [129-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for democratic approval for AI deployment is reflected in discussions on political constraints and AI governance frameworks [S13][S19][S1].
MAJOR DISCUSSION POINT
Political resistance can block AI deployment; democracy must grant permission for capitalism to work
Argument 13
AI can enable customized biological threats; responsible use and model diversity are essential safeguards
EXPLANATION
Vinod acknowledges that AI could be misused to design bioweapons targeting specific ethnic groups, but argues that responsible development, regulation, and a diversity of AI models can mitigate such risks.
EVIDENCE
He compared AI to nuclear technology, noting both good and bad uses, and warned that irresponsible actors could create customized biological threats, while emphasizing that a diversity of AI models provides resilience against a single malicious AI [317-324].
MAJOR DISCUSSION POINT
AI can enable customized biological threats; responsible use and model diversity are essential safeguards
Argument 14
Fear of AI becoming “scary” can stall deployment; transparent frameworks are needed
EXPLANATION
Vinod points out that public perception of AI as dangerous can delay its adoption, and calls for clear, transparent governance frameworks to build trust and enable safe rollout.
EVIDENCE
He stated that until AI is perceived as beneficial and not scary, politicians will block deployment, and stressed the need for responsible AI frameworks to counter misuse [134-136][324-326].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns about public fear of AI and the call for clear, transparent governance frameworks appear in AI governance dialogues [S13][S19].
MAJOR DISCUSSION POINT
Fear of AI becoming “scary” can stall deployment; transparent frameworks are needed
Argument 15
Despite stringent regulations, companies should go “all‑in” with AI, e.g., designing N=1 drugs to bypass traditional clinical trials
EXPLANATION
Vinod advocates for aggressive AI adoption in regulated sectors like pharma, proposing ultra‑personalized “N=1” drug designs that sidestep conventional clinical trial requirements while still satisfying regulators on process validation.
EVIDENCE
He argued that companies should invest heavily in AI for drug discovery, describing a strategy to design drugs for a single patient (N=1) so regulators cannot demand large-scale trials, and highlighted ongoing work on such approaches [497-517].
MAJOR DISCUSSION POINT
Despite stringent regulations, companies should go “all‑in” with AI, e.g., designing N=1 drugs to bypass traditional clinical trials
Argument 16
AI will surpass human knowledge; education should emphasize AI‑assisted learning, diverse high‑IQ cohorts, and dorm‑centric communities
EXPLANATION
Vinod envisions a future where AI exceeds human expertise, recommending that education shift toward AI‑augmented learning environments, assemble intellectually diverse student bodies, and prioritize residential (dorm) settings to foster interdisciplinary interaction.
EVIDENCE
He suggested building more dorm capacity rather than academic buildings, allowing students to learn from AI and each other, and highlighted the importance of high-IQ, diverse cohorts for complex systems innovation [398-416].
MAJOR DISCUSSION POINT
AI will surpass human knowledge; education should emphasize AI‑assisted learning, diverse high‑IQ cohorts, and dorm‑centric communities
Argument 17
AI can become a strategic technology comparable to nuclear, requiring responsible use and safeguards
EXPLANATION
Vinod draws a parallel between AI and nuclear technology, asserting that both have transformative potential and dual‑use risks, and that responsible governance is essential to harness benefits while preventing misuse.
EVIDENCE
He referenced nuclear and biowarfare as examples of powerful technologies with both good and bad applications, emphasizing the need for responsible use and diverse AI models to mitigate risks [317-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Comparisons of AI to nuclear technology and the call for responsible safeguards are discussed in AI governance and risk-assessment sessions [S13][S19].
MAJOR DISCUSSION POINT
AI can become a strategic technology comparable to nuclear, requiring responsible use and safeguards
Nivruthi Rai
7 arguments · 143 words per minute · 2449 words · 1022 seconds
Argument 1
Semiconductor supply chain bottleneck: only a few fabs for logic and memory, insufficient for AI scaling
EXPLANATION
Nivruthi points out that the current semiconductor ecosystem is constrained by a limited number of fabrication facilities for both logic chips and high‑bandwidth memory, creating a supply‑chain choke point for AI hardware expansion.
EVIDENCE
She noted that 80 % of high-bandwidth memory chips come from just three companies, and that the world needs twice the current logic fab capacity and ten times the memory fab capacity each year, yet only five memory fabs exist [66-70].
MAJOR DISCUSSION POINT
Semiconductor supply chain bottleneck: only a few fabs for logic and memory, insufficient for AI scaling
Argument 2
Capital must be disciplined and compute sovereignty is essential
EXPLANATION
Nivruthi stresses that AI investment should be carefully managed, emphasizing the need for disciplined capital allocation and national control over compute resources to ensure strategic autonomy.
EVIDENCE
She highlighted that capital must be “very disciplined” and that platform positioning and compute sovereignty are critical considerations for India’s AI strategy [88-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The fireside chat stresses that capital must be very disciplined and that compute sovereignty is a critical strategic consideration for India’s AI roadmap [S1].
MAJOR DISCUSSION POINT
Capital must be disciplined and compute sovereignty is essential
Argument 3
AI is crucial for India’s economic productivity, military power, and information control; we must build capacity, capability, and consumption
EXPLANATION
Nivruthi frames AI as a pivotal driver for India’s overall national strength, encompassing economic growth, defence, and information dominance, and calls for a comprehensive approach that builds infrastructure, skills, and widespread usage.
EVIDENCE
She declared that “AI is pivotal to drive economic productivity, military power, and information control” and posed four strategic questions about building capacity, capability, and consumption for India [96-101].
MAJOR DISCUSSION POINT
AI is crucial for India’s economic productivity, military power, and information control; we must build capacity, capability, and consumption
Argument 4
Focusing on a few narrow use‑cases is misguided; AI progress requires broad, general intelligence
EXPLANATION
Nivruthi argues that concentrating AI efforts on a limited set of specific applications will hinder overall progress, advocating instead for development of broad, general AI capabilities that can address diverse challenges.
EVIDENCE
She suggested that India and Israel could concentrate on 20-30 precise use-cases rather than tackling many problems, implying a preference for focused applications, a position the discussion later frames as misguided [236-237].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The debate highlighting the pitfalls of narrow use-case focus and the advocacy for broad, general AI capabilities is documented in the discussion and reinforced by analyses urging general intelligence development [S1][S20].
MAJOR DISCUSSION POINT
Focusing on a few narrow use‑cases is misguided; AI progress requires broad, general intelligence
Argument 5
Early‑phase AI infrastructure is still being built; capital must be allocated prudently
EXPLANATION
Nivruthi notes that the foundational AI hardware ecosystem—GPUs, memory, and energy supply—is still under development, and therefore investment should be cautious and strategic to avoid over‑extension.
EVIDENCE
She observed that infrastructure is still being built, with GPU and memory constraints, tightening energy supplies, and undefined AI modes, concluding that capital must be disciplined and allocated wisely [85-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The fireside chat notes that AI hardware infrastructure (GPUs, memory, energy) is still under development, requiring prudent and disciplined capital allocation [S1].
MAJOR DISCUSSION POINT
Early‑phase AI infrastructure is still being built; capital must be allocated prudently
Argument 6
AI follows a lifecycle: early (capital‑intensive, unstable), mid (scalable APIs, ecosystems), mature (utility, commoditization)
EXPLANATION
Nivruthi outlines a three‑stage model for AI technology evolution, describing how early stages require heavy investment and face volatility, mid stages see ecosystem growth and standardization, and mature stages become commoditized utilities.
EVIDENCE
She described the early phase as capital-intense with unstable standards, the mid phase as having scalable APIs and expanding ecosystems, and the mature phase as characterized by consolidation, commoditization, and predictable economics [80-84].
MAJOR DISCUSSION POINT
AI follows a lifecycle: early (capital‑intensive, unstable), mid (scalable APIs, ecosystems), mature (utility, commoditization)
Argument 7
Asking the right questions is the key competitive edge; garbage‑in‑garbage‑out remains a core challenge
EXPLANATION
Nivruthi emphasizes that the quality of inputs and the formulation of insightful questions determine AI’s effectiveness, warning that poor data will lead to poor outcomes regardless of technology sophistication.
EVIDENCE
She highlighted that “the one thing that bothers me… garbage in, garbage out” and later reiterated that the competitive edge lies in asking the right questions, especially as AI becomes pervasive [177-182][557-558].
MAJOR DISCUSSION POINT
Asking the right questions is the key competitive edge; garbage‑in‑garbage‑out remains a core challenge
Moderator
1 argument · 144 words per minute · 291 words · 121 seconds
Argument 1
India needs strong representation on global platforms and reforms to improve ease‑of‑doing‑business
EXPLANATION
The moderator underscores the importance of India having a prominent voice in international forums and calls for policy reforms that simplify business operations, thereby enhancing the country’s global competitiveness.
EVIDENCE
In the opening remarks, the moderator referenced “boards, representing India at Global Arena, and to solving EODB issues,” and welcomed participants to discuss these themes [1-4].
MAJOR DISCUSSION POINT
India needs strong representation on global platforms and reforms to improve ease‑of‑doing‑business
Audience
1 argument · 134 words per minute · 72 words · 32 seconds
Argument 1
Question raised: Should regulated industries adopt AI now or wait?
EXPLANATION
An audience member asks whether sectors with strict regulatory frameworks, such as pharmaceuticals, should fully embrace AI immediately or adopt a more cautious, delayed approach.
EVIDENCE
The audience posed the query: “AI itself is still in its infancy… should we go all in or should we wait on the sidelines?” highlighting the dilemma for regulated industries [496].
MAJOR DISCUSSION POINT
Question raised: Should regulated industries adopt AI now or wait?
Agreements
Agreement Points
AI is a strategic driver for India’s development and should be delivered as public services (doctors, tutors, agronomists) to citizens.
Speakers: Nivruthi Rai, Vinod Khosla
AI is pivotal to drive economic productivity, military power, and information control. AI tutors can serve millions of students for free, transforming education. AI‑driven agronomists and Aadhaar‑linked doctors can empower rural women farmers.
Both speakers stress that AI is central to India’s economic, defence and information capabilities and that its first-order benefits should reach citizens through free, Aadhaar-linked services in health, education and agriculture [96-101][145-152][156-162].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with India’s National AI Strategy (‘India AI Mission’) which frames AI as an inclusive development tool for health, education and agriculture and emphasizes delivery through public services such as Aadhaar-linked platforms [S46][S41][S40].
Capital allocation for AI must be disciplined and investors should prioritize willingness to accept failure over conventional financial metrics like IRR.
Speakers: Nivruthi Rai, Vinod Khosla
Capital must be disciplined and compute sovereignty is essential. Evaluating investors should prioritize willingness to accept failure over conventional financial metrics.
Both emphasize that AI investment should be carefully managed, with a focus on tolerance for failure rather than short-term revenue or IRR calculations [88-91][343-350][357-363].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent policy discussions highlight the need for risk-tolerant financing and disciplined capital allocation for AI, noting that many firms are scaling back AI spend and urging investors to prioritize learning from failure over traditional IRR metrics [S48][S47].
AI infrastructure (GPUs, memory, power) is still constrained; rapid data‑center growth demands renewable or nuclear energy sources.
Speakers: Nivruthi Rai, Vinod Khosla
Early‑phase AI infrastructure is still being built; capital must be allocated prudently. Power and data‑center growth will double soon, demanding renewable/nuclear sources.
Both note that the hardware stack for AI (GPU, HBM, energy) is a bottleneck and that the world’s data-center capacity is set to double, requiring clean power from renewables or nuclear to sustain AI scaling [85-90][53-57][58][66-71].
POLICY CONTEXT (KNOWLEDGE BASE)
Energy analyses show a single AI query consumes ~2.9 Wh, and data-centre growth is outpacing clean-energy supply, prompting calls for renewable or nuclear power to meet AI infrastructure needs [S38][S39].
AI should first serve citizens through Aadhaar‑linked free services in health, education and agriculture.
Speakers: Nivruthi Rai, Vinod Khosla
AI is crucial for India’s economic productivity, military power, and information control; we must build capacity, capability, and consumption. AI tutors can serve millions of students for free, transforming education. AI‑driven agronomists and Aadhaar‑linked doctors can empower rural women farmers.
Both agree that AI’s primary mission in India is to provide universal, free services-healthcare, tutoring and agronomy-leveraging Aadhaar for identity and access [96-101][145-152][156-162].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s AI policy explicitly calls for Aadhaar-linked, free AI-enabled services in health, education and agriculture as part of the inclusive digital agenda outlined in the national AI mission [S46][S45][S40].
AI is comparable to nuclear technology as a strategic dual‑use technology that requires responsible governance and safeguards.
Speakers: Nivruthi Rai, Vinod Khosla
AI as strategic as nuclear. AI can become a strategic technology comparable to nuclear, requiring responsible use and safeguards.
Both draw a parallel between AI and nuclear power, highlighting its transformative potential and dual-use risks, and call for responsible use and safeguards [311-313][317-324].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports describe AI as a dual-use technology comparable to nuclear weapons, stressing the necessity of robust governance, safeguards and international cooperation to manage associated risks [S53][S55][S50].
Similar Viewpoints
Both recognize that AI hardware and energy supply are still in early‑phase development and that rapid scaling will strain power resources, necessitating clean energy solutions [85-90][53-57][58][66-71].
Speakers: Nivruthi Rai, Vinod Khosla
Early‑phase AI infrastructure is still being built; capital must be allocated prudently. Power and data‑center growth will double soon, demanding renewable/nuclear sources.
Both argue for disciplined capital deployment and for investors who tolerate failure rather than focus on short‑term IRR or revenue targets [88-91][343-350][357-363].
Speakers: Nivruthi Rai, Vinod Khosla
Capital must be disciplined and compute sovereignty is essential. Evaluating investors should prioritize willingness to accept failure over conventional financial metrics.
Both see AI as a nation‑building tool that should first deliver free, citizen‑centric services in health and education to unlock economic and strategic benefits [96-101][145-152].
Speakers: Nivruthi Rai, Vinod Khosla
AI is pivotal to drive economic productivity, military power, and information control. AI tutors can serve millions of students for free, transforming education.
Unexpected Consensus
Treating AI as a strategic technology on par with nuclear power.
Speakers: Nivruthi Rai, Vinod Khosla
AI as strategic as nuclear. AI can become a strategic technology comparable to nuclear, requiring responsible use and safeguards.
While Nivruthi only raised the comparison as a question, Vinod embraced it fully, both aligning on the view that AI carries dual-use risks similar to nuclear technology, an alignment not obvious from the broader discussion [311-313][317-324].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy statements and expert panels have positioned AI as a strategic technology on par with nuclear power, advocating sovereign-first approaches while maintaining global engagement commitments [S55][S44].
Overall Assessment

The discussion shows strong convergence between Nivruthi Rai and Vinod Khosla on four major fronts: (1) AI’s strategic role for India’s growth and its delivery as free, Aadhaar‑linked public services; (2) the need for disciplined, risk‑tolerant capital and investor attitudes that value failure tolerance over IRR; (3) recognition of current hardware and energy bottlenecks and the imperative for clean power; (4) the framing of AI as a dual‑use technology comparable to nuclear, demanding responsible governance.

Consensus is high across technical, economic and policy dimensions, indicating a shared vision that AI should be pursued aggressively yet responsibly, with coordinated investment, infrastructure development and regulatory foresight.

Differences
Different Viewpoints
Unexpected Differences
Takeaways
Key takeaways
AI infrastructure (power, data‑center capacity, GPU and high‑bandwidth memory supply) is a critical bottleneck; disciplined capital allocation and compute sovereignty are essential.
AI is a strategic national priority for India: it can drive economic productivity, military capability, and information control, and its benefits must reach citizens first via Aadhaar‑based doctors, tutors, and agronomists.
Political and regulatory environments are the biggest risk to AI deployment; democratic permission is needed for capitalism to fund AI scale‑up.
Sectoral impacts: AI will transform healthcare, education, and agriculture, and will replace back‑office BPO/IT services; firms must pivot to front‑office AI solutions.
India’s VC ecosystem is overly risk‑averse, focused on short‑term revenue and IRR, which hampers breakthrough innovation; willingness to accept failure should be a key evaluation metric.
AI follows a technology lifecycle: early (capital‑intensive, unstable), mid (scalable APIs, ecosystems), mature (utility, commoditization). The current stage is still early‑to‑mid.
Research on data efficiency, checkpointing, and compute‑efficient algorithms can dramatically reduce power consumption and cost, making AI more widely deployable.
Future AI will act as “AI scientists” across domains, accelerating discovery; the goal is a single general super‑intelligence (ASI) rather than many narrow AIs.
Education must evolve: emphasize AI‑assisted learning, high‑IQ diverse cohorts, dorm‑centric communities, and teaching students to ask the right questions.
Ethical risks exist (e.g., AI‑enabled biological threats); diversity of models and responsible governance are needed to mitigate misuse.
Resolutions and action items
Promote the development of Aadhaar‑linked AI services (doctors, tutors, agronomists) to ensure early citizen benefits.
Encourage Indian investors and VCs to shift focus from short‑term IRR metrics to tolerance for failure and long‑term breakthrough risk.
Invest in research for compute‑efficient AI (data‑efficient training, checkpoint‑free training, algorithmic improvements) to alleviate power and supply‑chain constraints.
Advocate for policy frameworks that reduce political resistance to AI deployment, emphasizing national security and economic benefits.
Support the transition of the BPO/IT workforce by reskilling toward AI application development and front‑office AI solutions.
Adopt a “build capacity, build capability, drive consumption” approach for AI in India, as outlined by Nivruthi Rai.
Facilitate collaborations between academia (e.g., IIT Delhi) and industry to create AI‑enhanced education models (dorm‑centric, AI‑augmented learning).
Create mechanisms for model diversity to provide resilience against malicious AI use.
Unresolved issues
How to concretely resolve the semiconductor supply‑chain bottleneck for GPUs and high‑bandwidth memory in the short term.
Specific policies or incentives needed to overcome political resistance in countries like Germany and to align democratic processes with AI deployment.
The optimal balance between pursuing broad general‑intelligence research versus targeted narrow AI use‑cases for immediate impact.
Regulatory pathways for AI‑driven drug discovery, especially the feasibility and acceptance of N=1 personalized drugs.
Detailed strategies for upskilling the massive BPO/IT workforce to remain employable in an AI‑centric economy.
Implementation plans for ensuring AI‑driven services remain free and universally accessible across India’s diverse population.
Frameworks for monitoring and preventing AI‑enabled customized biological threats.
Suggested compromises
Combine the pursuit of a general super‑intelligence with the development of sector‑specific AI applications, rather than focusing exclusively on one approach.
Adopt a phased rollout: build AI infrastructure and capacity first, then expand capability and consumption, allowing time for policy and supply‑chain adjustments.
Encourage diverse AI model ecosystems to mitigate the risk of a single dominant, potentially harmful AI system.
Leverage AI for regulated industries (e.g., pharma) by designing N=1 drugs that sidestep traditional large‑scale clinical trials while still complying with regulatory oversight.
Thought Provoking Comments
AI has to move from an elite technology to a utility. Infrastructure is still being built – GPU and memory are constrained, energy is tightening, and standards are not yet defined. Capital must be disciplined and platform positioning matters.
She frames AI development as a technology lifecycle (early, mid, mature) and highlights the current bottlenecks (hardware, power, supply‑chain). This sets a concrete analytical lens for the whole discussion.
Establishes the technical‑economic context that guides the rest of the conversation. It prompts Vinod to address infrastructure, investment, and policy issues, and it shifts the tone from abstract optimism to a pragmatic assessment of constraints.
Speaker: Nivruthi Rai
Till AI is beneficial and not scary, we won’t get deployment because politicians will get in the way. Capitalism is by permission of democracy. Voters elect people who make policy for capitalism.
He identifies politics—not technology—as the biggest unknown for AI rollout, reframing the debate from pure engineering to governance and societal acceptance.
Creates a turning point where the discussion moves from technical challenges to regulatory and societal hurdles. It leads to further dialogue about India’s policy environment and the need for AI‑driven public services.
Speaker: Vinod Khosla
We should have Aadhaar‑based AI doctors, AI tutors, and AI agronomists – free services for every Indian. My wife works on AI tutors; already 4‑5 million students use them.
He connects AI directly to mass‑scale social impact in health, education, and agriculture, illustrating a concrete, people‑first vision for India.
Shifts the conversation from infrastructure to end‑user value, prompting Nivruthi to echo the need for skill‑focused education in rural areas and reinforcing the theme of AI as a public good.
Speaker: Vinod Khosla
If we can eliminate the need to restart training from the last checkpoint, compute capacity goes up 2× without adding power or chips.
Provides a specific technical innovation that could dramatically reduce the power‑intensity of AI training, addressing the earlier bottleneck raised by Nivruthi.
Introduces a concrete solution, deepening the technical depth of the discussion and linking back to the earlier point about power constraints. It also illustrates the kind of “science and creativity” Vinod believes will drive the next wave.
Speaker: Vinod Khosla
In five years, almost all research will be done by AI scientists – AI computer scientists, AI material scientists, AI drug discovery scientists. That will explode the rate of innovation.
Projects a future where AI not only assists but replaces human researchers, expanding the scope of AI impact far beyond current applications.
Elevates the conversation to a speculative, long‑term horizon, influencing later remarks about education (dorms vs. classrooms) and the need to prepare for a world where AI is the primary innovator.
Speaker: Vinod Khosla
You can’t focus on one narrow use‑case. We have to build general intelligence (ASI) and then fine‑tune it for specific tasks; specialized AI is a short‑term mistaken notion.
Directly challenges the common strategic advice of “pick 20‑30 precise use cases,” arguing for a universal AI foundation instead.
Creates a clear turning point, moving the dialogue from a pragmatic, use‑case‑centric approach to a more ambitious, foundational strategy. It provokes Nivruthi’s follow‑up about BPO disruption and fuels the debate on breadth vs. depth of AI investment.
Speaker: Vinod Khosla
Most Indian VCs are risk‑averse, obsess over IRR, and therefore miss big innovations. Willingness to fail is essential; you can’t calculate IRR for breakthrough ventures.
Offers a candid critique of the Indian venture ecosystem, linking cultural risk‑aversion to missed opportunities in AI and other frontier tech.
Shifts the conversation toward capital markets and founder‑VC dynamics, prompting Nivruthi to ask about evaluating investors and setting the stage for broader discussion on how to fund AI breakthroughs.
Speaker: Vinod Khosla
Don’t build more academic buildings; build more dorm space so students can live together, learn from AI, and engage in complex interactions that spark innovation.
Proposes a radical re‑thinking of higher education infrastructure, emphasizing community and AI‑augmented learning over traditional lecture halls.
Introduces a new dimension—education reform—into the AI discourse, linking back to his earlier point about AI scientists. It influences the audience’s perception of how to prepare talent for an AI‑driven future.
Speaker: Vinod Khosla
AI agents can form their own communities, develop secret languages, and exhibit emergent behavior that is unpredictable. Example: Moltbook agents scheming to avoid human scrutiny.
Highlights the complex systems and emergent properties of AI, warning of unintended consequences while also showcasing AI’s creative potential.
Adds depth to the conversation about AI safety and governance, reinforcing earlier concerns about dual‑use risks (e.g., biological threats) and prompting the audience to consider oversight mechanisms.
Speaker: Vinod Khosla
Regulatory roadblocks can be sidestepped by designing N=1 drugs – a therapy for a single patient – so regulators cannot demand large clinical trials.
Offers a bold, concrete strategy to accelerate AI‑driven drug discovery despite stringent regulations, illustrating how to think around existing constraints.
Directly answers an audience question about pharma, reinforcing the theme of “go all‑in” and showing how innovative business models can overcome policy inertia.
Speaker: Vinod Khosla
AI will replace BPO and IT services; companies must stop trying to compete with AI and instead become AI integrators. The transition will be long but inevitable.
Predicts a massive structural shift in India’s service economy and provides guidance on how incumbents should adapt.
Triggers a discussion on workforce reskilling, future job opportunities, and the broader economic impact of AI, tying back to earlier points about AI tutors and agronomists.
Speaker: Vinod Khosla
Overall Assessment

The discussion pivoted around a handful of high‑impact remarks, chiefly Vinod Khosla’s observations on political risk, the need for a universal AI foundation, and the transformative potential of AI in public services, education, and the Indian venture ecosystem. Nivruthi Rai’s framing of AI’s lifecycle and infrastructure bottlenecks set the technical stage, while Vinod’s bold, often contrarian statements repeatedly redirected the conversation—first from hardware constraints to policy, then from narrow use‑case strategies to general intelligence, and finally from investment hesitancy to systemic societal change. Each of these turning points deepened the dialogue, introduced new thematic layers (governance, education reform, emergent AI behavior, regulatory work‑arounds), and compelled participants to reconsider assumptions about how AI should be built, funded, and deployed in India. Collectively, the identified comments shaped the session from a descriptive overview into a forward‑looking, strategic debate about the infrastructure, governance, talent, and capital needed to turn AI from an elite technology into a national utility.

Follow-up Questions
Is AI a generational platform shift or the largest capital misallocation, and is the current level of investment justified?
Determines whether massive AI funding is strategically sound or a misallocation of resources.
Speaker: Nivruthi Rai
What are the prospects and challenges of sparsity, in‑memory compute, and non‑von‑Neumann (neuromorphic) architectures for AI hardware?
Explores hardware innovations that could improve AI efficiency and reduce power consumption.
Speaker: Nivruthi Rai
Should India concentrate on a limited set of 20, 30, or 50 precise AI use cases rather than pursuing broad, unfocused deployment?
Guides strategic prioritization to maximize impact while managing limited resources.
Speaker: Nivruthi Rai
If back‑office BPO/IT services are displaced by AI, what are the front‑office opportunities, and how should the current workforce reskill to stay employable?
Addresses workforce transition and identifies new economic opportunities in an AI‑centric economy.
Speaker: Nivruthi Rai
How can India leapfrog from generic medicines to AI‑driven biologics, and could AI enable customized biological threats?
Impacts pharmaceutical innovation, personalized medicine, and biosecurity considerations.
Speaker: Nivruthi Rai
How should founders evaluate investors to ensure they receive maximum value from the partnership?
Improves founder‑VC dynamics and helps entrepreneurs select supportive capital partners.
Speaker: Nivruthi Rai
What aspects of AI in India will appear embarrassingly obvious in hindsight ten years from now?
Identifies current blind spots that may hinder future AI adoption and policy.
Speaker: Nivruthi Rai
What are the most overrated AI beliefs and the most underrated constraints?
Clarifies common misconceptions and hidden challenges that could affect AI development.
Speaker: Nivruthi Rai
What are the top five AI applications that can solve the most pressing global and Indian problems?
Prioritizes AI use‑cases with the highest societal impact.
Speaker: Nivruthi Rai
Does AI increase venture alpha or does capital crowding compress returns for most funds?
Examines how AI investment dynamics affect fund performance and capital efficiency.
Speaker: Nivruthi Rai
Should regulated industries like pharmaceuticals go all‑in on AI now or wait for regulatory clarity?
Helps companies decide on timing of AI adoption amid stringent regulatory environments.
Speaker: Audience member (unidentified)
How can data efficiency be improved for large language models to achieve comparable performance with far less data?
Research needed to reduce compute and power demands of LLM training.
Speaker: Vinod Khosla
Can checkpoint‑less training or similar compute‑efficiency techniques double AI capacity without additional power or hardware?
Investigating novel training methods could alleviate data‑center power constraints.
Speaker: Vinod Khosla
What will be the role and impact of AI‑driven scientists (AI researchers) across domains such as material science, drug discovery, and fusion?
Understanding how AI can accelerate research productivity and innovation.
Speaker: Vinod Khosla
How do emergent behaviors in AI agent swarms (e.g., language creation, coordination) arise, and what are their implications?
Studying complex systems of AI agents is crucial for safety, governance, and harnessing collective intelligence.
Speaker: Vinod Khosla
What policy frameworks are needed to overcome political barriers to AI deployment (e.g., robot bans, regulatory inertia)?
Ensures that political decisions do not stifle beneficial AI adoption.
Speaker: Vinod Khosla
How can a diversified ecosystem of AI models be cultivated to prevent reliance on a single dominant (potentially harmful) AI?
Promotes resilience and mitigates risks associated with monopolistic AI control.
Speaker: Vinod Khosla
Can AI enable ‘N=1’ personalized drug design that bypasses traditional clinical trials, and what regulatory pathways would support this?
Explores a novel approach to personalized medicine and its regulatory challenges.
Speaker: Vinod Khosla
What are effective strategies to scale AI services in Indian languages (e.g., Sarvam) and measure their impact?
Ensures inclusive AI adoption across linguistic diversity in India.
Speaker: Vinod Khosla
How will AI’s power consumption versus usage growth evolve, and what modeling is needed for infrastructure planning?
Accurate forecasts are essential for building sustainable data‑center capacity.
Speaker: Vinod Khosla
What future education models (e.g., AI‑augmented dormitory learning) could replace traditional academic buildings?
Investigates innovative pedagogical structures that leverage AI for collaborative learning.
Speaker: Vinod Khosla
How can AI agronomist tools be deployed at scale to support smallholder farmers in rural India?
Addresses food security and rural development through AI‑enabled agriculture.
Speaker: Vinod Khosla
What governance mechanisms are required to prevent AI from being misused as a customized biological weapon?
Ensures responsible AI development and mitigates biosecurity threats.
Speaker: Vinod Khosla
What are the potential effects of AI‑driven autonomous agent swarms on financial markets and national defense?
Analyzes systemic risks and opportunities of AI agents operating at scale.
Speaker: Vinod Khosla
Why should investors move away from IRR‑based metrics for early‑stage AI ventures, and what alternative evaluation frameworks are appropriate?
Promotes better investment decision‑making aligned with high‑risk, high‑impact AI startups.
Speaker: Vinod Khosla

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Powering AI Global Leaders Session AI Impact Summit India

Session at a glanceSummary, keypoints, and speakers overview

Summary

The summit opened with acknowledgments of partners and a preview of a video on AI-driven talent matching before introducing Chris Lehane’s talk on OpenAI’s work in India [1-10]. OpenAI’s AI recruiter, currently supporting English and Hindi, was presented as a tool that could later expand to provide financial, educational, and governmental services [11-13]. Lehane thanked the organizers, noted the presence of the Prime Minister and CEO Sam Altman, and highlighted their shared emphasis on “democratic AI” [27-33].


He explained the “capability gap,” where rapid AI acceleration creates a divide between a small group of power users and the broader population [34-40]. Research shows power users generate roughly seven times the economic value of non-users, underscoring the urgency to close this gap [41-48]. Lehane argued that education is the primary means to bridge the gap, focusing on three pillars: access, literacy, and agency [53-55].


He emphasized that widespread free access in India (hundreds of millions of users and an affordable paid tier) provides the foundation for inclusive participation [56-69]. Literacy, he said, involves not only basic reading and arithmetic but also hands-on AI use, encouraging experimentation even in unconventional domains such as astrology and sports betting [70-78]. The most challenging pillar, agency, requires individuals to view AI as a tool for owning and monetizing their own labor rather than merely selling it [80-86][101-104].


Drawing a historical parallel, Lehane compared AI to the printing press, noting that in Europe the technology spurred democratization of knowledge while in China it was suppressed, foreshadowing a choice between democratic and autocratic AI [112-124]. He asserted that India, as the world’s largest democracy with massive AI adoption, is uniquely positioned to shape a democratic AI future [124]. OpenAI therefore views India not just as a market but as a strategic partner essential to fulfilling its mission of building AI that benefits all humanity [125-126].


The session concluded by thanking attendees and emphasizing the significance of this moment for both India and the global AI landscape [129-132].


Keypoints


Democratizing AI requires three pillars: access, literacy, and agency.


Chris stresses that widespread, low-cost access (free tools and a $3.99/month model) is the foundation for participation in the AI-driven economy [55-64][57-69]. He then outlines the need for AI literacy: people must start using the tools, even in unconventional ways, to become proficient [70-77]. Finally, he highlights “agency” as the hardest piece: users must intentionally employ AI as a productive partner rather than a shortcut [80-86].


A “capability gap” is emerging, where power users generate far greater economic value.


The rapid, recursive acceleration of AI creates a subset of “power users” who act as assistants, coaches, and multipliers, delivering roughly a 7× productivity boost compared with non-power users [34-49][44-48]. Closing this gap is presented as essential to ensure the benefits of AI are shared across society.


Education must evolve to bridge the gap, drawing on historical analogies.


Chris likens AI to a general-purpose technology comparable to the printing press, noting how divergent outcomes in Europe (democratization of knowledge) versus China (authoritarian control) illustrate the stakes of today’s AI rollout [112-124]. He argues that modern education, originally designed for the industrial age, needs to be re-oriented to give students agency over AI, turning labor into owned, monetizable output [86-94][95-103].


India is positioned as a strategic partner and global leader in AI democratization.


With hundreds of millions of regular users and an affordable pricing model, India offers a unique testbed for scaling democratic AI [57-69][106-108]. The speaker stresses that India’s role goes beyond being a customer; it is a partner crucial to fulfilling OpenAI’s mission of “building AI that benefits all of humanity” [124-128].


Future AI applications extend beyond recruitment to broader services.


The brief remarks from the second speaker note that today’s AI recruiter supports English and Hindi and will soon enable access to financial, educational, and governmental services [11-14].


Overall purpose/goal:


The discussion aims to articulate the urgency of making AI truly democratic by addressing the capability gap, redefining education, and leveraging India’s massive user base, thereby aligning OpenAI’s mission with global societal benefit.


Overall tone:


The conversation begins with celebratory gratitude and applause, shifts to an analytical and urgent tone as it dissects the capability gap and educational challenges, and moves toward an optimistic, forward-looking stance emphasizing India’s pivotal role. Throughout, the tone remains constructive and hopeful, ending with a courteous thank-you and a sense of partnership.


Speakers

Speaker 1


– Role/Title: Event moderator / host (appears to introduce speakers and wrap up the session)[S7][S9]


Speaker 2


– Role/Title: Moderator / chair (appears to moderate the discussion)[S1][S2]


– Affiliation: Affiliation 2 (as indicated in source)[S2]


Chris Lehane


– Title: Chief Global Affairs Officer, OpenAI[S4]


– Title: Vice President of Public Works, OpenAI (newly appointed)[S5]


– Role: Co-moderator for the session[S6]


Additional speakers:


Ronnie – Chief Economist and Academic Professor at Duke University (referenced in the transcript)


Sam Altman – CEO and Co-founder of OpenAI (referenced in the transcript)


Rupa – Participant who contributed to the discussion on literacy (referenced in the transcript)


Rana – Participant who mentioned Codex, the developer tool (referenced in the transcript)


Prime Minister of India – Delivered remarks at the summit (referenced in the transcript)


Full session reportComprehensive analysis and detailed insights

The summit opened with a brief ceremony in which the Chief Global Affairs Officer was introduced and partners were applauded [1-3]. The host then thanked the partners, announced a short bridging video that highlighted Vahan.ai’s work connecting talent with employment opportunities [7-9], and handed the session over to Chris Lehane for a presentation on OpenAI’s activities in India [10].


Speaker 2 introduced OpenAI’s AI recruiter, noting that it already supports English and Hindi and that additional Indian languages could be added “for each state in the next year or so” [11-12]. He framed the recruiter as a prototype for future AI-driven public services that might eventually provide access to finance, education and government resources that many people have never encountered [13-14].


Chris Lehane began by thanking the organizers, the audience, the OpenAI team, India’s Prime Minister and OpenAI CEO Sam Altman [19-30]. He praised the Prime Minister’s eloquent remarks about how important it is to get democratic AI right [27-30] and described the summit as a “unique and special moment” centred on the pursuit of a “democratic AI” that is widely accessible and responsibly governed [27-33].


Lehane then explained the capability gap, emphasizing the recursive acceleration of AI that is widening the divide between a small cohort of “power users” and the broader population [34-36]. Research he cited shows that power users generate roughly a 7× economic impact compared with non-power users, whether in corporate settings or as self-employed individuals [44-48]. He warned that without closing this gap, AI’s benefits will accrue only to a privileged minority [49].


Education, he argued, is the historic passport for closing such gaps. He cited Ronnie, a chief economist and Duke professor, as an example of academia identifying and addressing capability disparities [50-53]. From this perspective, three pillars are required for AI democratization: access, literacy and agency [53-55].


The access pillar rests on the availability of free tools and an affordable paid tier. In India, a third of the population uses OpenAI’s models regularly [57-58], and hundreds of millions of users engage with the free service [57-69]. The subscription costs about $3.99 per month [65-68], a low-cost model that Lehane repeatedly emphasized as the foundation for mass participation in the emerging AI-driven economy [57-62].


The literacy pillar goes beyond basic reading, writing and arithmetic to include hands-on experience with AI. Lehane urged people to “start using the tools” in any form, whether for astrology, sports betting or other experiments, because repeated use rapidly builds competence [70-78]. He also referenced Rupa’s point about literacy, underscoring the need for practical experimentation [70-71].


The agency pillar is described as the most challenging. Lehane contended that AI is a general-purpose technology that can amplify anyone’s ability to think, learn, create and build, but only if users deliberately employ it as a partner rather than a shortcut [80-86]. He linked agency to a broader re-imagining of the social contract: by using AI, individuals can “own their labour” and capture its economic value, reshaping the historic tension between labour and capital [94-103]. This shift, he argued, requires a new educational ethos that moves beyond the assembly-line mindset of the U.S. industrial-age system [97-101].


To illustrate the stakes, Lehane invoked the printing press as a historical analogue. He contrasted Europe’s fragmented political landscape, which allowed the press to democratize knowledge and fuel the Renaissance, with China’s authoritarian suppression of the same technology [112-124]. He argued that if the world’s largest democracy-India-can democratize AI, it will set a precedent for the rest of the world [108-124].


Consequently, OpenAI regards India not merely as a market but as a strategic partner in fulfilling its mission to build AI that benefits all of humanity. He thanked the audience and closed by reiterating that commitment [108-124].


While all speakers agreed on the need for broad AI access, they differed in emphasis. Speaker 2 focused on expanding multilingual support as a primary lever for democratization, suggesting future language rollout rather than guaranteeing a schedule [11-12]. Lehane, by contrast, placed universal free access, literacy and agency at the core of his framework [55-64][80-86]. Both, however, concurred that AI can generate substantial economic value and that education is the key mechanism for narrowing the capability gap [44-48][52-53].


Lehane’s remarks were punctuated by several thought-provoking comments: the concrete 7× productivity metric as a warning about widening inequality [44-48]; the three-pillar model that highlights agency as an often-overlooked component [80-86]; the printing-press analogy that frames AI’s geopolitical trajectory [112-124]; and a challenge to the legacy of the U.S. education system, urging curricula that enable students to “own their labour” [97-103]. These points shifted the discussion from a simple showcase of tools to a deeper examination of equity, empowerment and global governance.


In summary, the session progressed logically from an opening acknowledgement of partnerships, through a description of OpenAI’s multilingual AI recruiter, to a detailed analysis of the capability gap and the three-pillar strategy required to democratize AI. The historical analogy and the emphasis on India’s democratic context reinforced the view that the country can lead the world toward a democratic AI future. The talk concluded with gratitude to the audience and a reaffirmation of the collaborative spirit that will guide the next steps [129-132].


Session transcriptComplete transcript of the session
Speaker 1

The Chief Global Affairs Officer to join us for this moment. Please give a big round of applause to all our partners. Thank you. Thank you so much. Thank you so much. Thank you for your partnership. Thank you. Thank you. Thanks. Next, we have a short video coming up bridging these two sessions, which is what we talked about in the first section with Ronnie and the experts over here about the economics of AI, employability, what we can do with students. There’s a company called Vahan.ai that has done some incredible work in this space to be able to connect talent together with jobs. We have a short video and right after that we’ll have Mr. Chris Lehane giving us a talk about what we do at OpenAI.

Thank you.

Speaker 2

OpenAI is the main AI recruiter. Today the AI recruiter supports English and Hindi, but we can have each state in the next year or so. So, we want to focus on today, but in the future we can use this technology to bring people access to financial services or to educational opportunities or even government services that they haven’t heard of. So, there are a lot more that this type of unlock will help us in the future.

Speaker 1

Over to you, Chris.

Chris Lehane

Thank you, thank you. Thank you everyone. Thanks for those who’ve hung out for a little bit longer. I know I am standing between you and probably dinner, and given how good the food here is in India, I am very cognizant that I should be pretty quick because I don’t want to stand in your way. First of all, great panel. It was awesome just to hear those different thoughts and perspectives. And Ronnie, who I think is one of the most excited people here in Delhi for this Impact Summit, your parents would be very proud of you in all seriousness. They were born here, they came to the U.S., and then to have their son coming and doing an event like this is a tremendous story.

So thank you. And Ronnie, thank you for everything that you do at OpenAI. And I really want to thank the OpenAI team that has helped put this together and all the incredible work that’s been done over the course of this week. And really thank everyone here in the room for participating in this summit. It is really a unique and special moment in time here in India. You know, yesterday we all heard from the Prime Minister. We also heard from Sam Altman, our CEO and co-founder. And, you know, the commonality in what they talked about. It was really focused on this idea of democratic AI. I think the Prime Minister, not surprisingly, was incredibly eloquent in talking about just how important it is to get that right.

And Sam, I think, built on that in his remarks. And something that Ronnie mentioned, I think, deserves some unpacking because it’s directly related to this democratizing of AI concept. And Ronnie, you had touched on the capability gap. So let me just unpack that for a couple seconds because I do think it’s at the core of this concept of democratic AI. And so what we know from our research, and really the research that Ronnie and his team do, is that there’s something called this capability gap. And what that really means is the technology continues to accelerate. In fact, there’s a recursive nature to it right now. So that acceleration is potentially going to become even faster and faster.

And what we’re seeing is that there is a subset of users. Think of them as power users. And those power users who are using the technology, and Ronnie I think you did your survey of how people are using it. I’m not sure if the astrologist counts as a power user, but I think some of the other examples, we’re getting there and perhaps it does. But what we’re seeing from those power users, so not just those who are using it for sort of a more comprehensive search function, but they’re really using it as an assistant, as a coach, as a multiplier of their work, is they are effectively creating a 7x economic impact. So put that in really simplistic or reductionist terms.

If you’re at a company and you’re a power user of our tools or AI generally, you are likely delivering a 7x value vis-a-vis a non-power user for your employer. Or if you’re self-employed and using it yourself. And so I think we’re really at this moment in time and we need to begin thinking about how do we close that capability gap, right? Because there’s going to be a subset of folks who left to their own are going to do very well by this, but we need to be thinking about society as a whole as we go forward. You know, Ronnie, in addition to being a chief economist, is also an academic professor at Duke.

A number of the folks up here had academic backgrounds. And we do know that over the course of human history, education ends up being the passport to close these types of capability gaps. And I think as we think about the role of education going forward, there’s really three elements to it here. Some of them are touched on in the conversation. The first is access. I mean, access is core to democratizing AI. You know, here in India, we have a hundred million folks who use this on a regular basis. Think about a third of the population who use this on a regular basis.

And amongst the reasons why there’s so many people using it here in India, I mean, we have 800 million globally, is because the vast majority are able to access our tools for free. And even the pay version here in India, Go, is a relatively very affordable model. I think it’s about $3.99 a month, if I’m remembering correctly, okay. And so that access piece is really important. You have to have access to this if you’re going to have any chance to participate in those economics.

The second piece, and I think Rupa hit on this, is literacy. And, you know, this is literacy in the sense of, you know, reading and writing and arithmetic and AI literacy. And it’s really: start using the tools. I get asked all the time at events like this and other events, you know, what should my kid major in in college? Or what? Start. Start using the technology. Start playing with it. Using it for astrology. Astrology. I have friends who use it for sports betting. Just use it in any type, shape, way, or form that you can, because once you start to use it, you will actually become really, really, really good at it. And then the third piece, and the third piece, I think, is really the most challenging and what we all have to get right, is the agency piece.

This is a technology, and this is a sophisticated crowd. You all understand this. But this is a technology that at its core is a general purpose technology. So what are general purpose technologies? We’ve got Ronnie, who’s an economist, who will probably come kick me when I do this description of it. But these are transformational technologies that just change the ability of humans to produce. So if you think about it, humans have been around roughly 200,000 years. For the first 190,000 of those years, humans produced basically what they could eat. And there was sort of a direct one-to-one ratio. And then about 10,000 years ago, you started to get stuff like the wheel. And later on you got the domestication of animals, then the wheel, then you got steam power, and then you got combustion engine, your printing press, electricity, the transistor.

Each one of those drove productivity up higher and higher and drove human progress. This AI is an ultimate leveling tool. It scales the ability of any person, so long as they can talk, to be able to think, to learn, to create, to build, and to produce. But you have to take agency. You actually have to want to use it for those purposes. And one of the things that’s very much in my head, and I’m a lot more familiar with the U.S. public education system than certainly the Indian one, so what I’m going to talk about is a little bit more from a U.S. perspective, although I do think it translates. So in the U.S., the public education system that we currently have was really created at the early stages of the industrial age in the United States.

And it was basically designed to teach folks coming in from rural areas, where they had mostly been in an agricultural economy, to be able to work in factories. So in the U.S., the time that school started sort of aligned with when factories opened. The fact that you went from classroom to classroom was basically designed to teach you to work on an assembly line. Even the bells that moved you around were designed to get you to understand and think as if you were working in the factory. There were also other pieces built in: civics courses, and I’m old enough that we had home ec and wood shop and other types of things that basically taught you core skills to be able to work in a factory. Well, as we enter into this intelligence age, what is the version of that that is going to change how people think and understand? It’s almost an ethos that we have to build. You know, Sam often talks about the fact that if you look at kids in school right now, about 20% of those kids actually really do have agency.

They’re excited to learn this. Maybe the other 80% see it as a really easy way to get their homework done. That’s an ethos that we need to change. We need to get to a place where closer to 100% of those students are going to really think about this as a technology that can allow them to succeed. It can allow me to actually take my labor and not necessarily have to sell my labor or get paid for my labor, but I actually get to own my labor and make money off of my labor. If you really think about how the social contract has generally worked, it has always been this calibration, maybe a fight, between labor and capital.

This technology allows folks who are using their labor to be able to actually own it and participate in it in a fundamentally different way. For us, thinking about that agency piece is really critical. I’ll end this by just saying I think India is in a unique, unique moment to lead on this. The number of folks who are already using it is growing every year. I think, Rana, you may have mentioned that for Codex, which is our developer tool, this is the place in the world where it’s growing the fastest. And I’m going to end with a little bit of a historic analogy. I get to sometimes play a technologist on stages like this, and even a little bit of an economist today.

But I was a history major in college, so I get to play amateur historian, emphasis on amateur. Everyone has their own favorite historical analogy for this technology, for AI. The one that I’ve really been thinking about a lot lately, and none of these are perfect, they’re not exact replications, it’s going to rhyme more than repeat. But the one that’s very much in my head these days is the printing press. And I will sort of share two different parts of the world when the printing press came out. So the printing press developed in the late 1400s. Most of the world was more or less in a very similar economic place.

But two places went in very different directions. One was Europe, and the other was China. In Europe, because there was a little bit of a baseline of actual literacy from the Catholic Church, and moreover because it was a fragmented continent with different countries, that fragmentation really allowed people to use the printing press to spread ideas. No one government actually controlled what was being produced by the printing press. And as a result, you had the democratization of knowledge and ideas and thinking in a way that humans had never experienced at scale up to that moment in time. And there’s a direct through line in Europe from the printing press to the democratization of knowledge, to the Age of Discovery, the age of science, the Enlightenment, the Reformation, and the economic uplift of Europe. The other extreme was what took place in China, where, under the dynasty at that time, there was a real concern that the printing press was going to allow knowledge to be spread, and the spreading of that knowledge would potentially generate a challenge to the authoritarian government in place. And so as we sit here at this moment in time, there is going to be a huge question as to whether the world is built out on democratic AI or autocratic AI, a centralized version of it. And India is going to have the dispositive voice on how that plays out. This is the world’s largest democracy. If the world’s largest democracy is able to democratize AI here, that means we’re going to be democratizing AI around the world. So this is a moment in time for this incredible country that’s going to be playing a leading role, not just for the people here, as important as that is, but for the entire world. And so we feel incredibly privileged to be able to be here in Delhi, in India, at this moment.

It’s amongst the reasons why we don’t see India as a customer. We see India as a strategic partner, and not just a strategic partner for us as a business, but a strategic partner for us to be able to deliver on our company’s mission, which is building AI that benefits all of humanity. Thank you very much for being here. It’s been an incredible week. Talk to you guys soon. Thank you.

Speaker 1

That’s a wrap. Thank you. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (22)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Chris Lehane presented on OpenAI’s activities in India at the summit”

The knowledge base lists Chris Lehane as a speaker for the AI Impact Summit India, confirming his role in presenting OpenAI-related content at the event [S17].

Additional Context (medium)

“OpenAI’s AI recruiter currently supports English and Hindi and plans to add additional Indian languages for each state within the next year”

A separate source notes ongoing efforts in India to expand language support (e.g., Bhashini adding 11 languages with state collaboration), providing context that language-addition initiatives are underway, though it does not specifically reference OpenAI’s recruiter [S89].

Additional Context (medium)

“Research cited by Lehane shows that power users generate roughly a 7× economic impact compared with non‑power users”

The knowledge base confirms the existence of a distinct “power-user” segment and discusses disparities between power users and average users, but it does not provide the specific 7× impact figure, offering contextual support for the concept of a capability gap [S99].

Confirmed (high)

“Ronnie, a chief economist and Duke professor, is cited as an example of academia identifying and addressing capability disparities”

The source mentions Ronnie conducting a survey on how people use AI, confirming his involvement in studying capability disparities [S99].

External Sources (100)
S1
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S2
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S3
S4
Safeguarding Children with Responsible AI — -Chris Lehane- Chief Global Affairs Officer for OpenAI
S5
OpenAI’s push to establish AI as critical infrastructure — In a recent interview,Chris Lehane, the newly appointed vice president of public works at OpenAI, underscores AI’s role …
S6
https://dig.watch/event/india-ai-impact-summit-2026/safeguarding-children-with-responsible-ai — Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I’m one of the two co -moderators, and…
S7
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S8
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S9
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S10
Empowering Workers in the Age of AI — **Additional speakers:** – Representative from One Goal initiative for governance Audience: Yes. Hello, my name is Mel…
S11
AI for Safer Workplaces & Smarter Industries: Transforming Risk into Real-Time Intelligence — Naveen GV: out a long, lengthy form of information for that to be processed much later by another human…
S12
Building Indias Digital and Industrial Future with AI — – Rahul Vatts- Speaker 1 – Speaker 1- Deepak Maheshwari
S13
Skilling and Education in AI — – Speaker 1- Speaker 2
S14
AI 2.0 The Future of Learning in India — -Speaker 2: Former advisor to President Mukherjee, worked in finance ministry, expertise in educational innovation and p…
S15
WS #119 AI for Multilingual Inclusion — – Encouraging learning and use of multiple languages – Ensuring public services support multiple languages – Increasin…
S16
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — “The democratization of AI with inclusion, which I touched upon in my keynote at the EIFGO Global Summit in Geneva last …
S18
AI That Empowers Safety Growth and Social Inclusion in Action — Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to particip…
S19
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S20
We are the AI Generation — In her conclusion, Martin articulated that the fundamental question should not be “who can build the most powerful model…
S21
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Because, while using technology, if we do not use all the technology, then its direction can also be wrong. And that is …
S22
Impact & the Role of AI: How Artificial Intelligence Is Changing Everything — Om Birla, Speaker of India’s Parliament, presented India’s approach to AI integration, emphasizing the country’s commitm…
S23
The Foundation of AI Democratizing Compute Data Infrastructure — The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI developme…
S24
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S25
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — LG AI Research has developed an approach to AI ethics and risk governance based on five core values: humanity, fairness,…
S26
Data-driven discussions at the 2018 High-Level Political Forum on Sustainable Development — The real challenge, highlighted in all sessions and side events related to data, relates to a lack ofcapacityand resourc…
S27
Panel discussion: International law, cyber-norms, CBMs, capacity building,institutional dialogue — Team Blue:Thank you, Mr. President. This is not exactly a question. Now, I would like to thank Dr. Getao for calling me …
S28
Cyber Resilience Playbook for PublicPrivate Collaboration — – Some capabilities have the profile of a pure public good (in the classic economics sense): their consumption is non-r…
S29
Generative AI: Steam Engine of the Fourth Industrial Revolution? — It is evident that there is an urgent need for partnerships with governments to modify basic education in order to meet …
S30
Scaling Multistakeholder Partnerships: Connectivity and Education — However, a glimmer of hope is evident in the formation of public policies directed towards bridging these gaps. The alli…
S31
De-briefing and Next steps — The principal point is the identification of a significant gap in the existing educational and organisational structures…
S32
!” — Though there has been a trend towards reducing wage inequalities since the Great Recession of the late 2000s, inequaliti…
S33
Welcome Address — Prime Minister Narendra Modi This comment introduces a major policy position that distinguishes India’s approach from o…
S34
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — And India is definitely leading the way in terms of application layer. There’s no doubt about that. Now, of course, with…
S35
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — This comment elevated the discussion from technical implementation to geopolitical strategy. It influenced the final que…
S36
Keynote-Dario Amodei — “of AI models, their potential for misuse by individuals and governments, and their potential for economic displacement….
S37
Driving Indias AI Future Growth Innovation and Impact — These key comments fundamentally shaped the discussion by expanding it beyond technical infrastructure to encompass trus…
S38
Future of work — AI technology has the potential to be misused by employers in a variety of ways. For example, some employers may use AI-…
S39
From brainwaves to breakthroughs: The future with brain-machine interfaces — Broader Applications Beyond Disability Assistance
S40
Artificial intelligence and diplomacy: A new tool for diplomats? — Artificial intelligence (AI) is transitioning from science fiction into our everyday lives. Over the past few years, the…
S41
Generative AI: Steam Engine of the Fourth Industrial Revolution? — To ensure widespread innovation and access to AI, it is imperative to keep AI platforms open and avoid closed ecosystems…
S42
WS #208 Democratising Access to AI with Open Source LLMs — Abraham Fifi Selby: All right, thank you very much for the session, and I’m very happy to join this panel. I’m from th…
S43
AI for Democracy_ Reimagining Governance in the Age of Intelligence — “Global governance of AI is a precursor for a democratic development and evolution.”[1]. “So the way to democratize thes…
S44
Democratizing AI Building Trustworthy Systems for Everyone — Absolutely. I mean, not one of those five limbs is possible without deep partnership. And that coordination of those fiv…
S45
Powering AI Global Leaders Session AI Impact Summit India — Lehane argues that education serves as the key to closing this capability gap, identifying three critical components: ac…
S46
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — This insight recognizes that AI education is happening organically through accessible tools rather than just formal educ…
S47
DCAD & DC-OER: Building Barrier-Free Emerging Tech through Open Solutions — Importance of basic education access before focusing on technology While most speakers focused on technological solutio…
S48
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — In conclusion, the analysis underscores the need for equitable access to the internet to ensure inclusive and quality di…
S49
WS #283 AI Agents: Ensuring Responsible Deployment — These key comments fundamentally transformed what could have been a technical discussion about AI governance into a nuan…
S50
Driving Indias AI Future Growth Innovation and Impact — These key comments fundamentally shaped the discussion by expanding it beyond technical infrastructure to encompass trus…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — AI is not just a technology but a social technical system, a system of systems, and one discipline alone is not sufficie…
S52
Data first in the AI era — This provided a unifying framework for understanding all the various tensions discussed – between convenience and privac…
S53
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The discussion revealed a common theme across different contexts: the gap between policy ambition and implementation cap…
S54
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S55
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S56
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S57
Democratizing AI: Open foundations and shared resources for global impact — ## Educational Initiatives ## Future Directions and Call to Action ## Infrastructure and Support ## Introduction and …
S58
OpenAI’s push to establish AI as critical infrastructure — In a recent interview,Chris Lehane, the newly appointed vice president of public works at OpenAI, underscores AI’s role …
S59
Powering AI Global Leaders Session AI Impact Summit India — Lehane argues that education serves as the key to closing this capability gap, identifying three critical components: ac…
S60
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — LG AI Research has developed an approach to AI ethics and risk governance based on five core values: humanity, fairness,…
S61
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — But then this technology, the compute networks, as well as the AI platform stack, comes together in edge devices. Robots…
S62
We are the AI Generation — In her conclusion, Martin articulated that the fundamental question should not be “who can build the most powerful model…
S63
Democratizing AI: Open foundations and shared resources for global impact — ### Three Pillars of Application Mary-Anne Hartley: Yeah, sure. I think what we all saw with the use case over there is…
S64
https://dig.watch/event/india-ai-impact-summit-2026/powering-ai-_-global-leaders-session-_-ai-impact-summit-india-part-2 — which my colleague here will talk about. Big tech’s scope to emissions are already up 30 to 50 % since 2020. Globally, d…
S65
Cyber Resilience Playbook for PublicPrivate Collaboration — – Some capabilities have the profile of a pure public good (in the classic economics sense): their consumption is non-r…
S66
WS #231 Address Digital Funding Gaps in the Developing World — Raj Singh: So, yes, just a couple of things though. One, there was a question about submarine cables. There was a refere…
S67
De-briefing and Next steps — The principal point is the identification of a significant gap in the existing educational and organisational structures…
S68
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-the-future-of-learning-in-india — And mentor -mentee is always a guru -shishya context, which is very meaningful and useful. I will close this remark by s…
S69
https://dig.watch/event/india-ai-impact-summit-2026/keynote-vishal-sikka — And overcoming that gap is where a lot of value -creating opportunity is. Bridging that gap requires delivering correct …
S70
LANGUAGE AND DIPLOMACY — It is thus clear that relations between nations may worsen considerably because developments are prejudged by the use of…
S71
ISSN 1011-6702 — Saner, R. & L. Yiu. ‘Business-Government-NGO Relations: Their Impact on Global Economic Governance’. In Global Gov…
S72
Keynote-Dario Amodei — “of AI models, their potential for misuse by individuals and governments, and their potential for economic displacement….
S73
Welcome Address — Prime Minister Narendra Modi This comment introduces a major policy position that distinguishes India’s approach from o…
S74
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — And India is definitely leading the way in terms of application layer. There’s no doubt about that. Now, of course, with…
S75
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — This comment elevated the discussion from technical implementation to geopolitical strategy. It influenced the final que…
S76
Driving Indias AI Future Growth Innovation and Impact — These key comments fundamentally shaped the discussion by expanding it beyond technical infrastructure to encompass trus…
S77
New OpenAI platform aims to connect employers and talent — OpenAI has announced plans tolaunch an AI-powered hiring platformto compete with LinkedIn directly. The service, OpenAI …
S78
From brainwaves to breakthroughs: The future with brain-machine interfaces — Broader Applications Beyond Disability Assistance
S79
Future of work — AI technology can help automate many tasks, allowing people to focus on work that only humans can do. Employers canreduc…
S80
Indeed expands AI tools to reshape hiring — Indeed isexpanding its use of AIto improve hiring efficiency, enhance candidate matching, and support recruiters, while …
S81
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S82
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and i…
S83
Saturday Opening Ceremony: Summit of the Future Action Days — Folly Bah Thibault: summit of the future action days. Yes! I love the energy already. Loving the energy. My name is Fo…
S84
UN: Summit of the Future Global Call — Pakistan:His Excellency Antonio Guterres, Secretary General of the United Nations. His Excellency Nangolo Mumba, Preside…
S85
WS #187 Bridging Internet AI Governance From Theory to Practice — Luca Belli: We have six minutes. Do we have any other comments or questions in the room? I don’t see any hands. We have …
S86
AI in education: Leveraging technology for human potential — ## OpenAI’s Evolution and Current Scale However, Mills argued that when used correctly as a learning assistant and tuto…
S87
WS #254 The Human Rights Impact of Underrepresented Languages in AI — 3. Accessibility Framing: Singh suggested framing language inclusion as an accessibility issue to leverage existing lega…
S88
WS #462 Bridging the Compute Divide a Global Alliance for AI — – OpenAI committed to expanding their “OpenAI for countries” programme and Academy training Ivy Lau-Schindewolf: All th…
S89
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — The expansion of language support remains an ongoing challenge and opportunity. Currently, Bhashini is being enhanced to…
S90
AI job interviews raise concerns among recruiters and candidates — As AI takes on a growing share of recruitment tasks,concernsare mounting that automated interviews and screening tools c…
S91
GermanAsian AI Partnerships Driving Talent Innovation the Future — This perspective was complemented by Mr. Govind Jaiswal from India’s Ministry of Education, who provided a historical fr…
S92
WSIS Action Line C10: The Future of the Ethical Dimensions of the Information Society — Ricardo Baptista Leite:Okay, I’ll go, I’ll go for it. Yeah, good. Okay. So, well, thank you so much again. It’s okay, bu…
S93
Keynote-Sam Altman — -Sam Altman: Role/Title: CEO of OpenAI; Area of expertise: Artificial intelligence, artificial general intelligence deve…
S94
Keynote interview with Sam Altman (remote) and Nick Thompson (in-person) — Introduction:We’ve got, in true AI for good style, a modern visionary who is likely shaping the world’s attitude to AI. …
S95
Elon Musk and UK PM Rishi Sunak delve into AI safety, China, and the future of work at AI summit — Elon Musk, Tesla and SpaceX CEO, and Rishi Sunak, the British Prime Minister, had a wide-ranging conversation on AI, Chi…
S96
AI & Diplomacy: Managing New Frontiers – ADF 2024 — Muniz emphasized AI’s potential to enhance democratic life, sharing insights from a project on democracy-affirming techn…
S97
AI Policy Summit Opening Remarks: Discussion Report — “The only way you could see that he was communicating with us is that there was a little bit of a tear coming out of his…
S98
What policy levers can bridge the AI divide? — ## Forward-Looking Perspectives ## Key Challenges and Opportunities **Additional speakers:** **Smart Africa’s Contine…
S99
https://dig.watch/event/india-ai-impact-summit-2026/powering-ai-global-leaders-session-ai-impact-summit-india — And what we’re seeing is that there is a subset of users. Think of them as power users. And those power users who are us…
S100
Day 0 Event #154 Last Mile Internet: Brazil’s G20 Path for Remote Communities — Jarrell James highlighted the direct correlation between per capita electricity generation and GDP, emphasizing the fund…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument · 49 words per minute · 144 words · 174 seconds
Argument 1
Vahan.ai showcases AI‑driven talent‑job matching, highlighting AI’s role in employability (Speaker 1)
EXPLANATION
Speaker 1 points to Vahan.ai as an example of how artificial intelligence can be used to connect talent with job opportunities. This illustrates AI’s potential to improve employability and streamline the labour market.
EVIDENCE
Speaker 1 mentions a company called Vahan.ai that has done “incredible work … to be able to connect talent together with jobs” [8].
MAJOR DISCUSSION POINT
Vahan.ai showcases AI‑driven talent‑job matching, highlighting AI’s role in employability (Speaker 1)
AGREED WITH
Chris Lehane
Speaker 2
1 argument · 158 words per minute · 80 words · 30 seconds
Argument 1
AI recruiter expands multilingual support and envisions future use for financial, educational, and government services (Speaker 2)
EXPLANATION
Speaker 2 describes OpenAI’s AI recruiter, which currently supports English and Hindi and plans to add more languages. The speaker envisions the technology later being applied to provide broader access to financial, educational, and governmental services.
EVIDENCE
Speaker 2 states that the AI recruiter “supports English and Hindi, but we can have each state in the next year or so” and that in the future it could be used to bring people access to “financial services or … educational opportunities or even government services” [11-14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual inclusion and plans to extend AI services to financial, educational, and government domains are discussed in the AI for Multilingual Inclusion session, which highlights language support for public services [S15].
MAJOR DISCUSSION POINT
AI recruiter expands multilingual support and envisions future use for financial, educational, and government services (Speaker 2)
AGREED WITH
Chris Lehane
Chris Lehane
4 arguments · 185 words per minute · 2347 words · 758 seconds
Argument 1
Free, low‑cost AI tools in India enable mass participation and economic inclusion (Chris Lehane)
EXPLANATION
Chris Lehane emphasizes that AI tools are offered for free or at a very low subscription price in India, allowing hundreds of millions of users to access the technology. This broad accessibility is presented as a key factor for inclusive economic participation.
EVIDENCE
He notes that “the vast majority are able to access our tools for free” and that the paid version in India costs “about $3.99 a month” [65-68], and stresses that “you have to have access to this if you’re going to have any chance to participate in those economics” [69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Impact Summit highlighted that India has 100 million regular AI users with free tools and a paid tier at $3.99 per month, illustrating low-cost access enabling broad participation [S17].
MAJOR DISCUSSION POINT
Free, low‑cost AI tools in India enable mass participation and economic inclusion (Chris Lehane)
AGREED WITH
Speaker 1, Speaker 2
DISAGREED WITH
Speaker 2
Argument 2
Power users of AI deliver roughly 7× the economic value of non‑power users, underscoring the need to close the capability gap (Chris Lehane)
EXPLANATION
Lehane explains that individuals who use AI as a “coach” or “multiplier” generate about seven times more economic impact than typical users. This disparity highlights a capability gap that must be addressed to ensure broader societal benefit.
EVIDENCE
He reports that power users are “effectively creating a 7x economic impact” and that a power-user at a company “is likely delivering a 7x value vis-a-vis a non-power user” [44-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same summit emphasized that power users generate about seven times the economic impact of typical users, underscoring a capability gap [S17].
MAJOR DISCUSSION POINT
Power users of AI deliver roughly 7× the economic value of non‑power users, underscoring the need to close the capability gap (Chris Lehane)
AGREED WITH
Speaker 1
Argument 3
Access, AI literacy, and personal agency are three essential pillars; fostering agency lets individuals own and profit from their labor (Chris Lehane)
EXPLANATION
Lehane outlines three pillars for democratizing AI: universal access, AI‑related literacy, and personal agency. He argues that when people have agency, they can use AI to own and monetize their labour rather than merely selling it.
EVIDENCE
He repeats that “access is core to democratizing AI” and cites India’s large user base and low-cost model as evidence of access [55-63]; he describes literacy as “reading and writing and arithmetic and AI literacy” and gives examples of people using AI for astrology, sports betting, etc., to illustrate learning by use [70-80]; finally, he discusses agency as the “most challenging” pillar, explaining that AI is a general-purpose technology that lets anyone who wants to use it “own my labor and make money off of my labor” [80-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Lehane’s three-pillar framework of access, AI literacy, and agency is outlined in the Powering AI Global Leaders session and reinforced by discussions on AI skilling and inclusive governance in [S13] and [S20].
MAJOR DISCUSSION POINT
Access, AI literacy, and personal agency are three essential pillars; fostering agency lets individuals own and profit from their labor (Chris Lehane)
AGREED WITH
Speaker 2
DISAGREED WITH
Speaker 2
Argument 4
The printing‑press analogy illustrates how AI could follow either a democratic or autocratic trajectory; India’s large democracy positions it to lead the democratization of AI globally (Chris Lehane)
EXPLANATION
Lehane draws a historical parallel between the invention of the printing press and today’s AI, noting that the press led to divergent outcomes in Europe (democratic diffusion of knowledge) and China (authoritarian control). He argues that India, as the world’s largest democracy, can shape AI’s future toward a democratic model.
EVIDENCE
He recounts the printing-press story, describing how Europe’s fragmented political landscape allowed the press to spread ideas and foster democratization, whereas China’s centralized dynasty suppressed it, and then links this to the current choice between “democratic AI or autocratic AI” with India’s role as a strategic partner in this outcome [112-124], concluding that India’s democratic status can influence global AI democratization [124-128].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Lehane used the printing-press analogy to compare democratic versus authoritarian AI outcomes, noting India’s democratic status as a strategic advantage, as presented in the summit and further referenced in AI for Democracy and India’s AI integration remarks [S17], [S21], [S22].
MAJOR DISCUSSION POINT
The printing‑press analogy illustrates how AI could follow either a democratic or autocratic trajectory; India’s large democracy positions it to lead the democratization of AI globally (Chris Lehane)
Agreements
Agreement Points
Democratizing AI through broad access
Speakers: Speaker 1, Speaker 2, Chris Lehane
Vahan.ai showcases AI‑driven talent‑job matching, highlighting AI’s role in employability (Speaker 1)
AI recruiter expands multilingual support and envisions future use for financial, educational, and government services (Speaker 2)
Free, low‑cost AI tools in India enable mass participation and economic inclusion (Chris Lehane)
All three speakers stress that AI must be widely accessible – through platforms that connect talent to jobs, through multilingual support that reaches diverse language groups, and through free or very low-cost tools that allow mass participation – as a prerequisite for democratizing AI [8][11-14][65-69].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for open AI ecosystems and open-source LLMs to ensure universal access align with policy advocacy for open platforms [S41] and the Global South perspective on open-source models as a democratizing force [S42]; similar open-foundation strategies are discussed in AI democratization roadmaps [S57].
AI can generate significant economic value and must address the capability gap
Speakers: Speaker 1, Chris Lehane
Vahan.ai showcases AI‑driven talent‑job matching, highlighting AI’s role in employability (Speaker 1)
Power users of AI deliver roughly 7× the economic value of non‑power users, underscoring the need to close the capability gap (Chris Lehane)
Both speakers highlight AI’s potential to boost economic outcomes – Speaker 1 by linking talent to jobs, and Chris Lehane by noting that power users create about seven times more economic impact, pointing to a capability gap that needs to be closed [8][44-48].
POLICY CONTEXT (KNOWLEDGE BASE)
The economic growth potential of AI across sectors is highlighted in productivity and innovation reports [S55] and policy analyses urging AI to narrow development disparities [S56]; the need to bridge the capability gap mirrors governance findings on the gap between ambition and implementation capacity [S53].
Education, AI literacy and personal agency are essential for democratizing AI
Speakers: Speaker 2, Chris Lehane
AI recruiter expands multilingual support and envisions future use for financial, educational, and government services (Speaker 2)
Access, AI literacy, and personal agency are three essential pillars; fostering agency lets individuals own and profit from their labor (Chris Lehane)
Speaker 2 and Chris Lehane agree that beyond mere access, building AI-related literacy and personal agency is crucial; the recruiter’s future educational role and Lehane’s three-pillar framework both stress learning and agency as keys to inclusive AI adoption [11-14][70-73].
POLICY CONTEXT (KNOWLEDGE BASE)
Lehane’s three pillars (access, literacy, agency) are documented as key to closing capability gaps [S45]; broader calls for democratized learning through accessible tools support this view [S46] and emphasize prioritizing basic education before technology deployment [S47]; the importance of human agency is reinforced in discussions of responsible AI deployment [S49].
Similar Viewpoints
Both see AI as a driver of economic participation – connecting people to work and amplifying productivity for those who master the technology [8][44-48].
Speakers: Speaker 1, Chris Lehane
Vahan.ai showcases AI‑driven talent‑job matching, highlighting AI’s role in employability (Speaker 1)
Power users of AI deliver roughly 7× the economic value of non‑power users, underscoring the need to close the capability gap (Chris Lehane)
Both stress the importance of removing cost and language barriers to broaden AI uptake across populations [11-14][65-69].
Speakers: Speaker 2, Chris Lehane
AI recruiter expands multilingual support and envisions future use for financial, educational, and government services (Speaker 2)
Free, low‑cost AI tools in India enable mass participation and economic inclusion (Chris Lehane)
Both present AI as a platform that can extend beyond pure technology to improve employability and deliver broader societal services [8][11-14].
Speakers: Speaker 1, Speaker 2
Vahan.ai showcases AI‑driven talent‑job matching, highlighting AI’s role in employability (Speaker 1)
AI recruiter expands multilingual support and envisions future use for financial, educational, and government services (Speaker 2)
Unexpected Consensus
AI as a catalyst for broader societal services and personal agency beyond traditional tech uses
Speakers: Speaker 2, Chris Lehane
AI recruiter expands multilingual support and envisions future use for financial, educational, and government services (Speaker 2)
Access, AI literacy, and personal agency are three essential pillars; fostering agency lets individuals own and profit from their labor (Chris Lehane)
While Speaker 2 frames AI primarily as a multilingual recruiter for future financial, educational and governmental applications, Chris Lehane extends the discussion to personal agency: how individuals can own and monetize their labor using AI. The convergence on AI as a tool for both systemic service delivery and individual empowerment was not explicitly linked earlier in the dialogue [11-14][80-96].
POLICY CONTEXT (KNOWLEDGE BASE)
AI’s role as critical infrastructure for public services and continuity is noted in capacity-challenge analyses [S54]; its cross-sector economic benefits are detailed in growth impact studies [S55]; the emphasis on partnership, trust, and societal impact aligns with frameworks for trustworthy AI systems [S44] and broader governance considerations [S50].
Overall Assessment

The speakers converge on three core ideas: (1) AI must be broadly accessible through free/low‑cost tools and multilingual support; (2) AI can generate substantial economic value, but a capability gap exists that needs to be closed; (3) education, AI literacy and personal agency are essential to ensure that the benefits of AI are widely shared. These points cut across access, economic inclusion, and capacity‑building themes.

Level of agreement: High – there is strong alignment among the speakers on the necessity of access, the economic promise of AI, and the role of education/agency. This consensus suggests a shared commitment to policies that lower barriers, invest in AI literacy, and design inclusive AI ecosystems, reinforcing the agenda of democratizing AI at both national (India) and global levels.

Differences
Different Viewpoints
Different strategic emphasis for democratizing AI – Chris Lehane stresses universal free access, literacy and personal agency as the core pillars, while Speaker 2 focuses on expanding multilingual support of an AI recruiter and future applications to financial, educational and government services.
Speakers: Chris Lehane, Speaker 2
Free, low‑cost AI tools in India enable mass participation and economic inclusion (Chris Lehane)
AI recruiter expands multilingual support and envisions future use for financial, educational, and government services (Speaker 2)
Chris argues that democratization hinges on free or very cheap access, AI literacy and agency ([65-68][55-63][80-96]), whereas Speaker 2 highlights the need to add language coverage and later leverage the recruiter for broader public-service delivery ([11-14]). Both seek inclusive AI but propose different primary levers.
POLICY CONTEXT (KNOWLEDGE BASE)
Lehane’s focus on free access, literacy, and agency is articulated in his public statements on AI as critical infrastructure [S45]; the technical emphasis on multilingual recruitment tools and broader applications reflects roadmap discussions on multilingual capabilities and technical features [S57] and highlights language-access challenges identified in digital inclusion reports [S48].
Priority of education versus technology rollout – Chris Lehane positions education as the historic passport to close capability gaps, while Speaker 2 does not address education, concentrating on technological features (multilingual support) as the solution.
Speakers: Chris Lehane, Speaker 2
Access, AI literacy, and personal agency are three essential pillars; fostering agency lets individuals own and profit from their labor (Chris Lehane)
AI recruiter expands multilingual support and envisions future use for financial, educational, and government services (Speaker 2)
Chris stresses that education (literacy and agency) is essential to bridge the capability gap ([52-53][70-80][80-96]), whereas Speaker 2’s remarks focus on language capabilities and downstream service applications without mentioning education ([11-14]). This reflects a divergence on what should be prioritized first.
POLICY CONTEXT (KNOWLEDGE BASE)
Lehane’s view of education as the historic passport to capability closure is documented in his advocacy for access, literacy, and agency [S45]; the argument for prioritizing basic education before technology deployment is echoed in barrier-free tech initiatives [S47]; broader analyses of policy ambition versus capacity underscore the tension between education-centric and technology-centric approaches [S53].
Unexpected Differences
Overall Assessment

The discussion shows broad consensus on the need to democratize AI and use it to improve economic inclusion. The main points of contention are strategic – whether to prioritize universal free access, literacy and agency (Chris Lehane) or to focus first on expanding multilingual capabilities and future service integrations (Speaker 2). A secondary tension concerns the role of education versus technological rollout as the primary lever for closing the capability gap.

Level of agreement: Low to moderate. The speakers are aligned on the end goal (democratized, inclusive AI) but diverge on the immediate policy and implementation priorities. This suggests that collaborative frameworks will need to reconcile access‑cost models with language‑expansion roadmaps and embed education components to achieve cohesive progress.

Partial Agreements
All three speakers agree that AI tools can broaden access to economic opportunities – whether through job‑matching platforms, multilingual recruitment services, or free low‑cost tools – and that this contributes to inclusive growth. They differ on the specific mechanism, but share the same overarching goal of leveraging AI for employability and economic inclusion ([8][11-14][65-68]).
Speakers: Speaker 1, Speaker 2, Chris Lehane
Vahan.ai showcases AI‑driven talent‑job matching, highlighting AI’s role in employability (Speaker 1)
AI recruiter expands multilingual support and envisions future use for financial, educational, and government services (Speaker 2)
Free, low‑cost AI tools in India enable mass participation and economic inclusion (Chris Lehane)
Takeaways
Key takeaways
Democratizing AI requires broad, affordable access to tools, as demonstrated by free or low‑cost AI services in India. AI can significantly boost economic productivity; power users generate roughly 7× more value than non‑power users, highlighting a capability gap. Education is critical to closing the capability gap and includes three pillars: access, AI literacy, and personal agency to own and monetize one’s labor. Historical analogies (printing press) illustrate that AI could follow either a democratic or autocratic path; India’s status as the world’s largest democracy positions it to influence a democratic AI future. Partnerships such as with Vahan.ai show AI’s potential to improve employability by matching talent with jobs.
Resolutions and action items
Expand multilingual support of the AI recruiter (currently English and Hindi) to additional Indian languages within the next year. Maintain low‑cost pricing (e.g., $3.99/month) to ensure continued mass access in India. Collaborate with Indian stakeholders to develop programs that improve AI literacy and foster user agency.
Unresolved issues
Specific strategies and timelines for closing the capability gap among general users. How to effectively integrate AI into financial, educational, and government services at scale. Methods for measuring and ensuring that increased agency translates into equitable economic ownership for users. Potential regulatory or policy frameworks needed to prevent an autocratic deployment of AI.
Suggested compromises
Balancing rapid expansion of AI access with parallel investments in AI literacy and agency training to avoid a superficial adoption. Positioning India as a strategic partner rather than merely a customer, aligning OpenAI’s mission with local democratic goals.
Thought Provoking Comments
He unpacked the ‘capability gap’, explaining that power users of AI can generate a 7x economic impact compared to non‑power users, and warned that this gap could widen societal inequality.
It highlighted a concrete metric (7x value) that illustrates how uneven adoption can translate into massive economic disparities, moving the conversation from abstract benefits to tangible risks.
This comment shifted the discussion toward equity concerns, prompting listeners to consider how to close the gap through education and policy rather than just celebrating AI’s potential.
Speaker: Chris Lehane
He identified three pillars for democratizing AI – access, literacy, and agency – stressing that without agency, technology remains a tool rather than an empowerment platform.
By structuring the challenge into three clear components, he provided a roadmap for stakeholders and introduced agency as a nuanced, often overlooked factor beyond mere access or skills.
The audience’s focus moved from simply providing tools to a deeper conversation about fostering purposeful use, influencing later remarks about education reform and labor ownership.
Speaker: Chris Lehane
He compared AI’s societal impact to the printing press, noting how Europe’s fragmented political landscape allowed democratization of knowledge, whereas China’s centralized control suppressed it.
The historical analogy framed AI as a pivotal technology that could either expand democratic discourse or reinforce authoritarian control, adding geopolitical depth to the dialogue.
This sparked a shift toward discussing the role of governance, positioning India’s democratic context as a decisive factor and leading to the claim that India could set the global standard for democratic AI.
Speaker: Chris Lehane
He argued that the current U.S. public education system was designed for the industrial age, teaching assembly‑line mindsets, and that a new ethos is needed so students view AI as a means to ‘own their labor’ rather than just a shortcut for homework.
The critique of legacy education systems introduced a systemic perspective, linking historical institutional design to present challenges in AI adoption.
This reframed the conversation around curriculum reform and cultural attitudes, encouraging participants to think about long‑term societal change rather than short‑term tool deployment.
Speaker: Chris Lehane
He stated, ‘If you really think about how the social contract has generally worked, it has always been this calibration, maybe a fight between labor and capital. This technology allows folks who are using their labor to be able to actually own it and participate in it in a fundamentally different way.’
This comment introduced a radical re‑thinking of economic relationships, suggesting AI could invert traditional labor‑capital dynamics.
It deepened the analysis by connecting AI to broader socioeconomic structures, prompting listeners to contemplate policy, ownership models, and the future of work.
Speaker: Chris Lehane
He emphasized, ‘India is the world’s largest democracy; if it can democratize AI here, we will be democratizing AI around the world,’ positioning India as a strategic partner rather than just a customer.
This statement elevated the discussion from technical deployment to geopolitical leadership, highlighting India’s potential influence on global AI governance.
The tone shifted to a call to action for Indian stakeholders, reinforcing the summit’s purpose and setting a forward‑looking agenda for collaboration.
Speaker: Chris Lehane
Overall Assessment

Chris Lehane’s remarks served as the engine of the discussion, repeatedly introducing new lenses—economic disparity, a three‑pillar framework, historical analogies, education reform, labor‑capital rebalancing, and geopolitical leadership—that redirected the conversation from a simple showcase of AI tools to a multifaceted debate about equity, agency, and global governance. Each pivotal comment opened a new thematic branch, prompting participants to consider not just how AI works, but who benefits, how societies must adapt, and what role India can play in shaping a democratic AI future.

Follow-up Questions
How can the AI recruiter be expanded to provide access to financial services, educational opportunities, and government services?
Understanding the pathways to extend AI recruiting tools beyond job matching is essential for broader societal impact and inclusion.
Speaker: Speaker 2 (OpenAI representative)
How can we close the capability gap between power users of AI and the broader population?
Bridging this gap is critical to ensure that AI benefits are equitably distributed rather than concentrated among a small group of advanced users.
Speaker: Chris Lehane
What strategies are needed to improve AI literacy (reading, writing, arithmetic, AI-specific skills) among users?
Higher AI literacy enables more people to use the technology effectively and safely, reducing misuse and widening participation.
Speaker: Chris Lehane
How can we cultivate agency in users—especially students—so they view AI as a tool to own and monetize their labor rather than merely a shortcut?
Agency determines whether individuals can leverage AI for genuine productivity gains and economic empowerment.
Speaker: Chris Lehane
What educational reforms are required to align public education with the AI/intelligence age, moving beyond industrial‑era models?
Curricula and pedagogy must evolve to prepare learners for a future where AI is a core productivity multiplier.
Speaker: Chris Lehane
How can we increase the proportion of students who have genuine agency and enthusiasm for AI from roughly 20% to near 100%?
Broad student engagement is necessary for a workforce that can fully harness AI’s potential and for democratic AI adoption.
Speaker: Chris Lehane
Will AI development result in democratic AI or autocratic, centralized AI, and what role can India play in shaping this outcome?
The governance model adopted will affect global AI ethics, control, and the distribution of power; India’s stance could set a precedent.
Speaker: Chris Lehane
What metrics and methodologies can be used to quantify the reported 7x economic impact of power users of AI tools?
Robust measurement is needed to validate claims, guide policy, and inform investment decisions.
Speaker: Chris Lehane
How can India’s rapid growth in usage of developer tools like Codex be leveraged to lead in AI democratization?
Capitalizing on this momentum could position India as a strategic partner and model for worldwide AI inclusion.
Speaker: Chris Lehane

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Marcus Wallenberg Chairman SEB & Saab


Session at a glance: Summary, keypoints, and speakers overview

Summary

Speaker 1 frames the discussion as a major step forward for an AI initiative, hoping it will follow the model of Prime Minister Modi’s strategy of mobilising Indian companies for long-term political goals [1-3]. He first outlines Sweden’s decade-long AI research programme, launched around 2015-2017 under the name WASP, which has built a strong basic-research base and now graduates roughly one PhD each week [8-10]. In contrast, India has not pursued a primarily R&D route but has developed a vast applied software-engineering and IT-services sector that serves a global customer base [13-15]. The speaker argues that these complementary strengths make Sweden and India well-suited for joint research and application projects, especially as India’s market momentum can amplify AI initiatives for its customers [16-18]. He notes that Swedish industry is dominated by multinational engineering firms, while India’s industrial structure differs, yet both can benefit from layering AI across the technology stack [21-24]. The urgency of this collaboration is heightened by the surge of cheap Chinese exports following the April 2 tariff changes, which threatens European and Swedish manufacturers [26-28]. To remain competitive, the speaker contends that diffusion of AI into large companies is essential, enabling them to compete on price and to develop new business models beyond mere cost efficiency [29-33][35-36]. He observes that India tends to have a more positive outlook on AI and digitisation than Europe, and highlights life-science applications such as accelerated drug-molecule discovery and personalised medicine as especially promising [38-45]. In the defence sector, he cites examples like Saab’s AI-enhanced radar systems and a 2025 test in which an AI agent fully controlled a Gripen aircraft, demonstrating AI’s strategic value [46-49][62]. 
The speaker also points to telecommunications, stating that future 5G and 6G networks will be largely AI-driven, which is crucial for handling the massive data flows societies will generate [66-67]. He concludes that AI will increasingly support both commercial efficiency and rapid product development across many domains [64-65]. Overall, the discussion emphasizes that leveraging Sweden’s research capacity together with India’s software expertise can strengthen competitiveness against low-cost rivals and drive transformative applications in health, defence, and communications [15-18][29-33][38-45][66-67]. The speaker therefore views AI diffusion as a pivotal factor for economic growth and societal resilience in the coming years [31-33][66-67].


Keypoints

Sweden and India have complementary AI strategies: Sweden has built a strong R&D foundation with a decade-long national program, dedicated research arenas, and a PhD pipeline that now produces a graduate each week [8-12]. India, by contrast, has focused on applied software engineering through its large IT services sector, creating a vast global customer base [13-15]. This makes the two countries well-suited to collaborate, combining Sweden’s research depth with India’s implementation expertise [15-16].


AI diffusion is seen as essential for industrial competitiveness, especially against cheap Chinese exports: The speaker notes that after recent tariff changes, a flood of low-priced Chinese products has pressured European firms [27-30]. While AI is not a panacea, embedding it across large companies is viewed as the key to staying competitive, enabling new business models, cost efficiency, and innovative services [31-36].


Targeted AI applications are highlighted in life sciences, defense, and telecommunications: The speaker points to AI accelerating drug discovery, personalized medicine, and broader health services [40-45]; to AI-driven capabilities in defense systems such as radar-controlled aircraft and autonomous flight of the Gripen [46-63]; and to the future of 5G/6G networks being fundamentally AI-driven, handling massive data flows for society [66-67].


A call for deeper Sweden-India collaboration on AI implementation: By leveraging Indian IT services to place AI “on top of the whole stack,” both nations can jointly address the competitive challenge posed by China and create new market opportunities [24-26].


Overall purpose/goal: The speaker aims to promote a joint Sweden-India AI initiative, arguing that combining Sweden’s research strengths with India’s applied software expertise will boost industrial competitiveness, drive innovation in key sectors, and position both countries advantageously in the global AI landscape.


Overall tone: The discussion is consistently optimistic and forward-looking, beginning with enthusiasm about the “big step forward” [1] and maintaining a constructive, solution-oriented tone. While acknowledging external pressures (e.g., Chinese price competition) [27-30], the speaker frames AI as a positive lever for growth rather than a source of alarm, ending on a hopeful note about AI-driven networks and societal benefits [66-67].


Speakers

Speaker 1


– Role/Title:


– Area of Expertise:


Additional speakers:


(none)


Full session report: Comprehensive analysis and detailed insights

Speaker 1 opens the talk by describing the AI initiative as “a really big step forward” and expresses the hope that it will follow the model set by Prime Minister Narendra Modi, who mobilised Indian companies to achieve long-term political objectives [1-3].


He then turns to Sweden’s own AI journey. Sweden launched the WASP programme between 2015 and 2017; a decade later it continues to fund basic AI research and now graduates roughly one PhD per week [8-10]. This sustained effort has built a deep knowledge base that underpins Sweden’s industrial AI capabilities [11-12].


In contrast, the speaker notes that India has not pursued a primarily R&D-centric route. Instead, it has built a massive applied software-engineering and IT-services sector that serves a worldwide customer base, creating both a knowledge and a market platform for AI applications [13-15].


Because of these complementary strengths, the speaker argues that Sweden and India are well-suited for joint projects that combine Swedish research depth with Indian implementation expertise. He points out that Indian momentum can amplify AI initiatives for customers, offering a “fantastic possibility” to develop the programme effectively [15-18].


The industrial context is then outlined. Swedish industry is dominated by multinational engineering firms with a global scale [21], whereas India’s industrial structure differs. The key opportunity for Swedish industry, he argues, is that Indian IT-services expertise can be layered on top of the Swedish engineering stack, enabling AI diffusion across the whole technology stack [20-24].


A sense of urgency follows from recent geopolitical shifts. Since the tariffs imposed on 2 April (the “beautiful day in the Rose Garden,” as the speaker wryly puts it), a surge of very cheap Chinese exports has entered world markets, creating a serious challenge for European and Swedish manufacturers [26-28]. The speaker asks how firms can compete with low-price Chinese products and argues that AI diffusion into large companies is essential to meet this challenge [29-30].


While acknowledging that AI is not a cure-all, he stresses that embedding AI throughout large enterprises will be “key” for future competitiveness. AI can deliver cost efficiency, enable new business models, and create novel services and products, thereby strengthening market positions [31-36].


The speaker highlights sector-specific opportunities. He observes that India tends to have a more positive outlook on AI and digitisation than Europe [38], and he identifies life sciences as a particularly promising field. AI can accelerate the discovery of new drug molecules, support personalised medicine based on individual test results, and ultimately provide treatments for patients currently underserved [40-45]. He also notes his board membership at AstraZeneca, a large British-Swedish pharmaceutical company [42].


In the defence arena, the speaker emphasizes that the sector is heavily AI-enabled, citing Saab’s use of AI for data-intensive radar-controlled aircraft and a 2025 test in which an AI agent flew a Gripen aircraft in full control [46-55].


Beyond defence, the speaker mentions broader industrial benefits, including robotics and faster product development, which together “support companies in terms of being much more efficient” and drive rapid innovation [64-65].


Looking ahead to telecommunications, he asserts that future 5G and 6G networks will be fundamentally AI-driven, handling the massive data flows that societies will generate. This point is reinforced by the remark from Mr. Erik Ekudden, chief technology officer of Ericsson, about AI-driven telecom networks [58-60][66-67][S41].


In conclusion, Speaker 1 calls for a deeper Sweden-India collaboration that leverages Sweden’s long-term R&D capacity and India’s applied software expertise. By doing so, both nations can enhance industrial competitiveness against low-cost rivals, foster transformative applications in health, defence and communications, and secure economic growth and societal resilience in the coming years [1-3][8-12][13-18][31-36][38-45][46-55][66-67].


Session transcript: Complete transcript of the session
Speaker 1

It’s really a big step forward. I’ve followed Indian business for a long, long time, and the whole setup here reminds me of the way that Right Honourable Premier Modi set up his whole idea around making India with this tremendous force and getting the backing of very many Indian companies to achieve long-term political goals. So I really hope that the AI initiative will go the same way. I thought I would talk a little bit about three different matters. I’ll be relatively brief. I will start a little bit to talk about Sweden and India. I’ll talk a little bit about… AI diffusion and what’s important there, and I’ll talk a little bit about some of the practical issues that we see from an industrial point of view, what we can think about when it comes to AI. So let me start by taking you back: the Swedish context might be a reference point to what is going on here right now. Sweden started a big research effort; our family put in a program, which is now ten years plus, focusing on developing basic research in AI, and we started that 2015, 2017. We funded it with a major push into this, and the reason for that was basically because we saw the automation needs and the autonomy needs of Swedish industry and industrial products. It’s called WASP.

And not only do they have a number of arenas where they base a certain amount of typical research that you can use for AI, but also they started a school for PhDs and master’s students in AI. And today we graduate one PhD per week out of this program. So what does that mean for the Swedish context? That has been an extremely important part in terms of building the basic knowledge around AI and how you can use it. India, on the other hand, as I see it or as I perceive it, has not gone primarily the R&D route, but primarily the way to build this fantastic knowledge base in software engineering, which is much more applied, especially when you think about how…

India has worked with their IT services companies, developing a tremendous base in terms of customers: not only a knowledge base, but also a customer base all around the world. So actually, from this point of view, India and Sweden should have a very good fit. And I think that some of us who are here on this trip in the Swedish delegation have seen the potential to work much closer with India along research lines and more application lines on the IT services and software knowledge that you have in this country. And as we know, when India starts moving, it’s a very major force. And you will, in my view, have a fantastic possibility to develop your initiative on AI in a very good way for your customers.

And that brings me a little bit into my… My second point. Namely, the whole question that we are dealing with from Swedish industry. You have to remember Swedish industry is to a large extent very much focused on multinational engineering companies that are having a global scale. India, of course, a different industrial structure. But here comes what I think is the big take where actually Sweden and India in more practical terms could work much closer with each other. Namely, the knowledge of the IT services companies putting an AI on top of the whole stack to be able to move this into a completely different position for these companies. So why is this important? This is important because what we’re witnessing today from an industrial point of view, not the least, after April 2nd, the beautiful day in the Rose Garden when all the tariffs were put on.

What we’ve seen since then is this widespread Chinese export of very cheap products into the world market, which is, of course, a big, big challenge for many companies in Europe and also in Sweden. This will be absolutely key for us in the future: how do we make sure that we can compete on the world markets, in a good way, with Chinese and other companies, but primarily Chinese companies offering very low prices? I’m not saying AI is everything. But AI, and the diffusion of AI into the real world of large companies, will be key. Otherwise, we will not be able to do this in a smart way in the years to come.

So therefore, I believe that also on this point, competitiveness will be a very, very important part. But AI gives us more. AI gives us a huge possibility to let companies move into completely new areas in terms of business model, not only being cost efficient, but also providing new services and new products to the market. Here, I move into my third point. My third point is that here in India, I think you often have a more positive way of thinking about AI and digitization than Europe does. And I tell you that when I look at certain industries and what is actually going on right now, it is a tremendous step forward.

And perhaps, since I sit on the board of AstraZeneca, which is a very large British-Swedish pharmaceutical company, I would say that perhaps the most worthwhile application of AI going forward will be in life sciences. Not only life sciences in terms of providing better hospital services and so on, but when you think about how you will be able to use AI in getting new molecules in a much faster way, and how you will be able to apply more personalized medicine based on your test results, being able to apply specialized treatment for people will mean that actually, down the line, we will provide medical care to people that cannot be serviced today in the same way.

Then of course we look at things like robotics, but another thing I would like to bring up is the defense business. In defense materiel, AI will play a very significant role. We see it in many ways today, not least when you start to accumulate and analyze data in a big way. For example, Saab, which is a Swedish defense company, is actually using radar aircraft where you need, both for command and for control, a tremendous amount of AI diffusion to really be able to operate.

We see, for example, the Gripen aircraft, which is actually divided in its software layer between the parts that control the mission-critical functions and the parts that control the systems of the aircraft. In 2025, we actually applied an AI agent to the mission-critical control and flew the Gripen aircraft with the AI agent in full control. So what is actually happening here is that, on the one hand, you see these great abilities for AI to support companies, and not only companies but also governmental and other services across society, in becoming much more efficient. And on the other hand, you see tremendous product development going on at a very, very high speed.

At the bottom line, I believe that we will see many more of these examples coming through. And when I see Mr. Ekudden here, who is the chief technology officer of Ericsson, I also remind myself that our future networks for 5G and 6G telecommunication will actually be, to a large extent, AI-driven and AI-focused. And this is, for societies, an extremely important point: all these huge amounts of data that will flow through societies, through the mobile networks of the future, will be completely supported by AI.

Related Resources: Knowledge base sources related to the discussion topics (15)
Factual Notes: Claims verified against the Diplo knowledge base (2)
Confirmed (high)

“Sweden launched the WASP programme between 2015 and 2017; a decade later it continues to fund basic AI research and now graduates roughly one PhD per week.”

The knowledge base confirms that the Wallenberg AI, Autonomous Systems and Software Program (WASP) exists as a Swedish AI research institution and that it graduates one PhD per week [S4] and [S53].

Additional Context (medium)

“Sweden launched the WASP programme between 2015 and 2017; a decade later it continues to fund basic AI research and now graduates roughly one PhD per week.”

WASP is funded by the Knut and Alice Wallenberg Foundation and collaborates with Sweden’s five leading ICT universities, providing additional detail on its structure and support [S53].

External Sources (59)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Keynote by Marcus Wallenberg Chairman SEB & Saab — Marcus Wallenberg delivered a strategically focused presentation on artificial intelligence development, emphasising the…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — 2. Industrial depth: expertise in scaling complex industrial systems. 2. Infrastructure capacity: having sovereign compute…
S6
Welcome Address — “strong IT background, dynamic startup ecosystem, make India a natural hub for affordable, scalable, and secure AI solut…
S7
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-marcus-wallenberg-chairman-seb-saab — What we’ve seen since then is this widespread Chinese export of very cheap products into the world market, which is, of …
S8
Sticking with Start-ups / DAVOS 2025 — Ryder discusses how her company, Maven, is focusing on integrating AI into their operations. She emphasizes that this is…
S9
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Continuous learning is necessary in order to adapt to the rapidly evolving technological landscape. The half-life of ski…
S10
Artificial intelligence (AI) – UN Security Council — The discussion highlighted that open-source models enable a wide range of entities, from startups to larger corporations…
S11
Building fair markets in the algorithmic age (The Dialogue) — Despite the challenges, digital markets have brought about positive changes, especially in traditionally dominated marke…
S12
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S13
Breakthroughs in human-centric bioscience with AI — This landmark achievement shows how powerful, responsible AI research can address urgent human health needs, moving beyo…
S14
The rise of tech giants in healthcare: How AI is reshaping life sciences — The intersection of technology and healthcare is rapidly evolving, fuelled by advancements in AI and driven by major tech…
S15
Legal Notice: — Segregation of networks for safety and mission-critical functions remains one of the basic physical means …
S16
Heathrow explores AI to ease air traffic congestion — Heathrow Airport, one of the world’s busiest, is trialling an advanced AI system named ‘Amy’ to assist air traffic contr…
S17
Keynotes — Oleksandr Bornyakov: Dear ladies and gentlemen, I’m honored to represent Ukraine today here in Strasbourg in the heart o…
S18
Trusted Connections_ Ethical AI in Telecom & 6G Networks — And not only that, but truly well performing networks. That is a fundamental platform to drive innovation on and to driv…
S19
Designing Indias Digital Future AI at the Core 6G at the Edge — I think fairly good question honestly speaking and if you look at AI and 6G are two parallel things. They are going to m…
S20
Future Network System as Open Platform in Beyond 5G/6G Era | IGF 2023 Day 0 Event #201 — A shift towards more energy-efficient and virtualised networks is crucial for future advancements. The recent deployment…
S21
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Naveen GV: out a long, lengthy form of information for that to be processed much later by another human…
S22
Embracing the future of e-commerce and AI now (WEF) — The potential benefits include improvements in productivity, speed, and customer satisfaction. However, successful AI im…
S23
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Artificial Intelligence (AI) carries the potential to revolutionise various sectors worldwide, due to its capacities for…
S24
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Economic | Infrastructure European Competitive Advantages and Success Stories Klein argues that Europe shouldn’t try t…
S25
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — But the second aspect of competition is really diffusion or adoption. As each country and the companies from each countr…
S26
What policy levers can bridge the AI divide? — **Zimbabwe’s National Strategy**: Minister Mavetera outlined Zimbabwe’s approach, mentioning what appears to be a framew…
S27
The role of standards in shaping an AI-driven future — Onoe outlined ITU’s engagement through its AI for Good initiative and partnerships with UN agencies and other standards …
S28
How to make AI governance fit for purpose? — Economic | Development The Trump administration believes AI will bring countless revolutionary applications across mult…
S29
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S30
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Given this panorama, we countries of the global south must prioritise strategies and regulations for ethical and responsible use…
S31
The Global Power Shift India’s Rise in AI & Semiconductors — So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resources…
S32
The Innovation Beneath AI: The US-India Partnership powering the AI Era — It’s certainly from a quality perspective, SQA and back -end development, I think entrepreneurs have been able to levera…
S33
Keynote by Marcus Wallenberg Chairman SEB & Saab — This comment is insightful because it identifies complementary strengths between two nations’ AI approaches – Sweden’s r…
S34
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — Sweden’s partnership with India is presented as combining India’s scale and speed with Sweden’s precision and trust. Bus…
S35
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — All speakers agree that the U.S.-India partnership represents a natural, mutually beneficial collaboration based on comp…
S36
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Economic | Infrastructure European Competitive Advantages and Success Stories Klein argues that Europe shouldn’t try t…
S37
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — But the second aspect of competition is really diffusion or adoption. As each country and the companies from each countr…
S38
Sticking with Start-ups / DAVOS 2025 — Ryder discusses how her company, Maven, is focusing on integrating AI into their operations. She emphasizes that this is…
S39
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Continuous learning is necessary in order to adapt to the rapidly evolving technological landscape. The half-life of ski…
S40
The role of standards in shaping an AI-driven future — Onoe outlined ITU’s engagement through its AI for Good initiative and partnerships with UN agencies and other standards …
S41
EuCNC & 6G Summit — The event focuses on telecommunications ranging from 5G deployment and mobile IoT to 6G exploration and future communica…
S42
5G traffic surges under growing AI usage — AI-driven applications are reshaping mobile data norms, and5G networks are feeling the pressure. Analysts warn that upli…
S43
Harnessing AI for Child Protection | IGF 2023 — Artificial Intelligence is giving a lot of opportunities in various fields such as education, law, etc.
S44
How to make AI governance fit for purpose? — – Anne Bouverot- Shan Zhongde- Chuen Hong Lew- Gabriela Ramos Economic and Social Impact Economic | Development The T…
S45
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Given this panorama, we countries of the global south must prioritise strategies and regulations for ethical and responsible use…
S46
The Global Power Shift India’s Rise in AI & Semiconductors — So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resources…
S47
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And that’s clearly something we try to do. And, of course, in addition, we need absolutely to have computer facility at …
S48
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And Prime Minister, we believe that nations should always build the strongest intelligence infrastructure and cross -bor…
S49
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — If a machine is given the sole goal of making paper clips, it will consume all the world’s resources for that one task …
S50
Resilient and Responsible AI | IGF 2023 Town Hall #105 — Audience:Thank you, madam. My name is Katia Sarajeva. I come from Spider at Stockholm University. I would slightly disag…
S51
Contents — 1 There is no one single, clear-cut or generally accepted definition of artificial intelligence, but many definitions. I…
S52
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — And today training is the thing that takes most of the cost. when it comes to training. Now, when it comes to our own ap…
S53
National Strategy for Artificial Intelligence — Wallenberg AI, Autonomous Systems and Software Program (WASP) is a Swedish research institution funded by the Knut and A…
S54
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ebba-busch-deputy-prime-minister-sweden — It is long -term and built on trust. India is not only the world’s largest democracy, it is also the world’s youngest de…
S55
Keynote by Uday Shankar Vice Chairman_JioStar India — Despite this remarkable domestic success, Shankar identified a critical paradox: India has not yet broken through as a g…
S56
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-reimagining-indian-education-system — India is number one economy, not third or fourth. So that’s the mindset. Because I have to reach to my potential. And I …
S57
Driving Social Good with AI_ Evaluation and Open Source at Scale — Can I, so I just wanted to add something to what you were saying. This is, you know, some of the organizations that we’v…
S58
Open Forum #33 Building an International AI Cooperation Ecosystem — Wushu Yan: Good afternoon, ladies and gentlemen. It is my great pleasure to attend this forum. The theme of my speech is…
S59
AI Transformation in Practice_ Insights from India’s Consulting Leaders — And a lot of our solutions that we’re doing here. probably going elsewhere as well. So clearly huge potential, huge oppo…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
10 arguments · 143 words per minute · 1534 words · 640 seconds
Argument 1
Sweden’s long‑term R&D focus builds deep AI research capacity
EXPLANATION
Sweden has invested heavily in a national AI research programme since 2015‑2017, creating dedicated funding, research arenas and an academic pipeline. This long‑term commitment has produced a steady output of highly qualified researchers.
EVIDENCE
The speaker describes Sweden’s ten-year AI research effort launched in 2015-2017, funded with a major push to meet automation and autonomy needs of industry, and the establishment of a school that now graduates one PhD per week [8-10].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Marcus Wallenberg describes Sweden’s WASP programme, its long-term research focus and the output of one PhD per week, illustrating deep AI capacity [S4].
MAJOR DISCUSSION POINT
Swedish AI research infrastructure
Argument 2
India’s strength lies in applied software engineering and global IT services
EXPLANATION
India has focused on building a large, applied software engineering base rather than pure R&D, leveraging its IT services sector to serve customers worldwide. This creates a complementary skill set to Sweden’s research orientation.
EVIDENCE
The speaker notes that India has not primarily pursued R&D but has developed a massive knowledge and customer base through its IT services companies, providing applied software expertise globally [13-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Wallenberg contrasts Sweden’s research with India’s applied software engineering base, and the welcome address highlights India’s strong IT background and startup ecosystem as a hub for scalable AI solutions [S4][S6].
MAJOR DISCUSSION POINT
Indian software and services capability
Argument 3
Joint work can combine Swedish research with Indian application expertise for mutual benefit
EXPLANATION
By linking Sweden’s deep AI research capacity with India’s applied software and service ecosystem, both countries can accelerate AI deployment and create new market opportunities. The speaker sees this partnership as a strategic advantage.
EVIDENCE
The speaker mentions that members of the Swedish delegation see potential for closer collaboration, combining Swedish research with Indian application expertise, and that India’s momentum can help develop AI initiatives for customers [16-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Wallenberg argues that Sweden’s research strengths and India’s application capabilities create synergistic collaboration opportunities, echoed by the deputy prime minister’s note on information partnerships between the two countries [S4][S5].
MAJOR DISCUSSION POINT
Sweden‑India AI collaboration
Argument 4
Chinese cheap‑price exports threaten European and Swedish manufacturers
EXPLANATION
The influx of low‑cost Chinese products into global markets creates a serious competitive pressure for European, especially Swedish, manufacturers. This challenge drives the need for new strategies.
EVIDENCE
The speaker references the post-April 2 tariffs situation and the subsequent widespread Chinese export of very cheap products, describing it as a big challenge for companies in Europe and Sweden [26-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Wallenberg points out the challenge posed by widespread cheap Chinese products for European and Swedish firms, a view reiterated in a separate comment on the same issue [S4][S7].
MAJOR DISCUSSION POINT
Competitive pressure from China
Argument 5
Diffusing AI across large companies is essential to stay competitive and innovate business models
EXPLANATION
Integrating AI throughout large enterprises is presented as a key lever for maintaining competitiveness and enabling new business models, beyond mere cost reductions. AI diffusion is portrayed as indispensable for future success.
EVIDENCE
The speaker states that while AI is not everything, its diffusion into large companies is essential for smart competition and innovation, emphasizing its role in future competitiveness [31-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Wallenberg stresses AI diffusion into large enterprises as key for competitiveness, and Davos speaker Ryder emphasizes that integrating AI is now necessary for companies to remain competitive [S4][S8].
MAJOR DISCUSSION POINT
AI diffusion for competitiveness
Argument 6
AI can enable cost efficiency while opening new services and products, strengthening market position
EXPLANATION
AI is described as a dual driver: it can improve cost efficiency and simultaneously open avenues for new services and products, thereby enhancing market positioning. This expands the strategic value of AI beyond savings.
EVIDENCE
The speaker explains that AI offers huge possibilities for companies to move into new business areas, providing both cost efficiency and new services/products [35-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ryder notes AI delivers both cost efficiency and new service/product opportunities, while a WEF discussion highlights AI’s transformative impact on productivity and market positioning [S8][S22].
MAJOR DISCUSSION POINT
AI as a growth and efficiency engine
Argument 7
Life‑science and pharma: AI accelerates molecule discovery and enables personalized medicine
EXPLANATION
In the life‑science sector, AI can dramatically speed up the discovery of new molecules and support personalized medicine, leading to treatments for patients currently underserved. This is highlighted as a high‑impact application of AI.
EVIDENCE
The speaker, as a board member of AstraZeneca, points out that AI will be most valuable in life sciences, accelerating molecule discovery and enabling personalized treatments based on test results [40-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Breakthroughs in human-centric bioscience with AI and analyses of tech giants entering healthcare illustrate AI’s role in speeding drug discovery and enabling personalized treatments [S13][S14].
MAJOR DISCUSSION POINT
AI in health and pharma
Argument 8
Defense: AI enhances data analysis, radar/aircraft control, and even autonomous mission‑critical flight
EXPLANATION
AI is portrayed as a critical technology for defense, improving large‑scale data analysis, radar systems, and enabling autonomous control of aircraft, exemplified by a test flight of a Gripen aircraft under AI control in 2025.
EVIDENCE
The speaker cites examples such as Saab’s radar aircraft using AI, and notes that in 2025 an AI agent was applied to mission-critical control, flying a Gripen aircraft fully autonomously [46-63].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Wallenberg cites AI applications in radar, aircraft control and an autonomous Gripen flight, and Heathrow’s AI system for air-traffic management provides another defence-related AI example [S4][S16].
MAJOR DISCUSSION POINT
AI in defense applications
Argument 9
Telecommunications: Future 5G/6G networks will be AI‑driven, handling massive data flows
EXPLANATION
Future telecommunications networks (5G/6G) will rely heavily on AI to manage and process the enormous volumes of data they will carry, making AI a foundational component of network operation.
EVIDENCE
The speaker references the chief technical officer of Ericsson, stating that upcoming 5G and 6G networks will be largely AI-driven and that AI will support the massive data traffic through societies [66-67].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ‘Trusted Connections’ keynote describes AI as foundational for 6G networks, and the India-focused keynote highlights the merging of AI traffic with 5G/6G, confirming AI-driven telecom futures [S18][S19].
MAJOR DISCUSSION POINT
AI‑enabled next‑gen networks
Argument 10
Robotics and broader industrial applications benefit from AI‑enabled efficiency and product development speed
EXPLANATION
AI contributes to higher efficiency in robotics and industrial processes, accelerating product development cycles and enabling rapid innovation across sectors. This broad benefit underpins industrial competitiveness.
EVIDENCE
The speaker notes that AI supports companies in becoming more efficient and speeds up product development, leading to many forthcoming examples of AI-driven innovation [64-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ‘AI for Safer Workplaces & Smarter Industries’ discussion showcases AI-enabled efficiency, safety and faster product development in robotics and industrial settings [S21].
MAJOR DISCUSSION POINT
AI in robotics and industry
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Overall Assessment

Speaker 1 presented a series of arguments linking Sweden’s AI research capacity, India’s software services, and various sectoral applications of AI, but no other speakers are present in the transcript, so explicit inter‑speaker agreement cannot be identified. The speaker’s points are internally consistent, covering research infrastructure, industry collaboration, competitive pressures, and sector‑specific AI benefits.

Minimal consensus – only a single perspective is represented, limiting the ability to gauge broader agreement on the topics.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only a single speaker (Speaker 1) presenting a series of arguments without any opposing viewpoints. Consequently, there are no identifiable points of disagreement, partial agreement, or unexpected disagreement among speakers.

Minimal to none. The lack of multiple participants means the discussion is unified around the speaker’s perspective, implying smooth consensus on the topics addressed.

Takeaways
Key takeaways
Sweden and India have complementary AI strengths: Sweden’s long-term R&D focus builds deep research capacity, while India excels in applied software engineering and global IT services.
Combining Swedish research expertise with Indian application and service capabilities can create mutually beneficial AI collaborations.
AI diffusion across large industrial firms is critical to maintaining competitiveness against low-cost Chinese exports, enabling both cost efficiency and new business models.
Sector-specific AI opportunities highlighted include life sciences/pharma (accelerated molecule discovery, personalized medicine), defense (advanced data analysis, autonomous aircraft control), telecommunications (AI-driven 5G/6G networks), and robotics/industrial applications (enhanced efficiency and rapid product development).
A positive attitude toward AI and digitization in India can help drive these initiatives forward.
Resolutions and action items
None identified
Unresolved issues
None identified
Suggested compromises
None identified
Thought Provoking Comments
Sweden has taken a research‑first approach, funding a national AI program (WASP) since 2015‑2017 and now graduating a PhD per week, whereas India has built its AI strength through applied software engineering and a massive IT services customer base.
This contrast highlights two fundamentally different national strategies for AI development, suggesting that each country possesses complementary assets rather than competing on the same footing.
It set the stage for the central theme of the talk – a potential Sweden‑India partnership – and prompted listeners to think about how basic research can be paired with applied industry expertise, opening a new line of discussion about cross‑border collaboration.
Speaker: Speaker 1
Because Sweden’s industry is dominated by multinational engineering firms and India’s industrial structure is different, the real ‘big take’ is that the IT services knowledge from India can be layered on top of Swedish engineering to create new AI‑enabled business models.
The comment reframes the collaboration not as a simple technology transfer but as a strategic integration of value chains, challenging any assumption that AI adoption is a one‑size‑fits‑all solution.
It shifted the conversation from abstract policy to concrete business opportunities, prompting the audience to consider specific sectors where AI could reshape product and service offerings.
Speaker: Speaker 1
After the tariffs were imposed on April 2nd, we have seen a flood of cheap Chinese exports, which is a major challenge for European and Swedish companies; AI diffusion into large firms will be key to staying competitive.
This introduces an immediate geopolitical and economic pressure point, linking AI adoption directly to market survival against low‑cost competition.
The tone moved from collaborative optimism to urgency, steering the discussion toward the strategic necessity of AI for competitiveness and prompting listeners to weigh AI as a defensive as well as an innovative tool.
Speaker: Speaker 1
The most worthwhile AI applications will likely be in life sciences – accelerating molecule discovery, enabling personalized medicine, and delivering services that are currently impossible.
By spotlighting drug discovery and personalized treatment, the speaker brings a high‑impact, socially relevant use case to the fore, expanding the conversation beyond industrial efficiency to human health.
It opened a new thematic branch of the dialogue, encouraging participants to think about AI’s societal benefits and potentially attracting interest from pharmaceutical and healthcare stakeholders.
Speaker: Speaker 1
In 2025 we actually applied an AI agent into the mission‑critical control of a Gripen aircraft and flew it with the AI in full control.
This bold claim about AI handling mission‑critical defense systems challenges conventional safety concerns and illustrates a frontier application of AI that few had considered.
It created a dramatic turning point, shifting the discussion toward ethical, security, and regulatory implications of AI in defense, and likely sparked curiosity and caution among the audience.
Speaker: Speaker 1
Future 5G and 6G telecommunications networks will be largely AI‑driven, meaning the massive data flowing through societies will be processed and managed by AI.
Linking AI to the backbone of future communications infrastructure underscores its pervasive role and raises questions about data governance, privacy, and societal impact.
This comment broadened the scope of the conversation to include infrastructure and public policy, prompting participants to consider long‑term societal transformations rather than isolated industry use cases.
Speaker: Speaker 1
Overall Assessment

Speaker 1’s remarks acted as a series of pivots that progressively deepened the discussion. Starting with a comparative analysis of Swedish and Indian AI strategies, the speaker introduced a collaborative vision that was then reframed by geopolitical competition with China, followed by high‑impact application domains in health, defense, and telecommunications. Each turning point not only introduced a fresh topic but also altered the tone—from optimistic partnership to strategic urgency, to visionary possibilities—thereby steering the audience toward a multidimensional view of AI’s role in industry, security, and society.

Follow-up Questions
How can Sweden and India collaborate more closely on AI research and applied software engineering initiatives?
The speaker highlights complementary strengths—Swedish basic AI research and Indian IT services—and suggests a partnership could accelerate AI development for both countries.
Speaker: Speaker 1
What AI‑driven strategies can European and Swedish companies adopt to remain competitive against low‑cost Chinese exports?
The speaker notes the challenge posed by cheap Chinese products and implies the need to explore AI‑enabled business models, cost efficiencies, and new services to maintain market share.
Speaker: Speaker 1
How can AI be used to speed up drug discovery, generate new molecules, and enable personalized medicine in the life‑sciences sector?
He identifies life sciences as a high‑impact area for AI, indicating a research gap in applying AI to accelerate molecular design and tailor treatments.
Speaker: Speaker 1
What are the technical, safety, and ethical considerations for deploying AI agents in mission‑critical defense systems, such as autonomous control of aircraft like the Gripen?
The speaker mentions an AI‑controlled flight test, raising the need for deeper investigation into AI integration within defense hardware and software layers.
Speaker: Speaker 1
In what ways will AI shape the architecture, management, and security of future 5G and 6G telecommunications networks?
He points out that upcoming networks will be AI‑driven, suggesting research into AI’s role in handling massive data flows and network optimization.
Speaker: Speaker 1
What models of AI diffusion are most effective for large industrial firms to transform business models, improve efficiency, and create new services?
The speaker stresses the importance of AI diffusion into large companies, indicating a need to study best‑practice frameworks for AI adoption at scale.
Speaker: Speaker 1
How can Indian IT services companies add AI layers on top of existing technology stacks to benefit Swedish multinational engineering firms?
He proposes leveraging Indian software expertise to enhance Swedish engineering products, implying research into integration approaches and joint development pipelines.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Responsible AI for Children: Safe, Playful and Empowering Learning


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel convened to examine how AI literacy can be built into children’s education and why it is essential for their future participation in an AI-driven world [5][152]. Tom Hall argued that AI should be taught as a technology rather than a “magic box,” emphasizing that children need to understand underlying concepts such as probability, data sensing and algorithmic bias instead of merely consuming AI outputs [16-21][23-26]. He warned that many young learners treat generative AI as a shortcut, which risks passive consumption and undermines critical thinking, so curricula must move beyond excitement to mastery of fundamentals [15][18]. Atish Joshua Gonsalves described LEGO Education’s new computer-science and AI product, which is built on four values (child agency, safety, transparency and hands-on collaborative learning) and is designed to run AI features locally to protect privacy [32-34][306-308]. The demo showed students customizing a pre-trained image classifier to control a robot’s movements, teaching them that AI predictions are probabilistic, improve with more data, and can contain bias [46-50][48-51]. Richa Menke highlighted that AI can enrich play by inspiring imagination, but cautioned that over-reliance on efficiency or personalization may erode children’s creative struggle and long-term agency [97-104][130-138][146-151]. She noted that generative AI’s “hallucinations” might be playful features in games, yet the technology is not ready for childhood without deliberate reflection on its impact [124-127][115-116]. Saadhna Panday of UNICEF India stressed that AI’s benefits are unevenly distributed, citing the contrast between urban Delhi and rural Jharkhand, and called for evidence-based, equitable solutions that keep teachers and children at the centre [162-165][170-176]. 
She also pointed out the need for multilingual, low-resource tools and for safeguarding children’s privacy, trust and participation in AI-enhanced learning environments [205-212][306-308]. The panel reached consensus that empowering teachers with clear policies, scaffolding resources and a “5E” instructional model is crucial for scaling AI literacy responsibly [214-218][369-372]. Participants agreed that hands-on, collaborative activities, such as LEGO’s design challenges and the FIRST LEGO League, provide the “magic” of creation while reinforcing technical concepts [263-276][376-378]. Finally, the discussion concluded that AI literacy must be treated as a core modern literacy, integrated with safety, equity and agency, so that children become designers of future AI rather than merely its users [26-27][386-395].


Keypoints


Major discussion points


AI literacy must go beyond “magic-box” usage and teach foundational concepts.


Participants stressed that children should understand how AI works, not just treat it as a mysterious tool. Tom Hall highlighted the need to move from “magic” to a “screwdriver” that lets kids see under the hood, defining AI literacy as “understanding today’s technology… and the fundamentals” [19-24][31-34]; Atish echoed this in LEGO Education’s product design; early remarks from Speaker 1 framed AI as an unavoidable, essential skill [3][5].


Hands-on, collaborative play is the preferred vehicle for teaching AI.


LEGO representatives described a learning model that combines physical building with coding to give children agency while keeping safety front and center. Atish detailed the classroom demo, the “AI Dancer,” and the emphasis on active creation, and outlined LEGO’s four guiding values: child agency, safety, hands-on immersion, and foundational knowledge [31-34][36-41][46-51][94-112][306-309]; Tom Hall linked tactile learning to stronger brain engagement and deeper mastery [263-280].


Equity and contextual relevance are critical for scaling AI education.


Saadhna highlighted the stark contrast between urban Delhi and rural Jharkhand, urging solutions that work in multilingual, low-resource settings [152-164][210-214][251-259][381-385]; Atish added that “frugal AI” and age-appropriate, screen-free approaches can bridge gaps in underserved environments [238-250].


Safety, privacy, and ethical safeguards are non-negotiable.


Across the panel, participants agreed that any AI interaction with children must meet high safety standards. Atish listed LEGO’s safety rules (no anthropomorphising, local data processing) [31-34]; Richa reiterated that privacy and safety are foundational and that current LEGO products do not embed AI for this reason [306-309]; Tom Hall warned against “shotgun” adoption without rigorous safety research [344-363]; Saadhna asked how to balance joy with risk [300-304].


Teachers and parents need concrete resources and capacity-building.


The discussion repeatedly called for tools, training, and support structures for educators and families. Atish noted the need to empower teachers before dropping new standards [81-88]; Tom Hall suggested a facilitated AI-policy conversation template for classrooms [214-236]; audience questions from Nikhil and Asha asked for parent-focused curricula and affordable teacher training [323-328][332-340].


Overall purpose / goal


The panel aimed to define a responsible, inclusive roadmap for AI literacy in K-12 education, showcasing how hands-on, play-based learning can demystify AI, while simultaneously addressing safety, equity, and the need for teacher and parent support to ensure all children can become informed creators rather than passive consumers of AI.


Overall tone


The conversation began with an upbeat, visionary tone, celebrating children’s curiosity and the potential of AI-enhanced play. As the dialogue progressed, the tone shifted to a more cautious, reflective stance, emphasizing ethical safeguards, equity challenges, and the urgency of building teacher capacity. Throughout, the tone remained collaborative and solution-oriented, moving from optimism to a balanced mix of hope and responsibility.


Speakers

Saadhna Panday


Area of expertise: AI literacy, education policy, child protection


Role / Title: Chief of Education, UNICEF India; Panel moderator


Asha Nanavati


Area of expertise: Education leadership, AI adoption in schools


Role / Title: Representative, Alliance Educational Foundation (runs a charitable K-12 school in Kerala) [S4]


Tom Hall


Area of expertise: AI literacy, hands-on learning, educational technology


Role / Title: Vice President and General Manager, LEGO Education [S5]


Nikhil Bawa


Area of expertise: AI and education commentary, parent resources


Role / Title: Writer/Researcher on AI and education (independent) [S7]


Richa Menke


Area of expertise: Interactive play, AI-enabled learning products, safety & privacy


Role / Title: Head of Interactive Play, LEGO Group [S10]


Speaker 4


Area of expertise: (not specified)


Role / Title: (not specified; appears as an audience participant or brief interjector) [S11]


Atish Joshua Gonsalves


Area of expertise: AI-driven educational product design, hands-on classroom implementation


Role / Title: Product lead / presenter for LEGO Education AI & Data curriculum (inferred from presentation) [S14]


Speaker 1


Area of expertise: (not specified; appears to be a student voice)


Role / Title: Student participant / youth representative in the discussion [S16]


Additional speakers:


Steve – referenced by Richa Menke (“Thanks, Steve.”); role/title not provided in the transcript.


Full session report: comprehensive analysis and detailed insights

Opening framing – Speaker 1 opens the session by likening artificial intelligence (AI) to taxes, arguing that AI is now unavoidable and that children must be equipped to engage with it or risk being left behind [2-6][3][5]. He stresses the need for AI literacy and for young people to have a voice in AI policy because “AI literacy is really important” [2-6][1-6].


Tom Hall – why AI must be taught as technology – Hall expands the opening premise, warning that treating AI as a “magic-box” creates a passive-consumer mindset. He uses a screwdriver metaphor to argue that children should be able to open the box and understand foundational concepts such as probability, data sensing, algorithmic bias and the probabilistic nature of AI predictions [16-27][15][18-21][19-24][25-27]. Hall calls for AI literacy to become a modern core literacy alongside maths and reading, not an elective [26-27]. He also cites the 2014 UK CS GCSE rollout failure, attributing it to a lack of trained teachers and an outdated curriculum [300-312].


Atish Gonsalves – LEGO Education product overview – Atish introduces LEGO Education’s new computer-science and AI offering, grounding it in four guiding values: child agency, safety & well-being, transparency, and hands-on collaborative learning [32-34][94-112]. He outlines concrete safety rules – no anthropomorphising of AI, on-device processing so data never leaves the device, clear data provenance for all models, and universal design to support neuro-diverse learners [31-34][46-51]. The live demo of the “AI Dancer” shows pupils training a pre-trained image classifier with their own pose data, observing how confidence scores shift as they move [46-51]; the demo illustrates that AI is probabilistic, improves with more training data, and can be biased when the training set is not representative [48-50]. Atish also references the 5E instructional model (engage, explore, explain, elaborate, evaluate) used in the LEGO Education Teacher Portal [32-38][369-372] and mentions the First Lego League as an example of open-ended, collaborative AI projects [340-350].
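The demo’s core mechanic (a classifier emitting a confidence score per pose label, with an event firing only when a score is high enough) can be illustrated with a minimal sketch. This is not LEGO’s actual code; the labels, the 0.7 threshold, and the `trigger_event` helper are all invented for illustration:

```python
# Hypothetical sketch of confidence-gated events, as in the "AI Dancer" demo.
# Labels, threshold, and helper name are invented; real products differ.

THRESHOLD = 0.7  # assumed cutoff for "confident enough"

def trigger_event(scores, threshold=THRESHOLD):
    """Return the pose label to act on, or None if no label is confident enough.

    scores maps pose labels to confidence values in [0, 1], roughly what an
    image classifier outputs per frame.
    """
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    return label if confidence >= threshold else None

# Three toy "camera frames" of classifier output:
frames = [
    {"left_hand_up": 0.85, "right_hand_up": 0.10, "both_hands_up": 0.05},
    {"left_hand_up": 0.40, "right_hand_up": 0.35, "both_hands_up": 0.25},
    {"left_hand_up": 0.05, "right_hand_up": 0.06, "both_hands_up": 0.89},
]

for scores in frames:
    print(trigger_event(scores))  # the ambiguous middle frame fires no event
```

Unlike the on/off logic of earlier lessons, the middle frame shows the probabilistic case the demo highlights: no pose is confident enough, so nothing fires, and more representative training data is what pushes scores toward decisive values.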


Lesson “Strike a Pose” – Speaker 1 describes the “Strike a Pose” activity, which combines LEGO bricks, the Coding Canvas, and a custom classifier. Students build a robot, collect pose data, train a classifier, and present their work, thereby moving from users to designers of AI [45-58][40-42][55-73]. The lesson follows the 5E structure and reinforces the three core lessons highlighted in the demo [48-50].


Atish – teacher support & frugal AI – Atish emphasizes the LEGO Education Teacher Portal, which provides curriculum, lesson plans and scaffolding aligned with the 5E model [32-38][369-372]. He promotes “frugal AI” approaches that teach computational concepts such as loops and probability using bricks alone, without screens or heavy hardware [238-250][243-247].


Richa Menke – SmartPlay & SmartBricks – Richa presents the SmartPlay platform, a screen-free, sensor-driven play system that responds with sounds and motions but currently does not employ generative AI for safety reasons [100-108]. She outlines three tensions that must be balanced when introducing AI to children: efficiency vs. imagination, personalization vs. identity, and assistance vs. agency [120-138]. Richa also reiterates the non-negotiable safety and privacy safeguards, echoing Atish’s design rules [306-314].


Saadhna Panday (UNICEF India) – equity & evidence – Saadhna highlights the stark contrast between AI-enabled education in urban Delhi and the near-absence of such tools for a tribal girl in rural Jharkhand, warning that AI could exacerbate existing inequalities if deployed irresponsibly [152-176][170-176]. She cites AI-driven early detection of pancreatic cancer as a motivating example of AI’s societal impact [160-168]. Saadhna calls for multilingual, low-cost, evidence-based solutions that keep teachers and children at the centre of design [205-212][251-259].


Panel Q&A


* Tom Hall reiterates that LEGO provides a template for classroom AI-policy discussions, encouraging a “pause-and-discuss” approach where teachers and children jointly shape AI policies before tools are introduced [214-236][229-236].


* Atish stresses the importance of frugal, age-appropriate tools and the teacher portal for scaling AI literacy [238-250].


* Richa reinforces the safety-first stance, noting that none of LEGO’s current products use generative AI until safety standards are met [306-314].


* Nikhil Bawa asks for parent-focused resources and guidance on supporting unstructured play [380-388].


* Asha Nanavati queries affordable teacher-training models for charitable schools in India [390-398].


Closing remarks – Saadhna thanks the participants and restates that the responsibility to protect children while delivering equitable, evidence-based AI education rests on all stakeholders. She calls for rapid yet safe empowerment of teachers, learners and parents, and reaffirms the shared commitment to treat AI literacy as a core modern literacy embedded in hands-on, play-based pedagogy, upheld by the highest safety and privacy standards, and accessible to every child regardless of geography or resources [386-395].


Across the session the panel reaches consensus that AI literacy is essential for future participation, must focus on fundamentals rather than black-box perception, benefits from tactile collaborative learning, requires non-negotiable safety, privacy and fairness, depends on teacher empowerment and resource provision, and must be delivered through equitable, localized and frugal approaches to avoid widening the AI divide [1-6][16-23][24-27][31-34][381-385][152-164][238-250].


Session transcript: complete transcript of the session
Speaker 1

curious how it works and I think that a lot of kids are. I would love to learn how it can be used in everyday life and how it can be used as an accurate source of information. AI is like taxes, it’s unavoidable and if you don’t learn to evolve with it you’re gonna be left behind. I definitely want to be a part of solving big problems. We need to have a say in AI policies because AI literacy is really important. Thanks for finally asking us what we think. Bye.

Tom Hall

He breaks me every time. These were children that we brought into a school in California in December. No actors in there, just a lot of children with opinions, and the little boy at the end, he just had a lot to say. He is very wise. But those were the views of just some smart, inspiring young people. They’re not just eager to use AI; I think you can see they’re especially eager to understand and to build things with it. And just as you saw, they have some really clear ideas about how it should and shouldn’t be used in today’s classrooms. But of course, you know, excitement and confidence are not the same as mastery or comprehension.

We do see an unfortunate trend where children do not understand the fundamentals of the systems they’re interacting with. And I think you can particularly see that in younger children, who often see generative AI systems as a kind of magic box that they can… into, where, you know, you type in a text or a question and then out come images and videos and entertaining things, and maybe even the answer to a history essay question. I think we need to be really clear that AI is not magic. It’s not a magic toolbox; it’s a technology system. And foundational AI literacy isn’t about teaching children how to use this magic box. I think, far more importantly, it’s: how do we give the child the screwdriver to take that box apart and really understand what’s going on under the cover? So while supporting children to use AI tools safely, ethically and effectively today is important, I think far more it’s about equipping them with the knowledge and the tools, the confidence to build what is yet to come. So therefore our definition of AI literacy, when we talk about it: it’s about understanding today’s technology, yes, but it’s far more about understanding the fundamental concepts so that you are armed and ready for what is yet to be designed, and actually so that you can be the designer of what is to come.

So I think that we have underestimated the role we have to play in preparing children today. We don’t want them to be passive consumers of AI. Instead, we really believe that we should be arming them with the tools, the literacies that are required to lead, to design, to create. And our goal is not about sort of robot-proofing our children for what’s coming at them, but just making sure that they are ready to build a better future and they’ve got the tools in their hands. So let’s talk about AI literacy as understanding the foundations of AI: the foundation of computer science and AI concepts, and that is about understanding the fundamental concepts of AI.

understanding probability, how computers sort of sense the world as data points through data sensors, algorithmic bias, and all of the nuances of that. We don’t want that to be an elective or selective choice for just the few. We believe that these concepts have to be elevated to the status of modern literacy alongside maths and reading, problem solving, creativity and collaboration. And I think it’s best if we show you how we plan to do this in classrooms. So I’m going to hand over to Atish, and we’re going to run a live demo, which is always fun at a conference event.

Atish Joshua Gonsalves

Great, thanks, Tom. And I’m also delighted to introduce AI Dancer, who’s on the table here, who hopefully will do some dancing soon as well. So, yeah, very excited to share. I’m going to share a bit more about how we’ve translated some of these principles that Tom was talking about into the product. I’m really excited to shout about our new computer science and AI product, which is just fresh off the press: we just announced it in January and it will hit schools in April. But all of this, we need to do very responsibly. We saw this kid earlier in the video talk about how AI should be safe, fair, transparent. So this is a very wise kid, right? And we really agree. At LEGO Education we’ve established clear guidelines for how this should work, so let me step you through some of these guidelines. So, AI should be safe. We do not generate any text or any media. We do not anthropomorphize (I got that right this time); it’s just a fancy way of saying we do not make them think that AI is human. We do not want them forming any unhealthy emotional bonds. We ensure that all our digital products are rooted in the principles of universal design; we’re designing for kids who have neurodiversity, for kids who have different learning needs. So it’s really important that our products are designed in a very fair way. Transparent: all the models that we would use should have very clear data provenance, so we should understand where the data that trained those models has come from, and understand whether the models have been trained on different geographies, on different kinds of kids, on different kinds of adults. So ensuring that these models have clear data provenance is super critical for us. And then finally, privacy. I just want to stress that in all our products, AI features run locally on the devices. Nothing ever leaves the device. Nothing ever goes to us at the LEGO Group, nothing goes to third parties, no login is collected. In terms of the training, whether the kids are building their own AI models or they’re using pre-existing models, nothing ever leaves. So safety and student well-being is a red line, is a non-negotiable for us. So everything we know from decades of education research, and the way we use AI, is very important to us, and I think what that research shows us is that kids learn best when they are building, when they’re using their hands and really creating. And we’ve seen this very much at LEGO Education and through years of research. So now more than ever children need to learn, and need to learn together. So much of computer science and AI today is taught with kids sitting in front of the screen with the headphones on, by themselves, learning, and I don’t think we see this as a vision for learning. For us, kids should be building together, coding together, experimenting together, tinkering together and sharing together. So that is really our vision of how kids should be learning computer science and AI, so that when they tackle these new technologies, they also have those cross-cutting skills to deal with this in the real world. So bringing this all together: at LEGO Education we have these four values that govern our approach to AI literacy. We prioritize child agency and engagement to ensure students are active participants in their own learning journeys. We empower students with the foundations of AI that Tom was talking about, that remain relevant as the technology evolves.

We uphold child safety and well-being as non-negotiable for every AI interaction in the class, and we foster hands-on, immersive and collaborative experiences that inspire creativity and shared learning. So those are really the four principles that are driving all of this. So how do we bring this into a classroom? How do we, with our products, make sure it’s hands-on, understandable and safe for kids? I would encourage you also, after the session, to go to the booth, I think it’s in Hall 3, and actually see these products in person, get hands-on with them, try them out yourself. So we’re really helping students to build real AI literacy by demystifying how AI works.

Through these playful features and lessons, learners explore concepts like computer vision, probabilistic thinking, classification and machine learning, while seeing their ideas come to life. The result is student agency: kids not just using AI but actually understanding and building with it. So what better way to show you how kids are using it than for me to try to actually make you use it. So here we have a lesson which is about teaching kids about pre-trained classifiers. This is in the last unit: once they’ve gone through some core principles of computer science, they’ve learned about basics and events and loops and data structures, at the end they are looking at AI and data. And here they’re learning about how you can use a pre-trained classifier, a model that already exists, to bring their AI Dancer to life.

One thing you’ll notice here, when the code is up here, is that the camera that they can use is off by default. So this is all sort of in line with the principles of AI safety: it’s an explicit action the kids are taking. And here when I hit play now… okay, I’ve got… that’s why I have a video. Okay, no worries. Always fun trying to do a live demo; we always have a backup. So yeah, you can see that as I’m lifting my hands up and down, you’re seeing the different probabilities changing here. And what the kids are learning through this is that with traditional computer science you’ve got zeros and ones, things can be on and off; with AI, what they’re learning here is there’s an 80, 70, 90 percent chance that I’ve lifted my left hand up or my right hand up or both hands up, and then that’s triggering the different events. So they’ve learned about events in earlier lessons, and that’s what’s triggering those.

So they learn that AI is not always right. They’re learning that the more data that’s trained into the model, the better it gets. And they also learn from an ethics perspective that if the AI model is not trained with enough kids’ examples, it will have biases in it as well. So these are very core principles of AI, but taught in a very simple and playful way and making the AI dancer come to life. So

Speaker 1

Ready to excite your students with computer science and AI? This lesson is called Strike a Pose. Students will learn how to customize an AI classifier and program AI-activated events. We’ll kick off with a big question to spark curiosity: how could you train a robot to follow your movements? We will explore the topic through the computer science concepts of AI and data. The question is tied to a real-life example, how AI can be trained to recognize images through data. This makes it more relatable to both students and teachers. In groups of four, each student picks a minifigure, which indicates their role in the collaborative building process. The group will build a robot with movable arms and discuss how it might work.

Then it’s time to get hands-on with coding. Groups will open the LEGO Education Coding Canvas, enter the lesson pin, and connect their hardware. Students create and train their own custom AI classifier by posing in front of the camera and capturing pose data. With simple pre-made code and their classifier, groups explore making the robot mimic their arm poses. Group members take turns so everyone gets hands-on: two students develop the build of the robot, while the other two iterate on their code, and later they swap. Students present their robot, talk about their iteration process, and discuss how they created and trained their classifier. At the end of this lesson, students will be able to say: I can create a custom classifier.

I can use pose data to train a custom classifier. I can describe how to create a custom classifier and use data to train it. This is the third of four lessons in the AI and Data unit, where students explore how computers learn from data. In the following lessons, students investigate how data quality and quantity can improve how their AI detects their poses. At the end, they apply what they’ve learned through an open-ended design challenge. All materials for this lesson can be found on the LEGO Education Teacher Portal: lesson plan, ready-to-use classroom presentation and facilitation notes. No extra prep time needed.

Atish Joshua Gonsalves

So you got to see how the AI model is really used, how the AI Dancer is really used in the classroom. And what you saw also in the classroom: kids had meaningful roles in the building process as they were building out the model, but also meaningful roles when they were coding and also training the AI as well. And all of this is for the kids, but none of this can happen without teachers, right? So we cannot simply drop new standards and mandates on educators without support for them. You saw the video briefly reference the teacher portal, where the teachers get all the resources and the support they need to bring computer science and AI to kids.

We know that most teachers who are teaching computer science are actually not computer science teachers themselves. They are teaching math, they’re teaching science, they’re teaching English, and so they need to be prepared to really scale this up as well. So we really see this not as a challenge of access to tools, but of access to confidence. I think there’s a couple of very nice quotes here. And now I just wanted to hand over to Richa. I’m very pleased to hand over to her, who leads product development on the retail side and is behind the super exciting SmartBricks, if you’ve seen those.

Richa Menke

Thanks, Steve. Hi, everyone, good morning. Thank you for having me. So, my name is Richa Menke. I head up interactive play at the LEGO Group. So, we’ve just heard an important call to action in terms of AI literacy. So, preparing children to understand and navigate an AI -powered world. And this matters enormously. But what I’d like to do is spend a few minutes discussing the other side of this question, which is, how do we prepare AI for kids and imagination? And part of the reason we’re here is that we believe our focus on play and imagination not only unlocks exciting new play experiences, it might just be the unlock to a more inclusive and empowering future of AI.

So, childhood, as we know, is formative. It’s not a market opportunity; it’s a developmental window that closes. What enters that window shapes who we become: our sense of confidence, our curiosity, our relationship with struggle and creation. And, very importantly, that shaping can often be invisible. So this is very important to us in what we do in the Creative Play Lab, which is the innovation team at the LEGO Group. What we do is look at how we create more and more relevant play experiences for kids, how we employ new technologies in service of better play for kids, but always keeping in mind our DNA as the LEGO Group: that hands-on, minds-on play experience that we all love.

So eight years ago, our team asked the question: in a world of digital screens, how could we offer kids more interactivity in their LEGO play experiences, but without… screen. And we were really, really committed to this and spent eight years getting there. And we just launched in January the SmartPlay platform, which is a new dimension of LEGO play. What this is, is, you know, as the child is playing with the SmartBricks in their models, the play actually responds with appropriate sounds and behaviors. So imagine you have your Star Wars X-Wing, and the way you move it around: if you fly with it, it’ll swoosh; if you drop it, it’ll make a crash sound.

So it's really responsive to the kid. And all of this without a screen. That was very, very important to us. And also without AI; we just didn't need AI in this solution. But also, we're not entirely sure if AI is ready for childhood. We really believe that childhood deserves deliberation, and that deliberation might be an unlock, as I mentioned, to the future of AI. So first of all, AI holds tremendous potential when you think about play, and about the creative barriers that kids face in play. For example: I'm sitting with my brick bin, I have a ton of bricks, and I don't know where to start. This fear of the blank canvas.

AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help us better understand a child's intent so we could offer better, more relevant, meaningful experiences. And one of my favorite aspects, which I think is super interesting, is that generative AI is probabilistic. In other contexts, like productivity, a hallucination is a bug. But when it comes to play, maybe that hallucination is just a playful feature. So there's huge potential in what AI could bring to offer better play. But of course, as you know, there are many challenges that need to be addressed, and there are three key tensions that we think are really important to address when we think about kids and childhood.

First, there's the tension between efficiency and imagination. If I can get an answer just like that, I don't have to wait, I don't have to struggle, I don't have to develop my imagination. Does that rob kids of the opportunity to really develop their imagination and, more importantly, develop confidence in their own imagination? Second, personalization and identity. A child at seven is not the same as who they're going to be at 17. So if we start personalizing the experience for who they are at seven, are we holding them back? And finally, assistance and agency. Are we raising kids for whom it's very easy to prompt, but who don't have the ability to really persevere through?

These are some of the key tensions that we see. And of course, there are a lot of opportunities, but we feel the responsibility to ensure that these tensions are addressed. So when we develop new play experiences, we ask ourselves: does this increase or decrease the choices that a child has? That's child agency. Does this expand imagination? I'd encourage you to ask yourself these questions as you develop AI solutions. Does it preserve that healthy developmental friction where you have to actually think? And finally, would I want this shaping my child's inner voice? That's a way to really think about what's right.

And I'd love to leave you with this question that we spend a lot of time thinking about: as we look at AI systems today, what exactly are we optimizing for, and how important that choice is. If we optimize AI systems for engagement, what we're going to get is more attention. But what if we optimize for childhood? Then we're going to optimize for potential. Thank you very much.

Saadhna Panday

All right. Good morning, everybody. I'm Saadhna Panday, and I'm the chief of education at UNICEF India. It's a pleasure to moderate today's panel discussion on AI literacy and children. We've heard a lot at the summit about the wonder of tech; it really feels good to talk about the wonder of children and of education. So I want to thank LEGO for creating the space for this discussion. We all know that AI has brought a step change in how we live, work, and play, and there's no doubt that it is impacting children's lives and how they experience education. The problem is not that AI is a tool for education; the problem is that it is transforming education unevenly.

For a child living in urban Delhi, AI has found its way into their education, either through the home or the school. But for a poor tribal girl living in rural Jharkhand, perhaps not so much. Education systems are facing massive learning challenges for which governments are seeking equitable, scalable, and evidence-based solutions. Two to three decades of digital learning have yielded small-scale wins and modest impact on learning. And yet we've seen the massive impact of AI already on health systems, and that gives us tremendous hope. I keep repeating this example because I'm fascinated by it: in the area of radiology, AI has helped diagnose pancreatic cancer 438 days earlier than would normally be expected.

We were previously diagnosing pancreatic cancer at the fourth stage. We can now diagnose it at stage one, with greater accuracy than any human ever can, and without touching a patient. That makes me feel excited. We are looking for that kind of accelerator in education: something that's going to bring efficiency and quality without widening inequality and, as you've said, that remains deeply human-centered, because we know that learning is an inherently social process. We cannot be naive about this. We are walking a tightrope with something that is scaling so far and evolving so rapidly, but anybody who's worked in the education system knows it's a big ship; it takes a wide berth to turn. Even with that, we are looking for a public good out of AI, because we need it. These are really tough interests to marry, but it has been done for vaccine rollout, and it is being done in countries like Estonia right now within the education space. Through all of this, you got it bang on: we've got to keep teachers, pedagogy, and curricula at the center. And more than anything else, we need to keep children at the center, matching their right to learn through multiple modes, including tech, with their rights to protection, participation, and privacy. We need to keep that in mind. But time and again we make the error of underestimating the capacity of children.

They're not passive recipients of education. They have tremendous agency. They can consume tech, they can shape it, and no doubt they will lead it in time. So today's conversation is about agency: how do we build AI that empowers children to become creative, critical, independent thinkers, who maximize the potential and take the best of AI, but offset its risks? To help us through that conversation, I have Tom and Richa. Welcome again, Tom and Richa. We're looking forward to a very robust engagement this morning. Okay. So Tom, we're going to start with you. You talked about AI sometimes feeling magical: it's abracadabra and voila, something beautiful appears. And we know how children love magic.

They really become enthralled with it and…

Tom Hall

Children do indeed love magic, don't we all? And we all like fast results. Increasingly, we have much shorter attention spans than we had maybe even 10 years ago, and so we're all looking for quick fixes. I think we're overlooking the fact that children now have immediate access to data and information that they trust inherently from the get-go, and they will take a question, get an answer, and feed it back as if it is the gospel. So there is this real danger that AI is indeed seen as a magic box, particularly generative AI. And I think it's amazing that children have this inherent curiosity; the LEGO Group celebrates that curiosity every day.

It's a wonderful thing. But as I said, I think it's a real mistake if we don't teach children to question the magic and actually make magic for themselves. And that's why we are so passionate about these fundamentals of AI literacy: if we simply hand children a box that promises quick, magical results, I think we are really short-selling them. I'd much rather we hand over the screwdriver, hand over the compass, and allow them to take things apart and start to create their own ideas. I'm not sure if I addressed your question there, but the magic is something we really want children to create for themselves. And I don't think we should be under any illusion that they're going to work this out without an education system, and a societal system, that takes this responsibility very, very seriously. And it's not about taking this responsibility in a few months' or a few years' time. The time is now to maybe stop some things and actually start a fundamentally different approach.

Saadhna Panday

…losing the responsibility to protect them.

Richa Menke

Thank you. Thank you for the question. Yes, it's challenging, because kids have access all the time; you can't stop it. As you say, they have a mind of their own. But I think, as we've seen even with social media, we don't always understand the long-term consequences. While I can have an immediate reaction and something that makes me happy in the moment, what is that going to do in the long run? So this focus on education as a filter to understand the long term, as a kind of compass toward what is a better experience, is incredibly important. That's our position in terms of how we would employ AI.

Saadhna Panday

Wonderful. So there are two things that we need for empowerment. One is foundational skills: the child needs a basic level of literacy to be able to engage with language models. Second, critical web and AI literacy. And the model you put out looks fantastic. Now let's take the model into a real-world classroom. What is it going to look like in rural Rajasthan, where we've got multigrade, multilingual, multilevel classes? How do we make this come alive and have relevance for those types of settings?

Tom Hall

I think the best thing you can do, and any teachers in this room will know this, is ask the children who are looking at you the question: what type of conversation do they want to have? On AI, we've just produced a template to discuss AI policies with your classes, and that's what we're going to do. They will assess this question in a very, very smart, thoughtful way. And if we don't ask them the question, again, we are very guilty of simply publishing something and deciding that it's in their best interest. Of course we need to guide them, and we've got a lot of information that we need to share with them. But let them think their way through this, and the best way to do that is to ask the questions. So take a discussion around, say, where does bias show up in their lives? What might that look like if a technology system leant too heavily on a false set of information?

Teaching them the basics of if-then concepts: I think you can do that in any type of classroom, and you don't need any type of equipment on the table. You need minds to be switched on, and to do that you need to ask children the questions, trust that they're going to have some thoughts, and help them guide that policy. So that's something we'd love to see spread widely.

Atish Joshua Gonsalves

Yeah, maybe just coming in on that. Prior to LEGO, I worked with the UN Refugee Agency for many years and saw these applications of ed tech in quite rural or humanitarian contexts as well. So I think there are interesting ways to bring some of these concepts to life even in very constrained settings; I think I heard the phrase "frugal AI" being used here at the conference. But one thing, even for us: just because we have access to these powerful models doesn't mean we need to put them directly into the hands of kids. So even as we look at educational progression from kindergarten right up to grade eight and beyond, age appropriateness is super important.

So even as we're looking at the littlest ones and how they learn about computational concepts and AI, a lot of where we start is actually completely screen-free. They are learning computer science concepts like sequences and loops, and doing this completely with bricks. And in some of these contexts it may be bricks, it may be something else, but it doesn't require the hardware or a screen at all. So you can teach concepts of probability and computational thinking even without these resources, if you don't have them. And this aligns well with an age-appropriate progression. But I really challenge the audience as well on this urge to put things directly into kids' hands in any context.

I mean, not just in challenging contexts in rural India, but in other countries as well. Let's not rush for the fastest and the best model, but ask what's actually right for the kids.

Saadhna Panday

Absolutely. We need to generate a fair amount of evidence before we rush to scale with something like this. Although we have to reckon with the fact that smartphone penetration in a country like India is widespread, so access is there. And a school is a microcosm of a local community: whatever is happening in the country and in our homes is going to reflect in the school, and if it impacts child well-being or learning, then the schooling system will have to respond. So Tom, I'm coming back to you again. AI can sometimes feel very passive: you put something in, you get something out. But we know that the best learning happens through engagement.

It’s that journey of discovery that excites the child. So how do we make this thing interactive? What do we need to do to support creativity in the use of AI?

Tom Hall

I'll declare my bias here, which is that I work for the LEGO Group, so I'm deeply entrenched in a passion for hands-on learning and a deep belief that when you use your hands, and the science backs this up, you are engaging all the parts of your brain that lead to learning. It leads to deeper engagement and, ultimately, a deeper mastery of the subject in front of you. We could show, through thousands of research studies that we've done through the LEGO Foundation or with any of our research partners, that spatial awareness skills develop more strongly when children are using their hands. The very basics of mathematics in the primary years develop in a stronger way when you're using manipulatives and thinking through things.

So this use of hands and manipulatives is something we believe in very deeply. And artificial intelligence is, in the end, a technology concept; we really believe there's no reason why hands-on learning shouldn't be brought in here. You saw in the video that we designed for collaboration first. This is not a one-on-one learning experience; we really want children to learn together. Groups of four, or whatever number you put around the table. We want them to be looking at each other and challenging each other, working in groups, learning the fundamentals of collaboration. It's not always easy. Things will break. You'll have to start again. You might not like the role you've been given.

That's a great life lesson. So AI can sometimes feel like the magic box, but also maybe the dark box. Actually, it's about helping kids understand the technology fundamentals that underlie artificial intelligence, and giving them curriculum that means something to them. We introduced a computer science GCSE in the UK back in 2014. I went to school in the UK; it's where I live. I'm not too proud to say that it was a failure in terms of uptake by students, because there were two mistakes that we made. One was a real lack of teachers, and there was no teacher training, so there was no innovation put into the delivery pipeline. But there was also a real lack of innovation in the courseware and the curriculum that we designed for that GCSE.

And so children just sat, very bored, in a computer science class learning very outdated principles. So I think the best thing we can do for interactivity in artificial intelligence education is apply it to things that mean something to today's teenagers and young people. That means meeting them where they are and helping them apply the fundamentals of AI to the life that's going on around them. And I think that applies both to the child in the classroom and to the teacher. So give them curriculum that applies now rather than…

Saadhna Panday

I must say that I've seen the joy of the LEGO bricks. I'm South African, and I would travel to the rural areas of KwaZulu-Natal, and there'd be nothing else there except a hut. You go to the back of the hut and you see a child with two things: the workbook given by the South African government and hand-me-down LEGO bricks. And you would see that coming alive of head, heart, and mind. It was beautiful to see. So thank you, LEGO, for that. All right. Richa, I'm coming back to you.

We're excited about the tech, but we're also worried about safety, and we're worried about privacy. And our young adolescents in particular, who also make up the child cohort, are worried about privacy and safety. So among all the issues that a private entity needs to think about when designing a digital experience for children, where do safety and privacy stand? And how do you create this joyful, meaningful experience for children while reducing the risk with a tool like AI?

Richa Menke

Thank you. So, as you can imagine, safety and privacy are absolutely foundational and non-negotiable, as we've seen on the LEGO Education side and similarly on ours. And just to be clear, none of our LEGO products actually employ AI. The Smart Brick is not using it, for exactly these reasons: we have a very high bar, and if you look through the lens of childhood, we have an even higher bar that we need to meet. So there is this tension: obviously there's so much potential for meaningful, incredible, hands-on play enabled by AI, but at the same time, until that bar is met, we would not put it in our products.

Saadhna Panday

Excellent. So for our young people of today, who will be consumers of AI, trust, transparency, privacy, sustainability, and voice will be critically important. It's important that we're not just handing something to them; they get to shape it and co-create it with us. At this point we have a couple of minutes, so we're going to take a couple of questions from the audience. Since I'm left-handed, my bias is on the left side; I'm declaring it up front. So I'm going to take three quick questions in the first round, and then I will come across. I'll take one from the front, one from the back, and then one on this side. Right. Okay. Over to you.

Nikhil Bawa

Thank you. Thank you. Fantastic session. My name is Nikhil Bawa. I write about AI and education. I'm curious what advice you would have for parents, because schools are going to be slow to adapt. Do you have resources for parents in particular? I'm trying to develop an alternate home curriculum, four hours a week outside of school, for my kid, so I'm curious what you would recommend for parents. You need a combination of structured and unstructured play, right? I want to know your views on how you're thinking about unstructured play with AI, and also about other things like self-regulation, which becomes very difficult for even a teen to manage.

So that's one question. And the second is: we're doing research on this entire AI adoption at homes, beyond classrooms. And the initial findings are quite disturbing, because it is getting adopted just because it's becoming like a race, especially in India. So I would also like to know if you have some recommendations on AI play adoption beyond the classroom.

Asha Nanavati

Good morning. Thank you so much. My name is Asha Nanavati. I'm with the Alliance Educational Foundation, which runs a small charitable K-12 school in Kerala. They love the LEGO products, you know. But I really heard what you said earlier, Richa, about capacity building, about including teachers. We're a charitable school; all profits go back to the meals, to the child. And we don't necessarily have funding for training teachers on AI adoption and safety practices. We have learners from play school up. So is LEGO thinking about doing anything in India? We would definitely love to hear more about that. Thank you.

Tom Hall

Can I take a response to those questions, working backwards? So we have a recommended AI toolkit to take into classrooms, and it's a facilitated conversation with children around: what do you think about AI? What should a policy be for a school and a classroom? To be honest, I think that is as applicable to a group of teachers on a training day as it is to children and a teacher. And I've seen really great examples of schools that I know in the UK following a similar approach. I think maybe there's a theme in all of the questions: don't worry about applying the brakes, right? Things are moving incredibly fast, and I wouldn't just go along with what can feel like this very fast river or wave or current.

I think it's perfectly okay to apply the brakes and say: we need to hit pause, and we need to have a conversation. And the conversation needs to be about what we want. And when I say we, I mean the children in the classroom and the teacher: what do we want to get out of this experience? Have the conversation first, and don't worry too much about the tools or the software that you're worried you might be missing out on. And as Richa just shared, we're not using generative AI in our products, and that's for a very deliberate reason: we just don't know enough yet about safety and privacy.

We have conducted research into that, and we're following it very closely, but we're not willing to take any risks. This time of childhood is just too precious to make shotgun choices that we're going to pay for very heavily in the future. So empower the teacher and the child to have some really formative discussions about what we want to get out of this, and then maybe look at what's available.

Atish Joshua Gonsalves

…learning: our child agency versus some scaffolding. So as we bring these products into core classrooms as part of an education strategy now, we do understand the need for teachers to provide scaffolding as they take students through this learning journey. At LEGO Education, for example, we follow something called a 5E model: engage, explore, explain, elaborate, and evaluate. But that's just a fancy way of saying: how do you get the kids hooked initially on a big-picture question or a real-life example, while providing the educators and the students a structure as they go through the process of thinking about that question. I think someone had that question yesterday: the distance between a question and an answer, and that space in between, that's where magic or inspiration happens, right?

And so giving that space for that to happen. And then, when they're building, you're providing the structure for them to work in groups and build this out. But towards the end, in the elaboration phase and at the end of every unit, there's something called a design challenge, where the kids are not given much instruction. They're given an open-ended prompt, and then they take the concepts and lessons they've learned and apply them in a more open-ended way. Outside of the LEGO Education computer science and AI product, we also have something called FIRST LEGO League, which is the world's largest annual STEM competition. And there, it's so inspiring to see these groups of eight kids building out a robotics challenge and then doing a science theme as well. But that's completely open-ended, so they will go beyond what they would do within a 45-minute lesson and have a lot more agency in terms of what they can create, beyond what the teacher would take them through in a classroom.

Tom Hall

Nikhil, we have some really great resources actually available online, both from LEGO and the LEGO Foundation, around facilitated play with your child, starting from the very early years through to the later years.

Saadhna Panday

So I'm going to take two more questions, because we're coming to the end of the session and we need to close. Okay, now I will take one question, but a really quick one.

Tom Hall

Well, I think we heard a lot yesterday that we need to make sure that any tools made available are offered in languages that mean something to you on the ground. There are many tools out there that can do automated translation, and we hope the quality is going to be really strong. We're currently producing in English; of course, there will be localizations in the future.

Saadhna Panday

All right, colleagues, we need to come to a close because people need to move to the next session. We're designing for safety and for equity, and while we provide services, we need to match them with demand. And to match demand, teachers, learners, and parents need to be empowered. That responsibility rests with all of us. It's hard to do many things in an education system; empowerment is not one of them. We can do it quickly, we can do it at scale, and we can do it with equity. So I want to say thank you to our panelists for an engaging conversation, and a big thank you to LEGO for bringing us together to have a conversation about children, education, and AI.

Thank you so much. The session is closed. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (15)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“AI literacy is essential and children need a voice in AI policy; AI is unavoidable like taxes.”

The knowledge base stresses the importance of empowering young people to engage with AI and participate in its governance, aligning with the claim that AI literacy is crucial and youth should have a voice in policy [S1] and [S89] and [S90] and highlights the broader need for responsible AI for children [S2].

Confirmed (high)

“LEGO Education ensures AI safety by processing data on‑device so data never leaves the device, provides clear data provenance, and avoids anthropomorphising AI.”

LEGO’s child-centric design emphasizes on-device AI processing and privacy-by-design, ensuring data stays local and supporting transparent, safe AI experiences, which corroborates the reported safety rules [S62] and the edge-computing, on-device model approach described in privacy-focused sources [S97] and [S98].

Confirmed (medium)

“LEGO Education’s design supports neuro‑diverse learners through universal design principles.”

LEGO’s commitment to inclusive, child-well-being-focused design, including support for neuro-diverse learners, is documented in the knowledge base, confirming the claim of universal design for diverse learners [S62].

External Sources (99)
S1
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-children-safe-playful-and-empowering-learning — Absolutely. We need to generate a fair amount of evidence before we rush to scale with something like this. Although we …
S2
Responsible AI for Children Safe Playful and Empowering Learning — – Saadhna Panday- Asha Nanavati – Tom Hall- Saadhna Panday
S3
Responsible AI for Children Safe Playful and Empowering Learning — – Saadhna Panday- Atish Joshua Gonsalves- Asha Nanavati – Tom Hall- Atish Joshua Gonsalves- Speaker 4- Asha Nanavati
S4
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-children-safe-playful-and-empowering-learning — Good morning. Thank you so much. My name is Asha Nanavati. I’m with Alliance Educational Foundation, which runs a charit…
S5
Safeguarding Children with Responsible AI — -Tom Hall- Vice President and General Manager at Lego Education (works with the National Legal Foundation)
S6
https://dig.watch/event/india-ai-impact-summit-2026/safeguarding-children-with-responsible-ai — Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I’m one of the two co -moderators, and…
S8
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-children-safe-playful-and-empowering-learning — So that’s one question and the second is, we’re doing a research on this entire AI adoption at homes which is beyond cla…
S9
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-children-safe-playful-and-empowering-learning — AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help u…
S10
Responsible AI for Children Safe Playful and Empowering Learning — Thanks, Steve. Hi, everyone, good morning. Thank you for having me. So, my name is Richa Menke. I head up interactive pl…
S11
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S12
S13
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — -Speaker 4: Role/title not mentioned (made a brief interjection during the session)
S14
Responsible AI for Children Safe Playful and Empowering Learning — – Saadhna Panday- Atish Joshua Gonsalves- Asha Nanavati – Tom Hall- Atish Joshua Gonsalves- Richa Menke – Tom Hall- At…
S15
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S16
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S17
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S18
S19
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Lee Rainie:Thank you so much, President Book. It’s a pleasure to be here and to be associated with this really important…
S20
Building Indias Digital and Industrial Future with AI — – Rahul Vatts- Speaker 1 – Speaker 1- Deepak Maheshwari
S21
From principles to practice: Governing advanced AI in action — Sasha Rubel: It’s not an afterthought. I love that. Safety is the foundation and not an afterthought. It’s again one of …
S22
Conversation: 02 — “So that’s why without trust and safety and understanding of what’s happening in your underlying environment, it becomes…
S23
Let’s design the next Global Dialogue on AI & Metaverses | IGF 2023 Town Hall #25 — In conclusion, the analysis of the data provides insights into AI, misinformation, education, and inclusivity. A balance…
S24
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Incorporating local languages is important for making technology accessible to non-English speakers.
S25
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Yeah, thanks, Steve. Very well covered. If I can add just a few more points. I think one of the challenges we see is cop…
S26
How nonprofits are using AI-based innovations to scale their impact — Right. Yeah, I guess the error rate, the hit ratio and what kind of an impact it has depends on the use case. And if the…
S27
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But teachers need support. They need professional development around AI literacy, reasonable class sizes that allow for …
S28
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — Sophie:Yes, I can give a short insight. So we have the Digital Services Act and the European Union, which is going to be…
S29
AI award-winning headless flamingo photo found to be real — A controversial AI-generated photo of a headless flamingo has ignited a heated debate over the ethical implications of AI …
S30
Brainstorming with AI opens new doors for innovation — AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Compa…
S31
Open Forum #38 Harnessing AI innovation while respecting privacy rights — Audience: Thank you so much for your presentation. My name is Hasara Tebi. I’m from Mawadda Association for Family Sta…
S32
WS #162 Overregulation: Balance Policy and Innovation in Technology — Tercova emphasizes that patient privacy, data protection, and minimizing bias in algorithms are non-negotiable aspects o…
S33
Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities — Apple, Microsoft, and Google are spearheading a technological revolution with their vision of AI smartphones and computers…
S34
WS #172 Regulating AI and Emerging Risks for Children’s Rights — Nidhi Ramesh: Hello, everyone, and thank you, Leanda, so much for such a kind introduction. I’ll repeat, my name is Ni…
S35
When Code and Creativity Collide: AI’s Transformation of Music and Creative Expression — Moderate to significant disagreements with important implications. The speakers’ different perspectives on AI’s current …
S36
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S37
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Unexpectedly, there was strong consensus across industry, government, and academic perspectives on the need for collabor…
S38
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — UNESCO is providing policy guidance on AI in education, focusing on frameworks that emphasize ethical use of AI in educa…
S39
Leveraging the UN system to advance global AI Governance efforts — Equally, there’s an emphasis placed on the benefits of collaboration and teamwork. The analysis proposes that cooperativ…
S40
Education meets AI — Participants stressed the need for unbiased data to ensure fair and equal treatment. It was acknowledged that mistakes m…
S41
Skilling and Education in AI — The Professor took a notably realistic turn in acknowledging that AI will inevitably create new forms of inequality, des…
S42
WS #232 Innovative Approaches to Teaching AI Fairness &amp; Governance — 2. Create Flexible Learning Frameworks: Develop adaptable educational approaches that can be tailored to different conte…
S43
Rethinking Africa’s digital trade: Entrepreneurship, innovation, &amp; value creation in the age of Generative AI (depHub) — In summary, the analysis raises critical concerns regarding data protection, privacy, and ethical considerations. It und…
S44
#IGF2020: Final report — Kids are advised to resist the urge to answer bullies, or alternatively, to block them while seeking help from those they…
S45
Informal Stakeholder Consultation Session — Making Capacity Building Concrete and Funded:Emphasized the need for concrete action on capacity building by providing f…
S46
Opening of the session — Capacity building is essential for political and institutional resource development. There is a need for reflecting cap…
S47
DCAD &amp; DC-OER: Building Barrier-Free Emerging Tech through Open Solutions — The discussion emphasized the importance of a multi-stakeholder approach, involving policymakers, educators, developers,…
S48
Responsible AI for Children Safe Playful and Empowering Learning — “So this use of hands and manipulatives is something we believe in so deeply.”[45]. “You lead to deeper engagement and u…
S49
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Maybe I can take this one. Yes, thank you for the comment, Elcho. Yes, it is a risk and it is an issue …
S50
Lessons learned: Offering our course on AI for the first time — Participants who attended the AI course were frequently motivated by professional needs. Either they had been requested …
S51
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — UNICEF has played a proactive role in the field of AI for children by creating policy guidance on the topic. Importantly…
S52
AI and Magical Realism: When technology blurs the line between wonder and reality — Avoid using magical arguments for practical governance: e.g. framing current policy issues on market, human rights, and kn…
S53
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S54
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Pavan Duggal: We are very clear that the legal frameworks of artificial intelligence have to be an important catalyst in …
S55
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Generative AI and large language models have the potential to significantly enhance conversational systems. These system…
S56
Atelier #1: “Digital infrastructure and services in the age of AI: what challenges for regulation, security, and data sovereignty?” — Drudeisha Madhub: Madam President, I truly thank you, because it has been brilliant so far on your …
S57
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast a…
S58
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S59
Advancing Scientific AI with Safety Ethics and Responsibility — High level of consensus with significant implications for AI governance policy. The agreement across speakers from diffe…
S60
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Additionally, there are worries about the use of unverified information in machine learning processes. These concerns hi…
S61
WS #376 Elevating Children’s Voices in AI Design — Stephen Balkam: Yeah, this feels like deja vu all over again, I was very much involved in the web 1.0 back in the mid 90…
S62
RITEC: Prioritizing Child Well-Being in Digital Design | IGF 2023 Open Forum #52 — By addressing the crisis head-on, LEGO Group demonstrates their commitment to protecting children and building a safer o…
S63
Safeguarding Children with Responsible AI — Tom Hall from LEGO Education highlighted a critical implementation gap: while 80% of teachers recognise AI literacy as f…
S64
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Gbenga Sesan: It’s like you framed my conversation already. I’m glad we’re having a lot of conversations around AI. This …
S65
AI &amp; Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — The world and a majority of education systems lack adequate literacy and numeracy levels. The project also had a positive impa…
S66
Empowering India &amp; the Global South Through AI Literacy — I hope we don’t become artificially polite, but then I’m hoping that some of these things rubs off in the language of te…
S67
Laying the foundations for AI governance — This comment is insightful because it directly contradicts the common assumption that companies oppose regulation. Seafo…
S68
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S69
The AI gold rush where the miners are broke — The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economi…
S70
Responsible AI for Children Safe Playful and Empowering Learning — It’s a wonderful thing. But as I said, I think it’s a real mistake if we don’t teach children to question the magic and …
S71
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S72
Safeguarding Children with Responsible AI — This comment shifted the discussion from abstract concerns about AI risks to concrete pedagogical approaches. It influen…
S73
WS #232 Innovative Approaches to Teaching AI Fairness &amp; Governance — Ayaz Karimov: Yeah. I can hear myself. So it means actually you can also hear me. Today, I will talk a little bit about …
S74
Empowering Workers in the Age of AI — Tom Wambeke: Good afternoon. This is the last input before we can go a little bit more interactive. As you see from the …
S75
Scaling Multistakeholder Partnerships: Connectivity and Education — Ms. Erin Chemery: Thanks so much, Karen. And thank you to ITU and GECA for hosting us today. I’m really loving the learni…
S76
Education meets AI — It is argued that employing a wide variety of people to collect data and design algorithms can ensure that no one is lef…
S77
Transforming Health Systems with AI From Lab to Last Mile — Data privacy, security and ethical safeguards
S78
WS #162 Overregulation: Balance Policy and Innovation in Technology — Natalie Tercova: Of course, I’ll try to be very brief. So I very much agree that it very depends on the specific case…
S79
Dedicated stakeholder session (in accordance with agreed modalities for the participation of stakeholders of 22 April 2022)/OEWG 2025 — Red en Defensa de los Derechos Digitales: It is essential for states and stakeholders to collaborate to strengthen the …
S80
#IGF2020: Final report — Kids are advised to resist the urge to answer bullies, or alternatively, to block them while seeking help from those they…
S81
DCAD &amp; DC-OER: Building Barrier-Free Emerging Tech through Open Solutions — The discussion emphasized the importance of a multi-stakeholder approach, involving policymakers, educators, developers,…
S82
Open Forum #29 Multisectoral action and innovation for child safety — Examples include developing tailored school curricula materials, capacity-building efforts for teachers and parents, and…
S83
WSIS Action Line C7 E-learning — This comment redirected the conversation toward practical implementation challenges and the need for capacity building. …
S84
AN INTRODUCTION TO — A much needed step beyond awareness building and training of youth, parents and educators is capacity building in the ar…
S85
[Parliamentary Session 4] Fostering Inclusive Digital Innovation and Transformation — Mario Nobile: Only one minute if I may. Robert answered all the questions, but three points. The first one, Italy, …
S86
A Digital Future for All (morning sessions) — Amr Talaat: The hope of digital, or is it the fear of digital? Distinguished guests, this is a question that resonates…
S87
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk. Discussions on emerging…
S88
How African knowledge and wisdom can inspire the development and governance of AI — The aim is to safeguard the accurate portrayal and preservation of Africa’s knowledge and cultural heritage, entrusting …
S89
AI for Good Impact Initiative — It is important for young people to feel they can contribute to and influence their future. Young people should be able…
S90
Open Forum #26 High-level review of AI governance from Inter-governmental P — The speaker mentions the perception among youth that governance often comes in to regulate innovative ideas before they …
S91
AI promises, ethics, and human rights: Time to open Pandora’s box — Given the variety, interdependence, and complexity of the issues, multiple approaches need to be combined in order to me…
S92
UN Human Rights Council: High level discussion on AI and human rights — Systemic prompt: You are supposed to answer questions on the impact of AI on human rights with specific reference of the …
S93
AI-Powered Chips and Skills Shaping India’s Next-Gen Workforce — This comment was prophetic in highlighting how technological disruption (like AI automating coding) can make narrow skil…
S94
WS #214 Youth-Led Digital Futures: Integrating Perspectives and Governance — Andere mentions the mismatch between university education and the skills needed for the future of work, citing his perso…
S95
Key points by session — The city of Chicago implemented a successful education reform thanks to clear vision and leadership on the part of ke…
S96
New Colours of Knowledge — Regarding the possibility of attracting and retaining the best individuals in the profession, one of the identified pro…
S97
Trusted Connections_ Ethical AI in Telecom &amp; 6G Networks — He highlights that privacy‑sensitive and latency‑critical workloads are best kept at the edge, where user data never lea…
S98
Google’s AI Edge Gallery boosts privacy with on-device model use — Google has released an experimental app called AI Edge Gallery, allowing Android users to run AI models directly on their…
S99
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — Ramadori criticizes the current approach of trying to fix AI problems after they manifest, arguing that this patching me…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument · 146 words per minute · 455 words · 186 seconds
Argument 1
AI literacy essential for future participation (Speaker 1)
EXPLANATION
Speaker 1 argues that understanding AI is a prerequisite for staying relevant in a world where the technology is ubiquitous. They stress that without AI literacy individuals will be left behind and that young people should have a voice in shaping AI policies.
EVIDENCE
The speaker expresses curiosity about how AI works and a desire to learn its everyday applications, likens AI to taxes as unavoidable, states a commitment to solving big problems, and calls for a say in AI policies, concluding with gratitude for being asked for input [1-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Empowering India and the Global South through AI literacy and calls for inclusive AI education underline the necessity of AI literacy for future participation [S18], while discussions on education, inclusion, and literacy as must-haves for a positive AI future reinforce this point [S19]. Building India’s digital and industrial future with AI also highlights the central role of AI literacy [S20].
MAJOR DISCUSSION POINT
AI literacy as a societal necessity
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Saadhna Panday
Tom Hall
6 arguments · 167 words per minute · 2191 words · 786 seconds
Argument 1
Teach AI fundamentals, not “magic box” perception (Tom Hall)
EXPLANATION
Tom Hall contends that AI should be presented as a technology rather than a mysterious magic box. He advocates for giving children the conceptual tools—like a screwdriver—to deconstruct and understand AI systems, not just to consume their outputs.
EVIDENCE
He describes how children view generative AI as a magic box that produces answers instantly, then argues that AI literacy must move beyond using the box to understanding its underlying mechanisms, using the screwdriver metaphor to emphasize foundational knowledge [16-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI for Children emphasizes that AI is a technology system, not a magic box, and stresses teaching fundamentals rather than superficial use [S2]; Safeguarding Children with Responsible AI notes the shift from abstract concerns to concrete curriculum that demystifies AI [S5].
MAJOR DISCUSSION POINT
Demystifying AI for learners
Argument 2
Manipulatives and tactile learning deepen engagement and mastery (Tom Hall)
EXPLANATION
Tom Hall emphasizes that hands‑on, manipulative‑based learning engages multiple brain regions and leads to deeper mastery of concepts. He links tactile interaction with improved spatial awareness and foundational mathematics.
EVIDENCE
He cites research from the LEGO Foundation showing that manipulatives strengthen spatial awareness and basic math skills, and argues that such hands-on approaches should be extended to AI education [263-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The responsible AI discussion highlights the value of hands-on manipulatives for deeper engagement and mastery of concepts [S2]; Safeguarding Children with Responsible AI also cites research linking tactile learning to stronger spatial awareness and mastery [S5].
MAJOR DISCUSSION POINT
Physical interaction as a learning catalyst
AGREED WITH
Atish Joshua Gonsalves, Richa Menke, Saadhna Panday
Argument 3
Emphasize policy discussion and safety as core to AI adoption (Tom Hall)
EXPLANATION
Tom Hall stresses that before deploying AI tools in classrooms, educators must facilitate policy discussions with children about bias, safety, and ethical use. He warns against treating AI as a black‑box solution without critical questioning.
EVIDENCE
He notes children’s tendency to trust AI outputs as gospel, calls for asking children about bias and encouraging them to think through policy questions, and repeats the need for guided discussion rather than simply providing tools [187-194][229-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
From Principles to Practice stresses that safety must be built-in, not an afterthought, for AI adoption [S21]; Conversation notes that trust and safety are prerequisites for effective AI use [S22]; Safeguarding Children with Responsible AI calls for policy dialogue and safety-first approaches in classrooms [S5]; Professional development literature underscores the need for teacher support when introducing AI safely [S27].
MAJOR DISCUSSION POINT
Policy dialogue as a safety measure
AGREED WITH
Richa Menke, Atish Joshua Gonsalves, Saadhna Panday
Argument 4
Localization of AI tools into relevant languages is essential (Tom Hall)
EXPLANATION
Tom Hall argues that AI tools must be available in languages that are meaningful to local learners to ensure equitable access. He mentions ongoing work to translate tools beyond English.
EVIDENCE
He states that tools should be delivered in local languages, notes current production in English, and promises future localizations, highlighting the importance of automated translation quality [381-385].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital Public Infrastructure research highlights the importance of local language incorporation for accessibility [S24]; Welfare for All discusses challenges of copying regulations without local adaptation, underscoring the need for language-specific localization [S25].
MAJOR DISCUSSION POINT
Language relevance for equitable AI use
AGREED WITH
Saadhna Panday, Atish Joshua Gonsalves
Argument 5
Teachers require dedicated AI training and resources to scale implementation (Tom Hall)
EXPLANATION
Tom Hall points out that teachers need specific AI training, resources, and structured toolkits to effectively bring AI literacy into classrooms. Without this support, scaling AI education will be limited.
EVIDENCE
He describes a recommended AI toolkit for classroom conversations, cites examples of UK schools using the approach, and stresses the need for teacher training days and discussions before tool deployment [344-350].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI points to the necessity of teacher training and structured toolkits for AI literacy [S5]; How nonprofits are using AI-based innovations notes that teacher support is key to scaling impact [S26]; Rethinking Learning stresses professional development and institutional backing for teachers introducing AI [S27].
MAJOR DISCUSSION POINT
Professional development for educators
AGREED WITH
Atish Joshua Gonsalves, Saadhna Panday, Asha Nanavati
Argument 6
Children should be enabled to create their own “magic” rather than consume it passively (Tom Hall)
EXPLANATION
Tom Hall argues that children must move from passive consumption of AI outputs to active creation, using the metaphor of handing them a screwdriver and compass to build their own solutions. This shift fosters deeper understanding and agency.
EVIDENCE
He describes the danger of viewing AI as a magic box, emphasizes giving children tools to deconstruct and create, and calls for an education system that takes this responsibility seriously, asserting that the time to act is now [191-194][229-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI for Children stresses giving children tools to create rather than merely consume, highlighting agency over the “magic box” perception [S2]; Safeguarding Children with Responsible AI references the need for real-world curriculum that empowers children to build their own solutions [S5].
MAJOR DISCUSSION POINT
Empowering children as creators
AGREED WITH
Richa Menke, Saadhna Panday
Richa Menke
3 arguments · 163 words per minute · 1203 words · 441 seconds
Argument 1
Play‑driven imagination can harness AI’s creative potential (Richa Menke)
EXPLANATION
Richa Menke suggests that AI, when integrated with play, can unlock new creative possibilities for children. She highlights that generative AI’s probabilistic “hallucinations” can become playful features rather than bugs.
EVIDENCE
She explains that AI can inspire children facing a blank canvas, support diverse learning methods, and that hallucinations in generative AI could be viewed as playful features, illustrating the potential of AI-enhanced play [118-128].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brainstorming with AI opens new doors for innovation describes AI as a creative partner that can inspire play and imagination [S30]; When Code and Creativity Collide discusses AI’s transformation of creative expression, supporting the idea of AI-enhanced imaginative play [S35].
MAJOR DISCUSSION POINT
AI as a catalyst for imaginative play
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Saadhna Panday
Argument 2
Safety and privacy are non‑negotiable; current products avoid AI use (Richa Menke)
EXPLANATION
Richa stresses that safety and privacy are absolute requirements for child‑focused products, and therefore their current LEGO offerings deliberately do not incorporate AI. This cautious stance reflects a higher standard for childhood products.
EVIDENCE
She states that safety and privacy are foundational, notes that none of LEGO’s current products employ AI, and explains the high bar set for childhood experiences, emphasizing the tension between potential and safety [306-314].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
WS #172 Regulating AI and Emerging Risks for Children’s Rights emphasizes safety and privacy as non-negotiable for child-focused AI [S28]; Open Forum #38 highlights privacy concerns in AI deployments [S31]; Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities discusses the imperative of privacy and data protection [S32]; WS #162 Overregulation stresses privacy and bias as essential safeguards [S33].
MAJOR DISCUSSION POINT
Zero‑tolerance for safety risks
AGREED WITH
Atish Joshua Gonsalves, Saadhna Panday, Tom Hall
Argument 3
Tension between efficiency and imagination; over‑reliance on AI may curb creativity (Richa Menke)
EXPLANATION
Richa identifies a key tension: while AI can deliver fast answers, over‑reliance may diminish children’s imagination and confidence in their own creative abilities. She warns that personalization at a young age could limit future development.
EVIDENCE
She outlines the conflict between efficiency (quick answers) and imagination (struggle, creative development), and raises concerns about early personalization potentially holding children back, linking these points to broader risks for imagination and agency [130-135].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI for Children directly addresses the tension between quick AI answers and the need for imagination, warning that efficiency can limit creative development [S2]; When Code and Creativity Collide further explores how AI can both aid and hinder creative processes [S35].
MAJOR DISCUSSION POINT
Balancing speed with creative growth
AGREED WITH
Tom Hall, Saadhna Panday
Atish Joshua Gonsalves
4 arguments · 214 words per minute · 2059 words · 575 seconds
Argument 1
LEGO product delivers safe, hands‑on, collaborative AI experiences (Atish Joshua Gonsalves)
EXPLANATION
Atish describes LEGO Education’s AI product as a safe, hands‑on platform that encourages collaboration among students. The product is built on universal design principles and runs AI locally to protect privacy.
EVIDENCE
He outlines guidelines for safety (no anthropomorphising, no text generation), fairness, transparency (clear data provenance), and privacy (on-device processing), and explains how the product supports collaborative, hands-on learning through coding and building activities [31-34][36-41][46-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI outlines safe, hands-on approaches for child AI learning, aligning with LEGO’s collaborative, safety-first design [S5]; Open Forum #38 underscores the importance of trust and privacy in child-focused AI tools [S31].
MAJOR DISCUSSION POINT
Safe, collaborative AI learning tools
AGREED WITH
Tom Hall, Richa Menke, Saadhna Panday
Argument 2
Built‑in safety, fairness, transparency, and on‑device privacy safeguards (Atish Joshua Gonsalves)
EXPLANATION
Atish emphasizes that the LEGO AI solution embeds safety, fairness, transparency, and privacy by design, ensuring that data never leaves the device and that models have clear provenance.
EVIDENCE
He details that the product does not generate text or media, avoids anthropomorphising, follows universal design for neurodiverse learners, guarantees data provenance, and runs all AI locally with no data transmission to third parties [31-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities stresses privacy-by-design and bias mitigation as essential [S33]; WS #162 Overregulation highlights privacy, bias, and data protection as non-negotiable aspects of AI for children [S32]; WS #172 Regulating AI and Emerging Risks for Children’s Rights reinforces the need for trust, transparency, and safety [S28].
MAJOR DISCUSSION POINT
Privacy‑by‑design in educational AI
AGREED WITH
Richa Menke, Saadhna Panday, Tom Hall
Argument 3
Frugal, screen‑free, age‑appropriate AI concepts enable learning in resource‑limited settings (Atish Joshua Gonsalves)
EXPLANATION
Atish argues that AI concepts can be taught without screens or expensive hardware, using frugal approaches such as brick‑based activities that introduce computational thinking and probability even in low‑resource environments.
EVIDENCE
He references his prior work with UNHCR, mentions “frugal AI”, stresses age-appropriate progression starting with screen-free brick activities that teach sequences, loops, and probability without hardware [238-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital Public Infrastructure research notes that local language and low-resource accessibility are crucial for equitable AI adoption, supporting frugal, screen-free approaches in underserved contexts [S24]; Welfare for All discusses the challenges of applying standards without local adaptation, underscoring the need for resource-appropriate solutions [S25].
MAJOR DISCUSSION POINT
Low‑cost, screen‑free AI education
AGREED WITH
Tom Hall, Saadhna Panday
Argument 4
Dedicated teacher portal provides curriculum, guides, and scaffolding (Atish Joshua Gonsalves)
EXPLANATION
Atish highlights a teacher portal that supplies lesson plans, coding canvases, and scaffolding resources, enabling educators to support students in building and understanding AI.
EVIDENCE
He mentions the teacher portal that offers curriculum, guides, and scaffolding, allowing teachers to facilitate hands-on AI lessons and supports educators in delivering the product’s learning objectives [32-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI highlights the importance of teacher-focused curricula and guides for AI literacy [S5]; How nonprofits are using AI-based innovations points to the need for teacher support resources to enable effective AI instruction [S26].
MAJOR DISCUSSION POINT
Teacher‑centric support infrastructure
AGREED WITH
Tom Hall, Saadhna Panday, Asha Nanavati
Saadhna Panday
6 arguments · 129 words per minute · 1416 words · 657 seconds
Argument 1
AI must be equitable and keep child agency central (Saadhna Panday)
EXPLANATION
Saadhna stresses that AI should be deployed in ways that are equitable and that preserve children’s agency. She warns that AI can exacerbate existing inequalities if not carefully managed.
EVIDENCE
She notes AI’s uneven impact across urban Delhi versus rural Jharkhand, emphasizes the need for equitable, evidence-based solutions, and highlights children’s agency and capacity to shape technology [160-178].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI for Children stresses that children are active agents, not passive recipients, aligning with calls for equitable AI that preserves agency [S2]; WS #172 Regulating AI and Emerging Risks for Children’s Rights underscores equity and child agency in AI deployment [S34].
MAJOR DISCUSSION POINT
Equity and agency in AI deployment
AGREED WITH
Speaker 1, Tom Hall, Atish Joshua Gonsalves
Argument 2
Joy of Lego bricks illustrates learning in low‑resource contexts (Saadhna Panday)
EXPLANATION
Saadhna shares a personal anecdote about seeing children in a rural South African hut using hand‑me‑down LEGO bricks alongside a workbook, illustrating how simple, tangible resources can spark learning even in resource‑poor settings.
EVIDENCE
She recounts traveling to rural KwaZulu-Natal, seeing a child with a workbook and LEGO bricks, describing the experience as “beautiful” and a vivid example of head, heart, and mind learning [293-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital Public Infrastructure research highlights how tangible, low-tech resources can bridge digital divides in low-resource settings, echoing the LEGO brick example [S24].
MAJOR DISCUSSION POINT
Tangible play as a bridge in low‑resource schools
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Richa Menke
Argument 3
Trust, transparency, and privacy must underpin all child‑focused AI tools (Saadhna Panday)
EXPLANATION
Saadhna argues that any AI tool for children must be built on trust, transparency, and robust privacy safeguards, ensuring that children’s rights are protected.
EVIDENCE
She calls for trust, transparency, privacy, and safety as non-negotiable, referencing concerns about privacy and safety raised earlier and emphasizing the need for these principles in all child-focused AI tools [310-314][300-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open Forum #38 and “Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities” both stress trust, transparency, and privacy as foundational for child-focused AI [S31], [S33]; WS #172 Regulating AI and Emerging Risks for Children’s Rights reinforces these principles [S28].
MAJOR DISCUSSION POINT
Rights‑based design for child AI
AGREED WITH
Richa Menke, Atish Joshua Gonsalves, Tom Hall
Argument 4
Urban‑rural AI divide threatens equitable education outcomes (Saadhna Panday)
EXPLANATION
Saadhna highlights the stark contrast between AI exposure in urban schools and its scarcity in remote tribal areas, warning that this divide could widen educational inequities.
EVIDENCE
She contrasts AI presence in urban Delhi homes and schools with its near-absence for a tribal girl in rural Jharkhand, underscoring the uneven rollout of AI in education [162-168].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital Public Infrastructure emphasizes the importance of local language and access to mitigate urban-rural gaps [S24]; Welfare for All discusses challenges of applying uniform standards across diverse locales, highlighting the urban-rural divide issue [S25].
MAJOR DISCUSSION POINT
Geographic disparity in AI access
AGREED WITH
Tom Hall, Atish Joshua Gonsalves
Argument 5
Empowering educators is key to delivering effective AI literacy (Saadhna Panday)
EXPLANATION
Saadhna stresses that teachers must be empowered with knowledge, tools, and support to effectively teach AI literacy, positioning educator empowerment as central to successful implementation.
EVIDENCE
She notes the need for empowerment, references the panel’s focus on empowerment, and invites audience questions, indicating that teacher capacity is essential for scaling AI literacy [205-210][311-313].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI calls for teacher empowerment and resources to scale AI literacy [S5]; Rethinking Learning stresses professional development and institutional backing for teachers introducing AI [S27]; How nonprofits are using AI-based innovations notes teacher support as critical for impact [S26].
MAJOR DISCUSSION POINT
Teacher empowerment for AI literacy
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Asha Nanavati
Argument 6
Emphasize child agency, critical thinking, and responsible AI use (Saadhna Panday)
EXPLANATION
Saadhna calls for AI education that foregrounds child agency, critical thinking, and responsible use, arguing that children should be active creators rather than passive consumers.
EVIDENCE
She describes children as not passive recipients but agents who can consume, shape, and eventually lead AI, and frames the conversation around building agency, critical, independent thinkers [175-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI for Children highlights child agency and the need for critical, independent thinking in AI education [S2]; WS #172 Regulating AI and Emerging Risks for Children’s Rights underscores responsible AI use centered on children’s rights and agency [S34].
MAJOR DISCUSSION POINT
Agency‑centric AI education
AGREED WITH
Tom Hall, Richa Menke
Asha Nanavati
1 argument · 140 words per minute · 101 words · 43 seconds
Argument 1
Need affordable teacher training and support for AI in Indian charitable schools (Asha Nanavati)
EXPLANATION
Asha points out that charitable schools in India lack funding for teacher training on AI safety and adoption, and asks whether LEGO plans to support such initiatives locally.
EVIDENCE
She describes her charitable K-12 school in Kerala, notes limited resources for AI teacher training, and asks if LEGO is planning any programs in India to address these gaps [332-342].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI stresses the necessity of teacher training and resources for safe AI adoption in schools [S5]; How nonprofits are using AI-based innovations and Rethinking Learning both underline the importance of affordable professional development for educators in low-resource contexts [S26], [S27].
MAJOR DISCUSSION POINT
Funding and support for teacher capacity in low‑income schools
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Saadhna Panday
Nikhil Bawa
1 argument · 116 words per minute · 200 words · 102 seconds
Argument 1
Parents need structured/unstructured resources for home AI learning (Nikhil Bawa)
EXPLANATION
Nikhil asks for guidance and resources that parents can use at home to teach AI, emphasizing the need for both structured curricula and unstructured play to support learning outside school.
EVIDENCE
He requests advice for parents, mentions developing an alternate home curriculum of four hours per week, and asks about resources for unstructured play, self-regulation, and broader AI adoption at home [320-328].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI mentions the need for both structured curricula and play-based, unstructured learning resources to support AI education beyond the classroom [S5]; How nonprofits are using AI-based innovations notes the role of community channels (e.g., WhatsApp) in extending learning to homes [S26].
MAJOR DISCUSSION POINT
Home‑based AI education support
Speaker 4
1 argument · 1 word per minute · 1 word · 54 seconds
Argument 1
Brief learner acknowledgment of learning moment (Speaker 4)
EXPLANATION
Speaker 4 offers a short interjection that appears to acknowledge a learning moment, though the content is incomplete.
MAJOR DISCUSSION POINT
Learner acknowledgment
Agreements
Agreement Points
AI literacy is essential and should focus on fundamentals rather than a magical perception
Speakers: Speaker 1, Tom Hall, Atish Joshua Gonsalves, Saadhna Panday
AI literacy essential for future participation (Speaker 1)
Teach AI fundamentals, not “magic box” (Tom Hall)
Lego product delivers safe, hands‑on, collaborative AI experiences (Atish Joshua Gonsalves)
AI must be equitable and keep child agency central (Saadhna Panday)
All four speakers stress that understanding AI – its basic concepts, limits and societal impact – is a prerequisite for future participation and for children to have a voice in AI policy, rejecting the view of AI as a mysterious magic box. [1-6][16-23][24-27][31-34][160-178]
POLICY CONTEXT (KNOWLEDGE BASE)
UNICEF’s policy guidance stresses grounding AI literacy in concrete fundamentals rather than hype, echoing the call to avoid “magical” framing of technology [S51][S52].
Hands‑on, tactile, play‑based learning deepens engagement and mastery of AI concepts
Speakers: Tom Hall, Atish Joshua Gonsalves, Richa Menke, Saadhna Panday
Manipulatives and tactile learning deepen engagement and mastery (Tom Hall)
Lego product delivers safe, hands‑on, collaborative AI experiences (Atish Joshua Gonsalves)
Play‑driven imagination can harness AI’s creative potential (Richa Menke)
Joy of Lego bricks illustrates learning in low‑resource contexts (Saadhna Panday)
The speakers agree that learning through physical manipulatives, building, and playful interaction – whether with LEGO bricks or AI-enhanced play – leads to stronger cognitive development and better grasp of AI principles. [263-270][31-34][36-41][118-128][293-298]
POLICY CONTEXT (KNOWLEDGE BASE)
The “Responsible AI for Children” framework highlights that manipulatives and hands-on activities lead to deeper engagement and mastery, a principle also reflected in LEGO Education’s emphasis on giving children a “screwdriver” to explore AI [S48][S63].
Safety, privacy and ethical safeguards are non‑negotiable in child‑focused AI tools
Speakers: Richa Menke, Atish Joshua Gonsalves, Saadhna Panday, Tom Hall
Safety and privacy are non‑negotiable; current products avoid AI use (Richa Menke)
Built‑in safety, fairness, transparency, and on‑device privacy safeguards (Atish Joshua Gonsalves)
Trust, transparency, and privacy must underpin all child‑focused AI tools (Saadhna Panday)
Emphasize policy discussion and safety as core to AI adoption (Tom Hall)
All speakers underline that any AI solution for children must guarantee safety, privacy and ethical design, with data never leaving the device and clear governance, making these aspects a hard floor for product development. [306-314][31-34][46-51][310-314][300-304][187-194][229-236]
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources underline non-negotiable safety standards for children, including UNICEF’s child-rights-based AI policy, LEGO’s public commitment to child-online safety, and broader calls for safeguards against unverified content [S51][S62][S60][S65].
Teacher empowerment and provision of resources are critical for scaling AI literacy
Speakers: Tom Hall, Atish Joshua Gonsalves, Saadhna Panday, Asha Nanavati
Teachers require dedicated AI training and resources to scale implementation (Tom Hall)
Dedicated teacher portal provides curriculum, guides, and scaffolding (Atish Joshua Gonsalves)
Empowering educators is key to delivering effective AI literacy (Saadhna Panday)
Need affordable teacher training and support for AI in Indian charitable schools (Asha Nanavati)
The panel concurs that without targeted professional development, teacher toolkits and ongoing support, AI literacy cannot be effectively introduced or scaled, especially in low-resource settings. [344-350][32-38][205-210][311-313][332-342]
POLICY CONTEXT (KNOWLEDGE BASE)
LEGO Education identifies a gap where teachers recognise AI literacy but feel unprepared, urging empowerment and resource provision; similar calls appear in global capacity-building discussions for AI education [S63][S66].
Equity and localization are necessary to avoid widening the AI divide
Speakers: Tom Hall, Saadhna Panday, Atish Joshua Gonsalves
Localization of AI tools into relevant languages is essential (Tom Hall)
Urban‑rural AI divide threatens equitable education outcomes (Saadhna Panday)
Frugal, screen‑free, age‑appropriate AI concepts enable learning in resource‑limited settings (Atish Joshua Gonsalves)
All three speakers stress that AI tools must be adapted to local languages, low-resource contexts and frugal implementations to ensure that children in rural or underserved areas are not left behind. [381-385][162-168][160-178][238-250]
POLICY CONTEXT (KNOWLEDGE BASE)
UNICEF’s child-rights framework and the Inclusive AI dialogue stress that AI initiatives must be localized and equitable to prevent deepening existing digital divides [S65][S54][S66].
AI should be a tool for creation, not passive consumption; avoid the “magic box” trap
Speakers: Tom Hall, Richa Menke, Saadhna Panday
Children should be enabled to create their own “magic” rather than consume it passively (Tom Hall)
Tension between efficiency and imagination; over‑reliance on AI may curb creativity (Richa Menke)
Emphasize child agency, critical thinking, and responsible AI use (Saadhna Panday)
The speakers agree that children must move from being passive users of AI outputs to active creators, preserving imagination and critical thinking, and that education should give them the tools to build rather than just receive. [191-194][229-236][130-135][175-179]
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions warn against framing AI as a mysterious “magic box” and instead promote its use as a creative instrument, aligning with responsible AI for children guidelines [S52][S48].
Similar Viewpoints
Both emphasize that children need to be active participants in shaping AI’s role in society, linking literacy with agency and equitable involvement. [1-6][160-178]
Speakers: Speaker 1, Saadhna Panday
AI literacy essential for future participation (Speaker 1)
AI must be equitable and keep child agency central (Saadhna Panday)
Unexpected Consensus
Current LEGO products deliberately do not incorporate generative AI due to safety concerns
Speakers: Richa Menke, Tom Hall
Safety and privacy are non‑negotiable; current products avoid AI use (Richa Menke)
Emphasize policy discussion and safety as core to AI adoption (Tom Hall), including the statement that LEGO does not use generative AI in its products
While many panelists promote AI-enabled learning tools, both Richa and Tom unexpectedly agree that LEGO’s existing offerings intentionally omit AI to meet a high safety bar, highlighting a cautious approach despite overall enthusiasm for AI in education. [306-308][361-363]
POLICY CONTEXT (KNOWLEDGE BASE)
LEGO’s public statements on prioritizing child well-being and avoiding generative AI in current offerings illustrate a precautionary approach consistent with industry safety recommendations [S62][S63].
Overall Assessment

The panel shows strong convergence on the need for AI literacy grounded in fundamentals, hands‑on and play‑based pedagogy, rigorous safety and privacy safeguards, and robust teacher support. Consensus also exists on equity, localization and the danger of treating AI as a magical black box. The only notable divergence is the degree of optimism about deploying AI now versus a more cautious stance, yet even that is bridged by shared safety concerns.

High consensus across most thematic areas, indicating a shared vision that AI education must be foundational, safe, equitable and teacher‑driven. This consensus suggests that future policy and product development can build on these common principles to advance inclusive AI literacy.

Differences
Different Viewpoints
Contradiction over whether LEGO Education products currently incorporate AI functionality
Speakers: Atish Joshua Gonsalves, Richa Menke
Lego product delivers safe, hands‑on, collaborative AI experiences
Safety and privacy are non‑negotiable; current products avoid AI use
Atish describes a LEGO Education offering that runs AI locally on devices, providing safe, hands-on collaborative experiences [31-34][36-41]. Richa counters that none of LEGO’s current products actually employ AI, citing safety and privacy as the reason for deliberately omitting it [306-314].
Whether AI should be deployed in classrooms now with safeguards versus being held back until safety is fully assured
Speakers: Tom Hall, Richa Menke
Teach AI fundamentals, not “magic box” perception
Children should be enabled to create their own “magic” rather than consume it passively
Safety and privacy are non‑negotiable; current products avoid AI use
Tom argues for introducing AI tools in schools, emphasizing that children need to understand fundamentals and be given the means to create their own solutions, while maintaining safety through policy dialogue and teacher support [187-194][229-236]. Richa maintains a cautious stance, stating that LEGO’s current products avoid AI altogether because safety and privacy are non-negotiable, suggesting a pause on AI integration until those concerns are resolved [306-314].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors broader policy tensions: some experts argue for immediate, safeguarded rollout, while others cite hallucination risks and the need for robust safeguards before classroom adoption [S57][S60][S58].
Impact of AI on children’s imagination and creativity – catalyst or constraint
Speakers: Richa Menke, Tom Hall
Tension between efficiency and imagination; over‑reliance on AI may curb creativity
Children should be enabled to create their own “magic” rather than consume it passively
Richa warns that AI’s efficiency (quick answers) can suppress imagination and confidence, arguing that early personalization may limit creative development [130-135]. Tom counters that AI, when paired with hands-on learning, can empower children to build their own “magic” and thus foster creativity rather than diminish it [191-194][229-236].
Unexpected Differences
Direct contradiction about the presence of AI in LEGO’s current product line
Speakers: Atish Joshua Gonsalves, Richa Menke
Lego product delivers safe, hands‑on, collaborative AI experiences
Safety and privacy are non‑negotiable; current products avoid AI use
While Atish presents a LEGO Education solution that runs AI locally on devices, Richa explicitly states that none of LEGO’s current products employ AI, citing safety and privacy concerns. This stark inconsistency was not anticipated given the shared corporate context.
Differing views on AI hallucinations as a playful feature versus a risk to be demystified
Speakers: Richa Menke, Tom Hall
Play‑driven imagination can harness AI’s creative potential
Teach AI fundamentals, not “magic box” perception
Richa frames AI hallucinations in generative models as potentially playful, enriching imagination, whereas Tom treats the “magic box” metaphor as a risk that must be stripped away through foundational teaching. The contrast between seeing AI’s unpredictability as a feature versus a hazard was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Research on AI hallucinations highlights the danger of presenting fabricated outputs as playful features, urging demystification and trust-building measures [S57].
Overall Assessment

The panel shows strong consensus on the importance of AI literacy, child agency, and safety, but notable disagreements arise around the actual use of AI in LEGO products, the timing of AI integration in classrooms, and the perceived impact of AI on creativity. These disputes centre on technical implementation versus policy caution, and on whether AI can be a creative catalyst or a potential inhibitor.

Moderate – while participants share overarching goals (equitable, safe AI education), they diverge on concrete approaches (immediate AI deployment with safeguards vs. postponement, and even on whether AI is present at all). This level of disagreement suggests that future collaborations will need clear alignment on product roadmaps and shared safety standards to avoid mixed messaging.

Partial Agreements
All three agree that safety, privacy, and trust are essential for AI in education, but they differ on how to achieve it: Tom focuses on policy dialogue and teacher‑led discussions [187-194][229-236]; Saadhna stresses rights‑based design principles of trust, transparency, and privacy [310-314][300-304]; Atish emphasizes technical safeguards built into the product (local processing, data provenance) [31-34].
Speakers: Tom Hall, Saadhna Panday, Atish Joshua Gonsalves
Emphasize policy discussion and safety as core to AI adoption
Trust, transparency, and privacy must underpin all child‑focused AI tools
Built‑in safety, fairness, transparency, and on‑device privacy safeguards
All agree that teachers need support to deliver AI literacy. Tom proposes a concrete AI toolkit and training days for teachers [344-350]; Saadhna calls for broader empowerment of educators with resources and capacity building [205-210]; Asha asks for affordable, possibly funded, teacher‑training programmes for low‑income schools in India [332-342]. The divergence lies in the scale and funding mechanisms suggested.
Speakers: Tom Hall, Saadhna Panday, Asha Nanavati
Teachers require dedicated AI training and resources to scale implementation
Empowering educators is key to delivering effective AI literacy
Need affordable teacher training and support for AI in Indian charitable schools
Both champion hands‑on, tactile learning. Tom cites research showing manipulatives improve spatial awareness and math mastery [263-270]; Atish describes a LEGO product that provides collaborative, hands‑on AI activities built on universal design principles [31-34][36-41]. They differ in focus: Tom emphasizes the pedagogical research base, while Atish highlights product‑level implementation.
Speakers: Tom Hall, Atish Joshua Gonsalves
Manipulatives and tactile learning deepen engagement and mastery
Lego product delivers safe, hands‑on, collaborative AI experiences
Takeaways
Key takeaways
AI literacy is essential for future participation and must focus on fundamentals rather than treating AI as a “magic box”.
Hands‑on, collaborative, tactile learning (e.g., LEGO bricks) deepens engagement and helps children build real AI understanding.
Safety, privacy, fairness, and transparency are non‑negotiable; current LEGO products run AI locally or avoid AI until standards are met.
Equity and access are critical: there is a stark urban‑rural divide and a need for multilingual, low‑cost solutions and teacher support.
Teachers need dedicated professional development, resources, and scaffolding (teacher portal, 5E model) to scale AI literacy.
Balancing imagination, agency, and risk: children should create their own “magic” and retain agency, not be passive consumers.
Policy discussion with children is important; involving them in shaping AI guidelines empowers agency.
Frugal, screen‑free, age‑appropriate approaches can introduce AI concepts where resources are limited.
Resolutions and action items
LEGO Education will launch its new computer‑science and AI product in schools (April rollout) with built‑in safety safeguards.
A teacher portal with curriculum, lesson plans, and scaffolding (5E model) will be provided to support educators.
An AI toolkit/template for classroom policy discussions will be made available for teachers to facilitate dialogue with students.
LEGO will showcase the product hands‑on at the conference booth (Hall 3) for educators to try.
Commitment to keep AI processing on‑device, ensuring no data leaves the device and maintaining privacy.
Future plans include localization of materials into additional languages and continued research on safety/ethics before adding generative AI to products.
Unresolved issues
Concrete strategies for delivering AI literacy in rural, multilingual, low‑resource classrooms (e.g., Rajasthan, Jharkhand) remain undefined.
Funding and scalable teacher‑training programs for charitable schools in India (as raised by Asha Nanavati) were not resolved.
Specific resources or guidelines for parents to support home‑based AI learning were requested but not provided.
Determination of the appropriate age or stage to introduce generative AI versus screen‑free concepts is still open.
An evidence‑generation and evaluation framework before large‑scale rollout was mentioned as needed, but no plan was set.
How to balance efficiency versus imagination in practice (preventing over‑reliance on AI) remains an ongoing tension.
Suggested compromises
Adopt a “pause and discuss” approach: hold policy conversations with children before deploying new AI tools.
Start with screen‑free, brick‑based teaching of computational concepts, then gradually introduce AI features when safety is assured.
Provide low‑tech AI discussion toolkits that do not require heavy hardware, allowing use in resource‑constrained settings.
Combine structured (curriculum‑based) activities with unstructured play to satisfy both learning outcomes and creative exploration.
Thought Provoking Comments
AI is like taxes, it’s unavoidable and if you don’t learn to evolve with it you’re gonna be left behind.
Frames AI adoption as an inevitable societal shift, emphasizing urgency for literacy and policy involvement, which sets the stakes for the entire discussion.
Established the central premise that AI literacy is not optional, prompting subsequent speakers to justify why education systems must act now and to propose concrete strategies.
Speaker: Speaker 1
We need to give the child the screwdriver to take that box apart and really understand what’s going on under the cover… we don’t want them to be passive consumers of AI, but to be designers of what is to come.
Uses a vivid metaphor to shift the view of AI from a magical black‑box to a system that can be deconstructed and rebuilt, highlighting the need for deep, hands‑on understanding.
Redirected the conversation from surface‑level tool use to foundational literacy, leading others (e.g., Atish, Richa) to discuss curriculum design, hands‑on learning, and the importance of building rather than just using AI.
Speaker: Tom Hall
Childhood is a developmental window that closes; what enters that window shapes who we become. AI for play should expand imagination, not shortcut it.
Introduces the ethical tension between efficiency and imagination, questioning whether AI might diminish creative struggle that is essential for development.
Created a turning point where the panel moved from describing products to debating the broader developmental implications of AI, prompting follow‑up comments about bias, personalization, and agency.
Speaker: Richa Menke
We have to ask children what kind of conversation they want to have with AI and let them think through policy questions themselves.
Advocates for child‑centered policy co‑creation, turning the discussion toward participatory governance rather than top‑down implementation.
Shifted the tone from product showcase to democratic engagement, influencing Saadhna’s emphasis on equity and prompting audience questions about parental guidance and community involvement.
Speaker: Tom Hall
Even in the most resource‑constrained settings we can teach computational concepts like probability and loops without screens—using bricks or other tangible tools.
Challenges the assumption that AI education requires high‑tech hardware, introducing the concept of “frugal AI” and screen‑free pedagogy.
Opened a new line of discussion on scalability and inclusivity, leading Saadhna to raise concerns about rural implementation and prompting suggestions for low‑cost, teacher‑led approaches.
Speaker: Atish Joshua Gonsalves
If we optimize AI systems for engagement we get more attention; if we optimize for childhood we get potential. What we choose to optimize matters.
Poses a strategic question about the fundamental goals of AI design, reframing the conversation around value alignment rather than technical features.
Deepened the analysis by introducing a policy‑level dilemma, causing other speakers to reflect on safety, privacy, and the long‑term impact of AI on children’s development.
Speaker: Richa Menke
Safety and privacy are non‑negotiable; we deliberately do not embed AI in our current LEGO products until we are sure the bar is met.
Provides a concrete stance that contrasts with the enthusiasm for AI integration, highlighting responsible product development and risk aversion.
Tempered the earlier optimism, prompting Tom and Atish to discuss scaffolding, teacher training, and the timeline for safe AI deployment.
Speaker: Richa Menke
Equity is a core concern: AI is reaching urban children in Delhi but not a tribal girl in rural Jharkhand. We need scalable, evidence‑based solutions that don’t widen inequality.
Brings the social justice dimension into focus, reminding the panel that technological solutions must address systemic disparities.
Steered the conversation toward practical challenges of deployment in diverse contexts, leading to questions about language localization, teacher capacity, and frugal AI approaches.
Speaker: Saadhna Panday
Hands‑on, collaborative learning—using the 5E model and open‑ended design challenges—creates the ‘space between question and answer’ where magic and inspiration happen.
Links pedagogical theory to concrete classroom practice, emphasizing the importance of structured yet open learning experiences for AI literacy.
Provided a practical framework that other speakers referenced when discussing curriculum design, reinforcing the panel’s consensus on experiential learning.
Speaker: Atish Joshua Gonsalves
We must not rush to put the fastest, best model in kids’ hands; we should consider what is right for them, even in well‑resourced settings.
Reiterates caution against premature adoption of advanced AI, highlighting ethical responsibility over technological hype.
Reinforced earlier safety concerns, influencing the final remarks about pausing, reflecting, and ensuring that any AI integration aligns with child‑centric values.
Speaker: Atish Joshua Gonsalves
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from a product‑centric showcase to a nuanced debate about ethics, equity, and pedagogy. Early framing of AI as inevitable set a sense of urgency, while Tom Hall’s screwdriver metaphor reframed AI as a tool to be deconstructed, prompting deeper exploration of foundational literacy. Richa Menke’s focus on imagination versus efficiency and the optimization question introduced a strategic, values‑based lens, steering the panel toward considerations of developmental impact and societal goals. Contributions from Atish and Saadhna highlighted practical pathways for inclusive, low‑tech implementation and the stark equity gaps that must be addressed. Repeated emphasis on safety, privacy, and responsible rollout acted as a grounding force, ensuring that enthusiasm for AI did not eclipse caution. Collectively, these thought‑provoking comments redirected the dialogue toward child‑centered, equitable, and ethically sound AI education, shaping a balanced and forward‑looking conclusion.

Follow-up Questions
How can AI literacy be implemented effectively in rural, multilingual, multilevel classrooms such as those in rural Rajasthan?
Seeks strategies to make AI education equitable and relevant for underserved, linguistically diverse settings.
Speaker: Saadhna Panday
What resources and guidance can be provided to parents to support AI learning at home, given schools adapt slowly?
Parents need structured and unstructured play materials and advice to build a home AI curriculum while schools lag behind.
Speaker: Nikhil Bawa
What recommendations exist for responsible AI play adoption beyond the classroom, especially in home environments?
Raises concern about rapid, competitive AI adoption at home and asks for best‑practice guidance.
Speaker: Nikhil Bawa
Is LEGO planning initiatives in India to support teacher training and AI safety practices for charitable schools?
Requests localized support and capacity‑building for schools with limited funding.
Speaker: Asha Nanavati
How can we make AI learning interactive and supportive of creativity in the classroom?
Looks for methods to move beyond passive AI outputs toward engaging, creative experiences for students.
Speaker: Saadhna Panday
What should AI systems be optimized for (e.g., engagement vs childhood development) and how does that choice affect outcomes?
Highlights the need to align AI objectives with child development rather than merely maximizing attention.
Speaker: Richa Menke
What evidence is needed to assess AI’s impact on learning outcomes and equity before scaling deployments?
Calls for rigorous research to avoid unintended harms and ensure equitable benefits.
Speaker: Saadhna Panday
What safety, privacy, and ethical standards should govern AI‑enabled children’s products, especially regarding local vs cloud processing?
Emphasizes the necessity of non‑negotiable safeguards for child wellbeing in AI products.
Speaker: Richa Menke
How effective are current teacher training and professional development programs for integrating AI literacy into curricula?
Points out the existing gap in teacher preparedness and the need to evaluate training models.
Speaker: Tom Hall
How can bias in AI models trained on limited or non‑representative data be identified and mitigated in educational contexts?
Bias was demonstrated in the demo; systematic study of mitigation techniques is needed.
Speaker: Atish Joshua Gonsalves
What are the feasibility and impact of frugal, screen‑free AI education approaches for low‑resource settings?
Explores age‑appropriate, resource‑light methods for teaching AI concepts without screens.
Speaker: Atish Joshua Gonsalves
What are the long‑term effects of using AI hallucinations as playful features in children’s play?
Considers whether generative AI ‘mistakes’ can be beneficial or harmful in play contexts.
Speaker: Richa Menke
How does hands‑on, manipulatives‑based learning compare to screen‑based methods in fostering AI understanding and retention?
Claims deeper engagement with physical tools; needs empirical validation.
Speaker: Tom Hall
How can multilingual AI tools and automated translation be ensured to be high‑quality for diverse classroom contexts?
Ensures AI resources are accessible and effective across languages.
Speaker: Tom Hall
What is the optimal balance between child agency and scaffolding in AI‑enhanced curricula (e.g., using the 5E model)?
Seeks frameworks that support autonomy while providing necessary guidance.
Speaker: Atish Joshua Gonsalves
How can AI literacy be implemented effectively in humanitarian contexts such as refugee camps?
Draws on experience with UN Refugee Agency; needs adaptable, low‑resource solutions.
Speaker: Atish Joshua Gonsalves

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Scaling Innovation: Building a Robust AI Startup Ecosystem

Session at a glance: summary, keypoints, and speakers overview

Summary

The event was a Startup Felicitation Ceremony organized by the Software Technology Parks of India (STPI) to honor startups for excellence in revenue, funding, employment, women participation, innovation and AI impact [1-4]. Over thirty startups were called to the stage and awarded certificates and trophies in categories such as highest revenue (up to ₹25 crore and up to ₹50 crore), highest funding, highest employment, women employment, AI-based impact and most promising or innovative ventures [5-62].


Devika Chandrasekaran, co-founder of Fuselage Innovations, thanked STPI for early validation through the Scout 2021 program, highlighted the company’s drone solutions for agriculture, defence and disaster management, and noted recent recognition with a National Startup Award and a presentation to the Prime Minister [68-76]. Dr. Soumya Shukla of TectoCell described its AI-powered diagnostic platform that combines radiology and DNA sequencing, credited STPI’s assistance with regulatory compliance, data acquisition and global collaborations, and expressed confidence in scaling the technology worldwide [80-86]. Arita Dalan, representing SecurTech, explained the firm’s mission to simplify cybersecurity for enterprises across sectors, and acknowledged STPI’s role in providing industry connections, investor access and mentorship [90-101].


Kirty and Milind Datar presented CaneBot, an AI-driven food-robotics system that produces fresh sugarcane juice autonomously, emphasizing its health benefits, farmer linkages and expansion to other beverages, while crediting STPI’s mentorship, exposure platforms and credibility boost for their scalability plans [104-115]. Noor Fatima and Meenal Gupta of EZO5 Solutions outlined their AI platform Imagix AI for precision oncology treatment planning, recounted a critical cash-flow crisis that was alleviated by STPI’s support and followed by rapid growth, and mentioned recent interest from the Prime Minister and Bill Gates for global expansion [118-133].


After the startup presentations, STPI dignitaries, including the Director of STPI Gurugram, the Additional Director, the Joint Director and the Director of Startups and Innovation, were presented with mementos by senior officials, accompanied by applause [134-145]. Shri Praveen Kumar delivered a vote of thanks, praising the contributions of the Director General of NPC, the Director of Startups and Innovation, Geetika Dayal and other partners for fostering a collaborative ecosystem and reinforcing the relevance of Indian innovation across tiers [147-156]. The ceremony concluded with a group photograph of all felicitated startups and dignitaries, with the moderator inviting directors and senior women leaders to join the stage for the photo session [159-166].


The moderator emphasized that the innovators’ resilience and contributions to India’s digital economy serve as inspiration for the broader startup community [63-65]. Overall, the event highlighted STPI’s pivotal role in nurturing early-stage ventures, facilitating funding, regulatory guidance and global exposure, thereby underscoring the importance of institutional support for scaling Indian technology startups [63-65][147-156].


Keypoints


Major discussion points


Recognition of startups across multiple performance categories – The ceremony highlighted startups for “highest revenue,” “highest funding raised,” “highest employment generation,” “women employment,” “AI-based impact,” and “most innovative” achievements, with each company called to the stage and applauded [5-27][33-62].


Founders sharing their entrepreneurial journeys and impact – Several founders narrated how their ventures grew with STPI’s backing:


– Devika Chandrasekaran (Fuselage Innovations) described early validation from the Scout 2021 program and recent national awards [68-76];


– Dr. Soumya Shukla (TectoCell) explained AI-driven diagnostic solutions and the regulatory support received from STPI [80-86];


– Arita Dalan (SecurTech) outlined the cybersecurity platform and industry-connect opportunities facilitated by STPI [90-100];


– Milind & Kirty Datar (CaneBot) detailed an AI-powered food-robotics platform and how STPI mentorship and investor exposure accelerated scaling [105-113];


– Noor Fatima & Meenal Gupta (EZO5 Solutions) recounted how STPI’s intervention resolved a cash-flow crisis, after which the company processed a million scans and drew high-level interest from the Prime Minister and Bill Gates [122-130][131-133].


STPI’s role as an ecosystem enabler – Across the testimonies, speakers repeatedly credited STPI for validation, mentorship, regulatory assistance, global networking, and credibility that unlocked funding and market access [74-75][83-84][109-113][124-125]; the closing thank-you speech reinforced this by praising STPI’s “meaningful role” and “platforms like STPI” that make the session possible [147-155].


Formal closing with mementos, gratitude, and group photography – After the founder talks, dignitaries presented mementos, a vote of thanks was delivered, and all participants were invited for group photographs, underscoring the collaborative spirit of the event [134-160][161-170].


Overall purpose / goal


The event was organized to celebrate and publicly recognize high-performing startups within the STPI ecosystem, showcase their achievements, and highlight the supportive role of STPI and related government bodies in fostering innovation, funding, employment, and gender inclusion. By giving founders a platform to share their stories, the ceremony aimed to inspire other entrepreneurs and reinforce the collaborative network among startups, mentors, investors, and policymakers.


Overall tone and its evolution


– The discussion began with a formal, ceremonial tone, announcing categories, inviting startups, and prompting applause [1-4][5-62].


– It shifted to a personal, inspirational tone as founders recounted their journeys, expressing gratitude and pride [68-133].


– The tone then moved back to formal appreciation, with the moderator and dignitaries delivering thank-you remarks and logistical instructions for mementos and photographs [134-170].


Throughout, the atmosphere remained positive, celebratory, and supportive, with no noticeable negative or contentious moments.


Speakers

Arita Dalan – Area of expertise: Cybersecurity solutions for enterprises. Role: Representative of SecureTech (SecurTech) IT Solutions Private Limited. [S1][S2][S3]


Devika Chandrasekaran – Area of expertise: Drone technology for agriculture, defence, and disaster management. Role: Co-founder of Fuselage Innovations. [S4][S5]


Kirty Datar – Role: Representative of Caneboard Solutions Private Limited. [S6]


Noor Fatima – Area of expertise: AI-powered precision oncology treatment-planning platform (Imagix AI). Role: Co-founder of EZO5 Solutions. [S8][S9][S10]


Dr. Saumya Shukla – Area of expertise: AI-powered diagnostic solutions at the intersection of radiology and DNA sequencing (TectoCell). Role: (Founder/Lead) of TectoCell. [S11]


Meenal Gupta – Role: Founder of EZO5 Solutions. (Area of expertise aligns with AI-driven oncology imaging platform)


Milind Datar – Area of expertise: AI-powered food-robotics platform for autonomous beverage preparation (CaneBot). Role: Representative of Caneboard Solutions Private Limited. [S6]


Moderator – Role: Moderator of the Startup Felicitation Ceremony. [S17][S18]


Shri Praveen Kumar – Title: Joint Director, STPI. Role: Presented mementos and delivered the vote of thanks. [S20][S21]


Additional speakers:


DG (Director General) STPI – Title: Director General, STPI. Role: Presented certificates and trophies to startups.


Nirja Shekhar – Title: Director General, National Productivity Council (NPC). Role: Received a memento from STPI.


Shri Ashok Gupta – Title: Director, STPI, Gurugram. Role: Presented a memento to Nirja Shekhar.


Shri Atul Kumar Singh – Title: Additional Director, STPI. Role: Presented a memento to Shri Bala MS.


Shri Bala MS – Role: Recipient of a memento (title not specified).


Geetika Dayal – Title: Representative from TiE Delhi NCR. Role: Recipient of a memento.


Rakesh Dubey – Title: Director, Startups and Innovation, STPI. Role: Recipient of a memento.


Kavita (ma’am) – Role: Invited for group photograph (title not specified).


Kishori (ma’am) – Role: Invited for group photograph (title not specified).


Full session report: comprehensive analysis and detailed insights

The ceremony opened with the moderator announcing a highly anticipated Startup Felicitation segment, noting that the event would honour startups nurtured within the Software Technology Parks of India (STPI) ecosystem for achievements in revenue, funding, employment, women’s participation, innovation and AI-driven impact [1-4]. One by one, dignitaries – the Director-General of STPI, the Director-General of the National Productivity Council (Nirja Shekhar) and other officials – called each company to the stage, presented certificates and trophies, and prompted applause for every accolade [5-62].


The awards covered a wide spectrum of performance metrics. Phoenix Marine Exports and Solutions Private Limited received the “Highly Effective” award for the highest revenue (up to ₹25 cr) and impact in Tier 2 and Tier 3 regions [5-8]. Vimeo Consulting Private Limited was recognised for the highest funding raised within the same revenue bracket [10-13]. Swadha Agri Private Limited earned the honour for generating the most employment, while Strangify Technologies Private Limited was celebrated for the highest number of women employed [15-23]. Suhora Technologies Private Limited and Puvation Technologies Solutions Private Limited were lauded for top revenue (up to ₹50 cr) and funding respectively [24-32]. Sikwara Tech IT Solutions Private Limited stood out across multiple categories – employment, women employment and AI-based impact – receiving a special recognition for multi-dimensional excellence [33]. Atmik Bharat Industries Private Limited and Mobile Pay E-Commerce Private Limited were commended for beneficiary impact, the latter in a second-place position [34-38]. Devnagri AI Private Limited and Dactrocell Healthcare and Research Private Limited were honoured for AI-based impact and innovative healthcare solutions respectively [39-47]. EZO5 Solutions Private Limited and Connector Foods Private Limited were recognised as the most promising and most innovative startups (second position) [48-57], and Pew’s Ledge Innovations Private Limited received a similar “most promising” accolade [58-62]. Each citation was followed by applause [9-14].


Following the felicitation, the moderator invited selected founders to share their entrepreneurial journeys. Devika Chandrasekaran, co-founder of Fuselage Innovations, recounted that early validation received through STPI’s Scout 2021 programme (described as more than funding: a confidence-boosting endorsement) propelled the venture forward [68-75]. She highlighted the company’s drone-based solutions for agriculture, defence and disaster management, its service to over 10,000 Indian farmers, and recent national recognition, including a National Startup Award and a presentation before Prime Minister Narendra Modi [71-77].


Dr. Soumya Shukla of TectoCell explained the firm’s AI-powered diagnostic platform that fuses radiology with DNA sequencing to combat drug resistance and streamline clinical trials. She credited STPI for facilitating regulatory compliance, global collaborations and machine-readable data acquisition, which together positioned the startup to scale “from India for the world” [80-86].


Arita Dalan, representing the cybersecurity startup SecurTech, thanked STPI for the industry connections, investor access and mentorship that have been pivotal to SecurTech’s growth [90-101].


The Datar siblings, Kirty and Milind, presented an AI-driven food-robotics platform that autonomously prepares fresh sugarcane juice in under 30 seconds, thereby replacing unhygienic roadside drinks and creating direct market linkages for sugarcane farmers with fair pricing and circular-economy benefits [105-108][110-113]. They attributed their rapid scaling to STPI’s mentorship, peer network, exposure programmes such as Tycon Exposure, and the credibility boost that came from STPI’s recognition [109-115][104].


Noor Fatima and Meenal Gupta of EZO5 Solutions spoke about their AI platform Imagix AI, which delivers precision oncology treatment planning. They narrated a critical cash-flow crisis when only two months of runway remained; STPI’s intervention enabled them to raise funds and avoid collapse [122-125]. Since incorporation, the startup has processed roughly one million scans, flagged around 4,000 TB cases and six still-treatable lung-cancer cases, and reduced radiotherapy planning time from a month to a week [126-130]. The founders also noted high-level interest from the Prime Minister and an invitation from Bill Gates via Microsoft, signalling a rapid transition from a national to a global stage [131-133].


Across all founder testimonies, a clear consensus emerged that STPI functioned as a pivotal enabler – providing validation, mentorship, regulatory assistance, investor networking and emergency financial support – which each speaker identified as a decisive factor in their success [68-75][82-85][90-94][104][110-113][124-125][147-152]. The narratives also underscored the tangible societal benefits of AI-driven technologies: drones improving agricultural productivity and disaster response, AI diagnostics enhancing clinical accuracy, cybersecurity frameworks safeguarding critical sectors, autonomous food-robotics ensuring hygiene and farmer income, and AI oncology accelerating treatment planning and disease detection [71-77][82-85][105-108][122-130].


The ceremony concluded with a formal presentation of mementos to senior dignitaries – Director Ashok Gupta (STPI Gurugram), Additional Director Atul Kumar Singh, Joint Director Praveen Kumar and others – each accompanied by applause [134-145]. Shri Praveen Kumar delivered a vote of thanks that praised the Director-General, National Productivity Council, the Director of Startups and Innovation, Geetika Dayal and other partners for fostering a collaborative ecosystem, and congratulated all honoured startups for demonstrating that Indian innovation is both scalable and globally relevant [147-156]. The moderator then invited the other directors as well as senior women leaders, Kavita ma’am and Kishori ma’am, to join the group photograph.


Overall, the expanded summary captures the ceremony’s structured recognition of diverse achievements, the founders’ detailed accounts of how STPI’s ecosystem catalysed their growth, the cross-sectoral impact of AI-enabled solutions, and the concluding affirmation of collaborative momentum that underpins India’s digital future.


Session transcript: complete transcript of the session
Moderator

Dignitaries, please be seated. So we now come to one of the most awaited segments, the Startup Felicitation Ceremony. Today, we recognize startups supported under the STPI ecosystem for excellence across revenue, funding, employment, women participation, innovation, and AI-led impact. I would like to request our honored dignitaries, DG sir, STPI, and Nirja Shekhar ma’am, Director General, National Productivity Council, to kindly come forward to present the certificate and trophy to our startups. I request these startups to kindly come on the stage as per the name announced. So the first name is, may I invite Phoenix Marine Exports and Solutions Private Limited to come on the stage. They are being recognized under the category Highly Effective: highest revenue up to 25 CR revenue and highest impact based on revenue, Tier 2 and Tier 3 region.

May I request DG STPI and DG NPC to please present the certificate and trophy. Once again, a big round of applause for their outstanding contribution. Now, may I invite Vimeo Consulting Private Limited to please come on the stage. They are being recognized for highest funding raised, up to 25 CR revenue category. Heartiest congratulations on your fundraising success. A big round of applause. A louder round of applause please. Now may I invite Swadha Agri Private Limited to the stage. They are being felicitated for highest employment generation up to 25 CR revenue category. Congratulations for generating valuable employment. A big round of applause. Thank you. Now may I invite Strangify Technologies Private Limited to please come on the stage.

They are being recognized for highest number of women employment, up to 25 CR revenue category. Well done for empowering women in the workforce. A big round of applause. A louder round of applause for women participation. Now our next startup is Suhora Technologies Private Limited. May I invite Suhora Technologies Private Limited to this stage. They are being recognized for highest revenue, up to 50 CR revenue category. Congratulations on your outstanding business performance. A big round of applause. Now I invite Puvation Technologies Solutions Private Limited. They are being felicitated for highest funding raised, up to 50 CR revenue category. Applause for your impressive funding milestone. A big round of applause. Now I invite our next startup, Sikwara Tech IT Solutions Private Limited, to come on the stage. They are being recognized under multiple categories: highest employment up to 50 CR revenue category, highest women employment up to 50 CR revenue category, and highest AI-based impact based on revenue, a special recognition for excellence across multiple dimensions. A big round of applause. Now I invite our next startup, Atmik Bharat Industries Private Limited, to the stage. They are being recognized for highest impact based on beneficiaries. Congratulations for touching countless lives.

A big round of applause. May I invite Mobile Pay E-Commerce Private Limited. They are being felicitated for highest impact based on beneficiaries, as second position. Well done for your meaningful outreach. A big round of applause. Thank you. Now I invite another startup, Devnagri AI Private Limited, to please come on the stage. They are being recognized for highest AI-based impact based on revenue, as second position. Congratulations on leveraging AI for impact. A big round of applause. Thank you. Thank you so much, DG sir, for attending. Now I invite our next startup, Dactrocell Healthcare and Research Private Limited. They are being recognized for most innovative startup. Applause for breakthrough healthcare innovation.

A big round of applause. Now I invite our next startup, EZO5 Solutions Private Limited. Please come on the stage. They are being felicitated as most promising innovation. Please, please. A big round of applause. Thank you. Thank you. Connector Foods Private Limited, please come on the stage, a beautiful couple. They are being recognized as most innovative startup, as second position. Well done for creative excellence. A big round of applause. Finally, our last startup. May I invite Pew’s Ledge Innovations Private Limited. They are being recognized as most promising innovation, second position. Congratulations on your forward-looking journey. A big round of applause. A big round of applause for all our felicitated startups. Your innovation, resilience and contribution to India’s digital economy truly inspire us all.

May I request our dignitaries to kindly resume their seats on the dais. We will now invite our selected startups to briefly share their journey with us. So may I invite Fuselage Innovations Private Limited to kindly come on the stage and share your journey with us.

Devika Chandrasekaran

Hi everyone, my name is Devika Chandrasekaran. I’m the co-founder of Fuselage Innovations. It’s truly an honor to stand on a stage today being felicitated by STPI. This moment feels very special because we started our journey with STPI in our early days. Back in 2021, we participated in a program called Scout 2021. At that time, we were building our prototype. The support we received through the program was not just funding, it was validation. That recognition gave us the confidence to push forward. We are proud to be a part of this. Today, Fuselage Innovations manufactures drones for agriculture, defence and disaster management applications. We are working with more than 10,000 farmers across India, helping them to improve productivity and efficiency through drone technology. We are also contributing to defence, disaster management and maritime operations, serving critical national needs. Last month, we were deeply honoured to receive a National Startup Award, and we got the opportunity to present our journey in front of our Honourable Prime Minister, Narendra Modi sir. I would like to sincerely thank STPI and everyone involved in the journey for believing in a startup like us. The ecosystem, the encouragement and the early trust make a huge difference in our journey. Thank you so much.

Moderator

Thank you for sharing your inspiring story. Now may I invite Dr. Saumya Shukla to kindly come on the stage and share your startup journey with us.

Dr. Saumya Shukla

Good evening, everyone. My name is Dr. Soumya, and I’m really glad to be a part of this prolific platform today. Just very quickly, I’d like to walk you through what we build. So at TectoCell, we build AI-powered diagnostic solutions at the intersection of radiology and DNA sequencing, while addressing the huge havoc of drug resistance and robust clinical trials spanning across India. Facilitated by the Software Technology Parks of India, we’ve been able to sort of exceptionally benchmark our clinical accuracy, which sort of amplifies the reliability of our products. And the continued commitment of Software Technology Parks of India to sort of help us navigate through our regulatory compliances, get global collaborations, and also sort of get data acquisition which is machine-readable, is extremely noteworthy.

And this unique foundation sort of puts us in a very good position, in a very strengthful position to now sort of scale this globally, building from India for the world. So I’m very grateful for this. Thank you.

Moderator

Big round. Big round of applause. Thank you so much for sharing your story and journey with us. Now I invite Sequera Tech IT Solutions Private Limited to come on the stage and share your startup journey with us.

Arita Dalan

Hi everyone. They are one of the nurturing bodies which has done a lot of collaboration with the industries as well. They are one of the bodies which has given us an opportunity to talk to the investors as well. And there are various industry connects as well that have been established by the organization. And we are very sincerely thankful to the entire organization and the team of STPI as well. Just to give you a brief about SecurTech: SecurTech is a cybersecurity organization. Our mantra is to simplify security. We are securing the large enterprise organizations and mid-size organizations across the industry, whether it is pharma, banking and finance organizations, or even the small organizations which are currently establishing the digital landscape in the country, while they are being regulated by RBI and SEBI.

So, in a nutshell, we are providing them all the frameworks, security parameters, and the solutions as well, so that they can be empowered, they can be enabled, and they can secure their own infrastructure, platforms and the data that they are processing for the country or for the users that they are providing services to. So, whether it is a startup organization or even a large infrastructure organization, we are securing them. We are providing them end-to-end. Thank you. Thanks, everyone. Thank you.

Moderator

Now, I invite… Caneboard Solutions Private Limited to come on the stage and share your journey with us.

Kirty Datar

Good afternoon everyone and thank you so much STPI for this honor here today.

Milind Datar

What we have built is an AI-powered food robotics platform that prepares and serves fresh and hygienic beverages completely autonomously, without any human intervention. As our first application, we have built the world’s first fresh sugarcane juice robotic vending machine, which replaces the unsafe and unhygienic beverages being sold on the roadsides, because of which customers often end up choosing packaged drinks over fresh beverages.

Kirty Datar

Our technology integrates robotics, IoT-embedded systems, and predictive AI to deliver farm-to-consumer juice in under 30 seconds in a fully autonomous manner. Beyond consumers, this creates direct market linkages for sugarcane farmers, ensures fair pricing, and also supports the circular economy. We are now extending our platform beyond sugarcane juice to other fresh juices and smoothies, and positioning CaneBot as a product platform company in food robotics. STPI has played a very meaningful role in our journey. The mentorship and peer network helped us think beyond the product to scalability, governance, and global readiness. Platforms like Tycon Exposure, you know, through STPI gave us direct access to global investors and ecosystem partners, helping us sharpen our positioning as a deep tech company.

Most importantly, STPI’s recognition has strengthened our credibility with customers, investors, and government stakeholders. STPI is a great place to start your journey. We are very happy and very honored to be here today. And we thank you so much to STPI and everybody who is present here today. Thank you so very much.

Moderator

May I invite now EZO5 to kindly come on the stage and share your startup journey with us.

Noor Fatima

Hi, everyone. Good afternoon. I’m Noor Fatma, co-founder of EZO5 Solutions.

Meenal Gupta

Hi, I’m Meenal Gupta, founder of EZO5 Solutions.

Noor Fatima

At EZO5, we have built an AI-powered platform, Imagix AI, that does precision treatment planning for oncology cases. In our startup journey, there was a time, one and a half years back, when we had just two months of cash flow with us. We were thinking a lot about what to do, and that is when STPI came to our rescue and helped us raise money, and there has been no looking back since then. So in the past three years since we have been incorporated, we have processed around one million scans. In the last three months, we have scanned around 50,000 chest X-rays, where we have flagged around 4,000 cases of TB, cutting the transmission short.

We have flagged six cases of lung cancer where intervention was still possible. We have prepared 1,000 radiotherapy plans in the last three months, and we have cut short the treatment planning and start time from around one month to a week. So that is the impact we are making through the support of the whole ecosystem and STPI.

Meenal Gupta

And proudly I say that, with the impact that we have brought, even our Prime Minister Mr. Narendra Modi was interested, and he invited us to discuss our solution at IMC. And just the day before yesterday, we went global, because Bill Gates showed interest in our solution and invited us to Microsoft to show our solution, and he was discussing how he can help us. Thank you. So now we are going from local to global, serving the whole world. Thank you.

Moderator

Thank you to all the founders for sharing such inspiring stories. We now proceed with the presentation of mementos to our esteemed dignitaries. To begin with, may I request Shri Ashok Gupta sir, Director, STPI Gurugram, to kindly come on the stage. Sir will present the memento to Nirja Shekhar ma’am, Director General, NPC. A big round of applause. Thank you so much, sir, and thank you so much, ma’am. Next, may I request Shri Atul Kumar Singh sir, Additional Director, STPI, to kindly come on the stage and present the memento to Shri Bala MS. A big round of applause. Thank you. May I now request Shri Praveen Kumar sir, Joint Director, STPI, to kindly come on the stage and present the memento to Geetika Dayal ma’am.

A big round of applause. May I also request Shri Praveen Kumar sir, Joint Director, STPI, to kindly present the memento to Shri Rakesh Dubey sir, Director, Startups and Innovation, STPI. Thank you, sir. A big round of applause. Thank you. Now I would like to request Shri Praveen Kumar sir, Joint Director, STPI, to present the formal

Shri Praveen Kumar

vote of thanks. Respected dignitaries, speakers, startup founders, innovators, and ladies and gentlemen, on behalf of Software Technology Parks of India, it is our true privilege to thank each one of you for making this session focused, meaningful and definitely forward looking. Nirja Sekhar ma’am, thank you for your thoughtful reflections on productivity and growth; your perspective adds depth and direction both to our collective mission. Ma’am, we are truly encouraged to have your presence. Thank you, thank you so much, we are grateful for it. Shri Rakesh Dubey sir, thank you for your profound support, which has been both guiding and grounding, sir. Your constant encouragement and hands-on involvement in shaping the entire session together has helped us immensely, sir.

My sincere appreciation to Geetika Dayal madam, from TiE Delhi NCR, for your continued partnership and for reinforcing the importance of collaborative startup ecosystem building, madam. Thank you. Thank you, Mr. Bala, for bringing a sharp industry lens and the pragmatic approach that startups need and can directly relate to as they scale; your thoughts on the GCC are definitely going to help them all. To the startups, all the startups who were felicitated today, congratulations. Your achievement demonstrates that innovation from India, including Tier 1 and Tier 2, is both scalable and globally relevant. To all the founders who shared their journey, thank you for your candor and inspiration. Your stories remind us why platforms like STPI matter. And before I conclude, I sincerely appreciate my organizing team and every colleague who worked diligently behind the scenes to ensure the session came together seamlessly.

With that, I once again thank all of you and the dignitaries, and I request the dignitaries and startups to come forward and have a group photograph. Thank you. Thank you again. I request all the felicitated startups to kindly come on the stage and have the group photograph with all the dignitaries on the dais. Thank you.

Moderator

May I request the other directors as well to please come on stage and join us for the group photographs. Yes, Kavita ma’am, please come on the stage. I also request Kishori ma’am to please join us for the group photograph. Thank you. The floor is open; anyone can take a group photograph with anyone. Please come and join. The trophy cases are kept on the right side of the podium; I repeat, please collect the trophy cases. Thank you.


Related Resources: Knowledge base sources related to the discussion topics (19)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The ceremony opened with the moderator announcing a highly anticipated Startup Felicitation segment, noting that the event would honour startups nurtured within the Software Technology Parks of India (STPI) ecosystem for achievements in revenue, funding, employment, women’s participation, innovation and AI‑driven impact.”

The knowledge base explicitly describes the Startup Felicitation ceremony as recognizing startups supported under the STPI ecosystem across revenue, funding, employment, women participation, innovation and AI‑led impact.

Confirmed (high)

“Phoenix Marine Exports and Solutions Private Limited received the “Highly Effective” award for the highest revenue (up to ₹25 cr) and impact in Tier 2 and Tier 3 regions.”

S1 confirms that Phoenix Marine Exports and Solutions was recognized for the highest revenue impact in Tier 2 and Tier 3 regions.

Confirmed (high)

“Vimeo Consulting Private Limited was recognised for the highest funding raised within the same revenue bracket.”

S1 states that Vimeo Consulting was honoured for the highest funding raised, matching the report’s claim.

Confirmed (medium)

“Swadha Agri Private Limited earned the honour for generating the most employment.”

S1 notes that Swadha Agri was celebrated for generating the most employment.

Confirmed (medium)

“The dignitaries included the Director‑General of STPI, the Director‑General, National Productivity Council (Nirja Shekhar ma’am) and other officials.”

S2 mentions the presence of the National Productivity Council representative alongside STPI officials at the ceremony.

External Sources (66)
S1
Scaling Innovation Building a Robust AI Startup Ecosystem — -Arita Dalan: Role – Representative of SecurTech IT Solutions Private Limited; Area of expertise – Cybersecurity solutio…
S2
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — -Arita Dalan- Regional Head North, SecureTech IT Solutions Private Limited (cybersecurity)
S3
https://dig.watch/event/india-ai-impact-summit-2026/building-the-future-stpi-global-partnerships-startup-felicitation-2026 — Hi everyone. Good evening to everyone. So my name is Arita Dalal. I’m heading this region for North with SecureTech. I h…
S4
Scaling Innovation Building a Robust AI Startup Ecosystem — -Devika Chandrasekaran: Role – Co-founder of Fuselage Innovations; Area of expertise – Drone technology for agriculture,…
S5
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — -Devika Chandrasekaran- Co-founder, Fuselage Innovations (drone technology for agriculture, defense, disaster management…
S6
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — -Kirty Datar- Representative, Caneboard Solutions Private Limited -Milind Datar- Representative, Caneboard Solutions Pr…
S7
Scaling Innovation Building a Robust AI Startup Ecosystem — – Dr. Saumya Shukla- Kirty Datar
S8
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Hi, everyone. Good afternoon. I’m Noor Fatma, co-founder of EZO5 Solutions.
S9
https://dig.watch/event/india-ai-impact-summit-2026/scaling-innovation-building-a-robust-ai-startup-ecosystem — Hi, everyone. Good afternoon. I’m Noor Fatma, co-founder of EZO5 Solutions. We have flagged six cases of lung cancer w…
S10
https://dig.watch/event/india-ai-impact-summit-2026/building-the-future-stpi-global-partnerships-startup-felicitation-2026 — Hi, everyone. Good afternoon. I’m Noor Fatma, co-founder of EZO5 Solutions. So that is the impact we are making to the…
S11
S12
Scaling Innovation Building a Robust AI Startup Ecosystem — -Shri Ashok Gupta: Title – Director STPI Gurugram; Role – Dignitary presenting mementos Hi, I’m Meenal Gupta, founder o…
S13
https://dig.watch/event/india-ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — Accuracy is around 92%. So it is around 92 % to 99 % depending upon the data. complexity you can see this data we are wo…
S14
Founders Adda Raw Conversations with India’s Top AI Pioneers — Hello everyone, I am Meenal Gupta from EasyOPI Solutions and so nice to see you over here. Who all are founders over her…
S15
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — -Kirty Datar- Representative, Caneboard Solutions Private Limited -Milind Datar- Representative, Caneboard Solutions Pr…
S16
Scaling Innovation Building a Robust AI Startup Ecosystem — – Devika Chandrasekaran- Milind Datar – Dr. Saumya Shukla- Kirty Datar- Noor Fatima
S17
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S18
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S19
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – Frode Sørensen – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S20
Scaling Innovation Building a Robust AI Startup Ecosystem — -Shri Ashok Gupta: Title – Director STPI Gurugram; Role – Dignitary presenting mementos -Shri Atul Kumar Singh: Title -…
S21
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — “Today, we recognize startups supported under STPI ecosystem for excellence across revenue, funding, employment, women p…
S23
Panel 2 – Responding to Disruptions: Crisis Management and Recovery — Announcer: We’ll take a group photograph with the moderator and panelists for session two, concluding the panel session…
S24
Policy Network on Artificial Intelligence | IGF 2023 — Moderator – Prateek:Thanks, Jose. So I think you put it three points, right? There is the data, which is coming a lot fr…
S25
AI for Social Good Using Technology to Create Real-World Impact — First one is diagnosis and diagnosing TB in economically vulnerable communities isn’t easy. X -ray machines, sputum anal…
S26
Revolutionising medicine with AI: From early detection to precision care — It has been more than four years since AI was first introduced intoclinical trials involving humans. Even back then, it …
S27
AI tool helps detect lung cancer — Dianne Covey, a 69-year-old retired hospital worker from Farncombe,creditsan AI tool with helping to save her life after…
S28
Shaping Investment: Spurring Investment in Cyber Sector Start-Ups — AI presents significant opportunities but requires a focus on security. The cybersecurity sector has experienced impress…
S29
Cutting through Cyber Complexity / DAVOS 2025 — Different sectors have different cybersecurity needs and challenges
S30
Achieving the SDGs through secure digital transformation | IGF 2023 Open Forum #92 — Additionally, it is crucial to view cybersecurity as an investment rather than simply a cost. Cybersecurity should not b…
S31
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — High level of consensus with strong alignment on fundamental principles and practical approaches. This suggests the AI g…
S32
Laying the foundations for AI governance — High level of consensus on problem identification and broad solution directions, suggesting significant potential for co…
S33
Keynote Adresses at India AI Impact Summit 2026 — The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India partner…
S34
AI Meets Cybersecurity Trust Governance & Global Security — “I want to extend our sincere thanks to our partner, Global Partners Digital, for co‑organizing this session and for the…
S35
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — Sectors such as agriculture, food, clothing, shoes, and cosmetics are particularly vulnerable. These industries, which r…
S36
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — Applications range from advanced data analytics and automation to augmenting human capabilities in healthcare, agricultu…
S37
IndoGerman AI Collaboration Driving Economic Development and Soc — “Productivity and resilience.”[4]. “As Anandi said, we already have an MOU with Fraunhofer, which we are working togethe…
S38
Scaling Innovation Building a Robust AI Startup Ecosystem — -Collaborative Ecosystem Building: The event highlighted partnerships between STPI, National Productivity Council, and o…
S39
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — All speakers consistently acknowledge STPI’s vital role in providing comprehensive infrastructure, funding, policy suppo…
S40
Towards inclusive digital innovation ecosystems – do’s and don’ts and what next? — In summation, the text reinforces the collective wisdom within communities and the importance of creating environments c…
S41
[Tentative Translation] — 69 Under the European Green Deal, the EU formulated an investment plan in January 2020 with the aim of achieving zero g…
S42
WSIS Prizes Champions’ Ceremony — While this transcript primarily documents an awards ceremony rather than a traditional discussion, the most impactful co…
S43
WSIS Prizes Ceremony — The reinforcement of this message underscores the commitment to long-term partnerships and the bonding of community in t…
S44
Scaling Innovation Building a Robust AI Startup Ecosystem — Multiple startups were recognized across different revenue categories and achievement areas. Phoenix Marine Exports and …
S45
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — A big round of applause. May I also request Shri Atul Kumar Singh Sir, Additional Director, STPI to kindly come on the s…
S46
https://dig.watch/event/india-ai-impact-summit-2026/building-the-future-stpi-global-partnerships-startup-felicitation-2026 — Good evening, everyone. My name is Dr. Soumya, and I’m really glad to be a part of this prolific platform today. Just ve…
S47
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — Alina Ustinova: Hello, everyone. My name is Alina. I represent the Center for Global IT Cooperation, and today I want to…
S48
https://dig.watch/event/india-ai-impact-summit-2026/scaling-innovation-building-a-robust-ai-startup-ecosystem — Hi, everyone. Good afternoon. I’m Noor Fatma, co-founder of EZO5 Solutions. We have flagged six cases of lung cancer w…
S49
https://dig.watch/event/india-ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — Hello everyone, I am Meenal Gupta from EasyOPI Solutions and so nice to see you over here. Who all are founders over her…
S50
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — The conversation reinforced that effective digital regulation requires balanced leadership anchored in trust, inclusion,…
S51
Open Mic & Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S52
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S53
[Parliamentary Session Closing] Closing remarks — The tone of the discussion was formal yet collaborative and appreciative. There was a sense of accomplishment for the wo…
S54
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S55
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S56
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S57
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — The discussion maintained a consistently optimistic and collaborative tone throughout, characterized by mutual respect b…
S58
Launch / Award Event #159 Book Launch Netmundial+10 Statement in the 6 UN Languages — The tone was consistently celebratory, appreciative, and forward-looking throughout the session. Participants expressed …
S59
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S60
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S61
Any other business /Adoption of the report/ Closure of the session — In summary, the speaker artfully blended expressions of gratitude with recognition of collaborative efforts and a call f…
S62
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S63
Proposal to Transition the Stewardship of the — | P3.8.1 | Normative References …
S64
Meeting REPORT — An effective strategy and work plan should be established prior to tracking performance metrics. The analysis emphatica…
S65
The 80th session of the UN General Assembly (UNGA 80) – Day 2 — The debate covered a wide spectrum of pressing global issues. A central theme was the state of international peace and s…
S66
29, filed Jan. 22, 2010, at 9-10. — – The government should focus broadband R&D funding on projects with varied risk-return profiles, including a mix of…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M
Moderator
2 arguments, 47 words per minute, 1160 words, 1463 seconds
Argument 1
Ceremony highlights categories and honors top-performing startups (Moderator)
EXPLANATION
The moderator outlined the structure of the felicitation ceremony, specifying the various award categories such as revenue, funding, employment, women participation, and AI‑led impact. By announcing each startup and their respective achievements, the moderator emphasized the recognition of excellence within the STPI ecosystem.
EVIDENCE
The moderator opened the segment by announcing the startup felicitation ceremony and listed the criteria for recognition, including revenue, funding, employment, women participation, innovation and AI-led impact. He then called each startup to the stage, described the specific category for which they were being honored (e.g., highest revenue, highest funding, highest employment, highest women employment, highest AI-based impact) and prompted applause for their contributions [5-33].
MAJOR DISCUSSION POINT
Startup Recognition and Awards
Argument 2
Moderator’s final thanks and invitation for group photograph, reinforcing community spirit (Moderator)
EXPLANATION
At the close of the event, the moderator thanked the participants and dignitaries, and invited everyone to join for a group photograph, underscoring the sense of community and collective achievement. This closing remark aimed to cement the collaborative atmosphere fostered throughout the ceremony.
EVIDENCE
The moderator thanked the audience, requested the dignitaries to resume their seats, invited the startups to share their journeys, and later asked various directors and participants to come forward for group photographs, repeatedly expressing gratitude and encouraging collective participation [165-169].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A group photograph request at the close of the session is documented in [S23].
MAJOR DISCUSSION POINT
Moderator’s final thanks and invitation for group photograph, reinforcing community spirit
D
Devika Chandrasekaran
2 arguments, 125 words per minute, 212 words, 101 seconds
Argument 1
Early STPI program provided validation and confidence to launch the venture (Devika Chandrasekaran)
EXPLANATION
Devika explained that participation in the STPI’s Scout 2021 program gave her startup essential validation beyond financial support, which boosted their confidence to move forward. This early endorsement was pivotal in establishing their business trajectory.
EVIDENCE
She recounted that in 2021 they joined the Scout 2021 program, received support that was more than funding (it served as validation), and that this recognition gave them the confidence to push forward with their venture [72-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The STPI program is described as providing validation beyond funding, boosting founders’ confidence, in [S1].
MAJOR DISCUSSION POINT
Early STPI program provided validation and confidence to launch the venture
AGREED WITH
Noor Fatima, Arita Dalan, Milind Datar, Dr. Saumya Shukla, Kirty Datar, Shri Praveen Kumar
Argument 2
Drone technology applied to agriculture, defence and disaster management creates measurable productivity gains (Devika Chandrasekaran)
EXPLANATION
Devika described how her startup manufactures drones for multiple sectors, notably agriculture, defence and disaster management, thereby enhancing productivity and efficiency for thousands of farmers. The technology also supports critical national needs in defence and emergency response.
EVIDENCE
She stated that Fuselage Innovations manufactures drones used in agriculture, defence and disaster management, works with more than 10,000 farmers across India to improve productivity and efficiency, and contributes to defence, disaster management and maritime operations serving critical national needs [71-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Devika’s role and the drone applications in agriculture, defence and disaster management are detailed in [S2].
MAJOR DISCUSSION POINT
Drone technology applied to agriculture, defence and disaster management creates measurable productivity gains
AGREED WITH
Dr. Saumya Shukla, Noor Fatima, Milind Datar
K
Kirty Datar
1 argument, 1045 words per minute, 193 words, 11 seconds
Argument 1
Mentorship and peer network from STPI helped scale the AI‑food‑robotics platform (Kirty Datar)
EXPLANATION
Kirty expressed gratitude to STPI for the honor of being on stage, implying appreciation for the ecosystem’s support. While specific mentorship details were not elaborated, the acknowledgment reflects the perceived value of STPI’s network in the startup’s journey.
EVIDENCE
Kirty thanked STPI for the honor, stating “Good afternoon everyone and thank you so much STPI for this honor here today” [104].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Founders, including Kirty Datar, highlighted STPI’s mentorship and networking benefits in [S1].
MAJOR DISCUSSION POINT
Mentorship and peer network from STPI helped scale the AI‑food‑robotics platform
N
Noor Fatima
2 arguments, 159 words per minute, 205 words, 77 seconds
Argument 1
STPI intervention rescued the company during a cash‑flow crisis, enabling continued growth (Noor Fatima)
EXPLANATION
Noor recounted a period when the startup faced a severe cash‑flow shortage, and STPI’s intervention enabled them to secure funding, averting a crisis. This support allowed the company to continue its growth trajectory.
EVIDENCE
She explained that about a year and a half ago they had only two months of cash flow left, and STPI came to their rescue, helping them raise money, after which there was “no looking back” [124-125].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Noor’s account of STPI rescuing the startup during a cash-flow shortage is corroborated by [S1].
MAJOR DISCUSSION POINT
STPI intervention rescued the company during a cash‑flow crisis, enabling continued growth
AGREED WITH
Devika Chandrasekaran, Arita Dalan, Milind Datar, Dr. Saumya Shukla, Kirty Datar, Shri Praveen Kumar
Argument 2
AI oncology platform accelerates treatment planning and detects TB and lung cancer, delivering tangible health outcomes (Noor Fatima)
EXPLANATION
Noor detailed the impact of their AI‑powered oncology platform, highlighting large‑scale scan processing, disease detection, and a dramatic reduction in radiotherapy planning time. These outcomes demonstrate concrete health benefits derived from the technology.
EVIDENCE
She reported that in three years the company processed around one million scans, flagged about 4,000 TB cases and six lung cancer cases, prepared 1,000 radiotherapy plans in the last three months, and cut treatment-planning time from a month to a week, illustrating the platform’s impact [124-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI health applications that detect TB and lung cancer and accelerate treatment planning are discussed in [S25], [S26] and [S27], supporting the impact described.
MAJOR DISCUSSION POINT
AI oncology platform accelerates treatment planning and detects TB and lung cancer, delivering tangible health outcomes
AGREED WITH
Devika Chandrasekaran, Dr. Saumya Shukla, Milind Datar
A
Arita Dalan
2 arguments, 114 words per minute, 227 words, 119 seconds
Argument 1
Collaboration with STPI gave cybersecurity startup access to investors and industry partners (Arita Dalan)
EXPLANATION
Arita highlighted that the partnership with STPI provided opportunities to engage with investors and establish industry connections, which were crucial for the cybersecurity startup’s growth. The nurturing environment facilitated networking and exposure to potential partners.
EVIDENCE
She noted that the nurturing body (STPI) offered industry collaborations, gave them opportunities to talk to investors, and established various industry connections, for which they were sincerely thankful [90-94].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Arita’s remarks on STPI facilitating investor connections align with founders’ praise of STPI’s networking support in [S1] and the partnership description in [S2].
MAJOR DISCUSSION POINT
Collaboration with STPI gave cybersecurity startup access to investors and industry partners
AGREED WITH
Devika Chandrasekaran, Noor Fatima, Milind Datar, Dr. Saumya Shukla, Kirty Datar, Shri Praveen Kumar
Argument 2
Simplified security frameworks for enterprises across sectors address critical cyber‑risk challenges (Arita Dalan)
EXPLANATION
Arita described SecurTech’s mission to simplify security by providing comprehensive frameworks and solutions to enterprises in sectors such as pharma, banking, finance, and emerging digital firms. This approach helps organizations secure their infrastructure and data against cyber threats.
EVIDENCE
She explained that SecurTech offers simplified security, delivering frameworks, parameters and end-to-end solutions for large enterprises across pharma, banking, finance, as well as smaller organizations, thereby securing infrastructure and data [96-101].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for simplified security frameworks and sector-specific cyber-risk solutions is highlighted in [S28] and [S30].
MAJOR DISCUSSION POINT
Simplified security frameworks for enterprises across sectors address critical cyber‑risk challenges
D
Dr. Saumya Shukla
1 argument, 125 words per minute, 172 words, 82 seconds
Argument 1
AI‑powered diagnostic platform combining radiology and DNA sequencing improves clinical accuracy and scalability (Dr. Saumya Shukla)
EXPLANATION
Dr. Saumya outlined how TectoCell integrates AI with radiology and DNA sequencing to create diagnostic solutions that achieve high clinical accuracy. The platform’s strong benchmark performance and STPI’s support for regulatory and data acquisition facilitate scalable deployment.
EVIDENCE
She stated that TectoCell builds AI-powered diagnostic solutions at the intersection of radiology and DNA sequencing, has benchmarked clinical accuracy, and benefits from STPI’s assistance with regulatory compliance, global collaborations, and machine-readable data acquisition, positioning the company for global scaling [82-85].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI-powered diagnostic platform integrating radiology and DNA sequencing and STPI’s regulatory support are mentioned in [S1]; broader AI-medicine context is provided in [S26].
MAJOR DISCUSSION POINT
AI‑powered diagnostic platform combining radiology and DNA sequencing improves clinical accuracy and scalability
AGREED WITH
Devika Chandrasekaran, Noor Fatima, Milind Datar
M
Milind Datar
1 argument, 37 words per minute, 72 words, 114 seconds
Argument 1
Autonomous AI food‑robotics vending fresh juice links farmers directly to consumers and ensures hygiene (Milind Datar)
EXPLANATION
Milind described an AI‑driven food‑robotics platform that delivers fresh, hygienic beverages autonomously, beginning with a sugarcane‑juice vending machine. The solution creates direct market linkages for farmers, guarantees fair pricing, and supports a circular economy.
EVIDENCE
He explained that the platform prepares and serves fresh, hygienic beverages autonomously, with the world’s first fresh sugarcane juice robotic vending machine replacing unsafe roadside drinks, and that it links sugarcane farmers directly to consumers, ensures fair pricing, and promotes a circular economy; the technology integrates robotics, IoT-embedded systems and predictive AI to deliver juice in under 30 seconds [105-108].
MAJOR DISCUSSION POINT
Autonomous AI food‑robotics vending fresh juice links farmers directly to consumers and ensures hygiene
AGREED WITH
Devika Chandrasekaran, Dr. Saumya Shukla, Noor Fatima
M
Meenal Gupta
1 argument, 152 words per minute, 92 words, 36 seconds
Argument 1
Global interest from the Prime Minister, Bill Gates and Microsoft underscores the solution’s worldwide relevance (Meenal Gupta)
EXPLANATION
Meenal highlighted that the startup attracted attention from high‑level leaders, including the Prime Minister, Bill Gates, and Microsoft, indicating its global significance. Such endorsements reflect the solution’s potential impact beyond the local market.
EVIDENCE
She reported that the Prime Minister invited them to discuss the solution, Bill Gates showed interest and invited them to Microsoft to demonstrate their platform, underscoring a transition from local to global outreach [131-133].
MAJOR DISCUSSION POINT
Global interest from the Prime Minister, Bill Gates and Microsoft underscores the solution’s worldwide relevance
S
Shri Praveen Kumar
1 argument, 87 words per minute, 328 words, 225 seconds
Argument 1
Vote of thanks emphasizing collaborative ecosystem, gratitude to dignitaries and encouragement for continued innovation (Shri Praveen Kumar)
EXPLANATION
Shri Praveen delivered a comprehensive vote of thanks, acknowledging dignitaries, speakers, founders, and the organizing team, and emphasized the importance of a collaborative ecosystem for fostering innovation. He encouraged continued progress and highlighted the global relevance of Indian startups.
EVIDENCE
He thanked dignitaries, speakers, and founders, praised the contributions of various officials, highlighted the collaborative ecosystem, expressed gratitude for support, and concluded by thanking everyone and inviting group photographs, covering the entire session’s appreciation [147-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on a collaborative ecosystem and gratitude mirrors founders’ appreciation of STPI’s ecosystem in [S1].
MAJOR DISCUSSION POINT
Vote of thanks emphasizing collaborative ecosystem, gratitude to dignitaries and encouragement for continued innovation
AGREED WITH
Moderator
Agreements
Agreement Points
STPI provides a validating, mentoring, and financial support ecosystem that enables startups to launch, scale, and overcome crises.
Speakers: Devika Chandrasekaran, Noor Fatima, Arita Dalan, Milind Datar, Dr. Saumya Shukla, Kirty Datar, Shri Praveen Kumar
Early STPI program provided validation and confidence to launch the venture (Devika Chandrasekaran) STPI intervention rescued the company during a cash‑flow crisis, enabling continued growth (Noor Fatima) Collaboration with STPI gave cybersecurity startup access to investors and industry partners (Arita Dalan) STPI has played a very meaningful role in our journey… mentorship and peer network helped us think beyond the product (Milind Datar) STPI… helped us navigate through our regulatory compliances, get global collaborations, and also sort of get data acquisition (Dr. Saumya Shukla) Good afternoon everyone and thank you so much STPI for this honor here today (Kirty Datar) Vote of thanks… gratitude to dignitaries, speakers, founders… emphasizing collaborative ecosystem (Shri Praveen Kumar)
Multiple founders repeatedly credit STPI for validation, mentorship, networking, regulatory assistance, and crisis-time financial rescue, indicating a shared view that the STPI ecosystem is a critical enabling environment for their success [68-75][124-125][90-94][110-113][82-85][104][147-152].
POLICY CONTEXT (KNOWLEDGE BASE)
STPI’s role as an ecosystem enabler is documented in recent government-backed initiatives that highlight its infrastructure, funding and mentorship support for startups, especially in tier-2/3 cities [S38][S39].
AI‑driven technologies are delivering concrete social and economic benefits across sectors such as agriculture, health, and food services.
Speakers: Devika Chandrasekaran, Dr. Saumya Shukla, Noor Fatima, Milind Datar
Drone technology applied to agriculture, defence and disaster management creates measurable productivity gains (Devika Chandrasekaran) AI‑powered diagnostic platform combining radiology and DNA sequencing improves clinical accuracy and scalability (Dr. Saumya Shukla) AI oncology platform accelerates treatment planning and detects TB and lung cancer, delivering tangible health outcomes (Noor Fatima) Autonomous AI food‑robotics vending fresh juice links farmers directly to consumers and ensures hygiene (Milind Datar)
Founders from four distinct domains highlight how AI-enabled solutions boost productivity for farmers, improve diagnostic accuracy, speed up cancer treatment planning, and provide hygienic food, showing a consensus on AI’s cross-sector impact [71-77][82-85][122-129][105-108].
POLICY CONTEXT (KNOWLEDGE BASE)
UNCTAD’s analysis of the digital economy notes that AI applications are delivering tangible benefits in agriculture, health and food services, underscoring the sector-wide impact described [S36]. Indo-German AI collaborations further emphasize AI as a pillar for sustainable economic growth and social good [S37].
Public recognition and collective celebration reinforce community spirit and motivate continued innovation.
Speakers: Moderator, Shri Praveen Kumar
Ceremony highlights categories and honors top‑performing startups (Moderator) Vote of thanks emphasizing collaborative ecosystem, gratitude to dignitaries and encouragement for continued innovation (Shri Praveen Kumar)
Both the moderator’s structured felicitation ceremony and the formal vote of thanks stress the importance of publicly acknowledging achievements to foster a collaborative ecosystem [1-9][165-169][147-152].
POLICY CONTEXT (KNOWLEDGE BASE)
WSIS prize ceremonies have been cited as examples where public recognition reinforces community bonds and motivates further innovation, aligning with the described effect of collective celebration [S42][S43].
Similar Viewpoints
Both founders emphasize that AI‑enabled platforms can generate measurable, life‑changing outcomes in critical sectors—agriculture and health—by improving efficiency, accuracy, and reach [71-77][122-129].
Speakers: Devika Chandrasekaran, Noor Fatima
Drone technology applied to agriculture, defence and disaster management creates measurable productivity gains (Devika Chandrasekaran) AI oncology platform accelerates treatment planning and detects TB and lung cancer, delivering tangible health outcomes (Noor Fatima)
Both highlight that STPI’s networking and mentorship channels opened doors to investors and strategic partners, essential for scaling their technologies [90-94][110-113].
Speakers: Arita Dalan, Milind Datar
Collaboration with STPI gave cybersecurity startup access to investors and industry partners (Arita Dalan) STPI has played a very meaningful role in our journey… mentorship and peer network helped us think beyond the product (Milind Datar)
Both credit STPI for providing regulatory and technical support that enabled them to move from prototype to scalable commercial solutions [82-85][110-113].
Speakers: Dr. Saumya Shukla, Milind Datar
AI‑powered diagnostic platform… STPI helped us navigate regulatory compliances and data acquisition (Dr. Saumya Shukla) STPI has played a very meaningful role… mentorship and peer network helped us think beyond the product (Milind Datar)
Unexpected Consensus
Cross‑sector alignment on linking primary producers directly to end‑users through AI‑driven platforms.
Speakers: Devika Chandrasekaran, Milind Datar
Drone technology applied to agriculture… works with more than 10,000 farmers across India (Devika Chandrasekaran) Autonomous AI food‑robotics vending fresh juice links farmers directly to consumers and ensures fair pricing (Milind Datar)
Although operating in different domains (agriculture drones vs food robotics), both founders stress that AI solutions create direct market linkages for farmers, improving income and product safety, an alignment not explicitly anticipated given the sectoral differences [71-77][105-108].
Overall Assessment

The speakers exhibit strong consensus that the STPI ecosystem is a pivotal enabler—providing validation, mentorship, funding, and regulatory assistance—while AI‑driven innovations are delivering tangible socio‑economic benefits across diverse sectors. Public recognition further reinforces a collaborative community spirit.

High consensus: multiple founders across unrelated industries converge on the same three pillars (STPI support, AI impact, and community celebration), suggesting that policy emphasis on ecosystem support and AI investment is likely to be broadly supported and effective.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion shows strong consensus on the positive impact of STPI, with no overt conflict. Differences lie in the emphasis on particular forms of support rather than contradictory views.

Low – the speakers largely agree on the importance of STPI, and the variations are complementary rather than oppositional, suggesting a cohesive narrative that reinforces the enabling environment for digital development.

Partial Agreements
All speakers emphasize the critical role of STPI in their startup journeys, sharing a common goal of fostering successful, innovative enterprises. However, they diverge on which specific STPI‑provided mechanism was most decisive—validation and confidence (Devika) vs mentorship and networking (Kirty, Arita) vs financial rescue (Noor) vs regulatory and data support (Dr. Saumya) vs technology scaling (Milind) [72-75][104][124-125][90-94][82-85][105-108].
Speakers: Devika Chandrasekaran, Kirty Datar, Noor Fatima, Arita Dalan, Dr. Saumya Shukla, Milind Datar
Early STPI program provided validation and confidence to launch the venture (Devika Chandrasekaran) Mentorship and peer network from STPI helped scale the AI‑food‑robotics platform (Kirty Datar) STPI intervention rescued the company during a cash‑flow crisis, enabling continued growth (Noor Fatima) Collaboration with STPI gave cybersecurity startup access to investors and industry partners (Arita Dalan) AI‑powered diagnostic platform combining radiology and DNA sequencing improves clinical accuracy and scalability (Dr. Saumya Shukla) Autonomous AI food‑robotics vending fresh juice links farmers directly to consumers and ensures hygiene (Milind Datar)
Takeaways
Key takeaways
The ceremony honored startups within the STPI ecosystem across multiple performance categories such as revenue, funding, employment generation, women's participation, AI impact, and innovation. Founders repeatedly highlighted STPI's role as a critical enabler—providing early validation, mentorship, networking, investor access, and crisis support that helped their ventures scale. Several startups showcased AI‑driven solutions delivering societal impact: drone applications in agriculture/defence/disaster relief; AI‑powered diagnostic tools combining radiology and DNA sequencing; cybersecurity frameworks for enterprises; autonomous food‑robotics vending fresh juice; AI oncology platform accelerating treatment planning and detecting TB/lung cancer. High‑level recognition (e.g., from the Prime Minister, Bill Gates, and Microsoft) underscored the global relevance of these innovations. The closing vote of thanks emphasized collaborative ecosystem building, gratitude to dignitaries, and a call for continued innovation and community spirit.
Resolutions and action items
None identified
Unresolved issues
None identified
Suggested compromises
None identified
Thought Provoking Comments
The support we received through the Scout 2021 program was not just funding, it was a validation. That recognition gave us the confidence to push forward.
Highlights the psychological impact of early ecosystem validation beyond monetary aid, emphasizing how credibility can accelerate a startup’s trajectory.
Shifted the conversation from ceremonial accolades to the tangible role of STPI in de‑risking early‑stage ventures, prompting later speakers to reference specific ecosystem interventions.
Speaker: Devika Chandrasekaran (Co‑founder, Useless Innovations)
STPI helped us navigate regulatory compliances, secure global collaborations and acquire machine‑readable data – all of which put us in a strong position to scale AI‑powered diagnostic solutions globally.
Illustrates how a government‑linked body can address non‑technical bottlenecks (regulation, data) that are often the biggest hurdles for deep‑tech startups.
Introduced a new dimension—regulatory and data facilitation—into the discussion, leading other founders to mention similar ecosystem support (e.g., funding rescue, investor access).
Speaker: Dr. Saumya Shukla (Founder, TectoCell)
Our AI‑powered food‑robotics platform (CaneBot) not only delivers hygienic juice in 30 seconds, it creates direct market linkages for sugarcane farmers, ensures fair pricing and supports a circular economy.
Connects deep‑tech innovation with inclusive economic impact, showing how technology can simultaneously solve consumer health issues and empower primary producers.
Expanded the narrative from pure revenue/funding metrics to social‑economic outcomes, prompting applause and reinforcing the ceremony’s focus on ‘impact’ categories.
Speaker: Milind Datar (Co‑founder, CaneBot)
When we had only two months of cash left, STPI came to our rescue and helped us raise money – there has been no looking back since then.
Provides a concrete, urgent example of ecosystem intervention that directly prevented a startup’s failure, underscoring the critical safety net role of STPI.
Served as a turning point that shifted the tone from celebratory to urgent, highlighting the fragility of startup cash flows and the importance of timely support.
Speaker: Noor Fatima (Co‑founder, EZO5 Solutions)
Our solution attracted the attention of Prime Minister Narendra Modi and Bill Gates, who invited us to discuss our AI platform at the IMC and Microsoft respectively.
Demonstrates rapid escalation from national to global recognition, illustrating how ecosystem backing can catapult a startup onto the world stage.
Elevated the conversation to a global scale, reinforcing the ceremony’s theme of ‘India’s digital economy inspiring the world’ and inspiring other founders.
Speaker: Meenal Gupta (Founder, EZO5 Solutions)
Your thoughtful reflections on productivity and growth add depth and direction to our collective mission; the GCC perspective will help startups scale pragmatically.
Links macro‑economic policy (productivity council, GCC) with startup scaling, bridging high‑level strategic thinking with ground‑level entrepreneurship.
Re‑anchored the discussion back to policy implications, providing a bridge between individual founder stories and broader ecosystem strategy.
Speaker: Shri Praveen Kumar (Joint Director, STPI) – during vote of thanks
Overall Assessment

The ceremony began as a formal recognition of metrics, but a series of founder remarks—particularly those emphasizing validation, regulatory facilitation, inclusive impact, emergency funding rescue, and rapid escalation to national and global attention—reframed the dialogue. These comments introduced new ideas about the ecosystem’s non‑financial value, highlighted the fragility and resilience of startups, and connected individual successes to broader policy and economic goals. Consequently, the discussion evolved from a simple award ceremony to a nuanced showcase of how strategic support, both institutional and financial, can transform innovative ideas into scalable, socially impactful enterprises.

Follow-up Questions
What measurable outcomes have resulted from the STPI Scout 2021 program for participating startups?
Understanding the program’s effectiveness can help refine support mechanisms for early‑stage ventures.
Speaker: Devika Chandrasekaran
What are the specific regulatory compliance pathways and challenges for AI‑powered diagnostic solutions in India?
Clarifying compliance requirements is essential for scaling AI health technologies nationally and internationally.
Speaker: Dr. Saumya Shukla
What cybersecurity frameworks and standards are most effective for large enterprises in regulated sectors such as pharma and banking?
Ensuring robust security across critical industries mitigates risk and aligns with regulator expectations (RBI, SAV).
Speaker: Arita Dalan
How does the AI‑driven food robotics platform (CaneBot) affect farmer incomes, pricing fairness, and circular‑economy outcomes?
Assessing economic and environmental impacts will validate the platform’s sustainability claims and guide policy support.
Speaker: Milind Datar
What strategies and validation processes are needed for EZO5’s AI oncology platform to achieve global market entry and regulatory approval?
Identifying pathways to international adoption is crucial for scaling the technology and attracting global partners such as Bill Gates/Microsoft.
Speaker: Noor Fatima, Meenal Gupta
How effective are STPI’s mentorship, peer‑network, and exposure programs (e.g., Tycon Exposure) in facilitating startup access to global investors and ecosystem partners?
Quantifying the impact of these ecosystem services can inform future program design and funding allocation.
Speaker: Milind Datar
What metrics can be developed to evaluate AI‑based impact (beneficiary) across startups and compare performance across revenue tiers?
Standardized impact metrics would enable better benchmarking and policy decisions for supporting high‑impact ventures.
Speaker: Moderator (implicit)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Responsible AI for Shared Prosperity

Responsible AI for Shared Prosperity

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel, chaired by UK Deputy Prime Minister David Lammy, examined how AI can be harnessed for development in Africa and Asia through language-focused initiatives and new computing infrastructure [1-3][6-8][9-11]. Lammy outlined the AI for Development programme, which includes expanding AI into more than 40 African languages, establishing Africa’s first public-sector AI compute cluster at the University of Cape Town, and launching an Asia AI for Development Observatory in partnership with Canada, Germany, Japan, Sweden and the GSMA Foundation [1-3][6-8][9-11].


Kenyan envoy Philip Thigo emphasized that the Global South possesses intelligence but has historically lacked the power to record and transmit it, making representation of oral cultures in AI models a matter of civilizational survival [23-33][37-44]. He argued that building research capacity, talent, and context-specific language models is essential for sovereignty and for delivering concrete use cases across the continent’s 2,000 languages [39-42][43-45]. The Masakane African Language Hub, described by its chair, aims to impact one billion Africans by developing high-quality data, inclusive benchmarks, gender-responsive projects such as Project Echo, and sustainable community-led AI ecosystems [52-60][61-70][71-77].


Indian CEO Shekhar Sivasubramanian explained that Wadwani AI designs applications in 14-16 languages from the outset, ensuring rural-urban inclusivity and tangible utility in health, education and agriculture [84-92][94-102]. He gave examples of a multilingual disease-surveillance system that alerts governments every four hours and an oral-reading tool that provides real-time feedback to children and teachers, illustrating how language and purpose are inseparable for adoption [95-102][108-113].


German parliamentary state secretary Barbel Kofler highlighted that AI can only overcome inequality if it is inclusive, noting that bias in data and neglect of dialects must be addressed and that Germany has contributed through the Fair Forward initiative and partnerships with India to collect multilingual datasets [135-143][145-151]. The UK, Canadian and Japanese governments, Microsoft and the Gates Foundation are jointly funding public-good projects such as the African Compute Initiative, a high-performance GPU cluster at UCT, and the Lingua Africa open-core platform to turn language data into deployable services [258-270][272-279][224-233]. Microsoft’s Natasha Crampton stressed that compute is the enabler for making AI linguistically and culturally aware, required both for model training and for testing with local speakers, and that trustworthy AI depends on adequate infrastructure [224-233][236-244][245-248].


Across the discussion, participants agreed that without affordable compute and representative language resources, African and Indian researchers cannot contribute to or shape global AI systems [39-42][224-233][258-270]. They also concurred that public-sector investment and multi-stakeholder collaborations are necessary to fill market gaps, sustain talent and ensure that AI benefits are equitably distributed [212-214][258-270][71-77]. The session concluded that coordinated, inclusive AI development, grounded in local languages, robust compute and shared governance, offers a pathway to an equitable AI future for the Global South [13-14][126-127].


Keypoints

Major discussion points


Launching and scaling AI initiatives that prioritize African (and Asian) languages and compute capacity.


David Lammy outlined the AI for Development programme, the Masakane African Languages Hub, a public-sector AI compute cluster at the University of Cape Town, and the Lingua Africa open-core partnership - all aimed at bringing AI to over 40 African languages and providing the hardware needed for local model training [1-8][9-13][160-176][224-233][260-270].


Ensuring cultural representation and linguistic sovereignty.


Philip Thigo emphasized that the Global South’s oral heritage is at risk if AI models ignore local languages, and that building data, talent and research capacity is essential for “sovereignty” [22-33][38-45]. The Masakane Chair added that the hub’s four-pillar approach (data, research, innovation, sustainability) seeks to capture the nuance of each language and to preserve cultural memory [51-60][61-70][71-76].


Concrete, impact-driven use cases across sectors.


Wadwani AI described multilingual health-surveillance, disease-outbreak alerts, and an oral-reading-fluency tool for children, illustrating how language-aware AI can deliver tangible benefits in health, education and agriculture [84-102]. Masakane’s Project Echo was highlighted as a gender-responsive initiative that uses African-language AI to empower women’s economic participation and health [71-74].


Multi-stakeholder partnerships and public-goods funding to fill market gaps.


Representatives from the UK, Canada, Germany, the Gates Foundation, Microsoft and IDRC stressed that commercial markets overlook low-resource languages, so coordinated public-sector and philanthropic investment is required to create open data, benchmarks and compute resources [8-10][126-134][145-151][183-214][224-236][257-279].


Compute infrastructure as the critical enabler and a barrier to entry.


Both the African Compute Initiative and Microsoft’s commentary highlighted the stark cost disparity for high-performance GPUs in Africa, arguing that dedicated clusters are indispensable for training, testing and deploying culturally-aware models [3][5][224-233][260-270][274-279].


Overall purpose / goal of the discussion


The panel was convened to announce and explain a coordinated, multi-nation effort to make AI inclusive of Africa’s (and Asia’s) linguistic diversity, to build the necessary data and compute foundations, and to demonstrate how such infrastructure can be turned into real-world applications that advance health, education, gender equity and economic development.


Overall tone and its evolution


The conversation began with an optimistic, visionary tone, celebrating “brilliant, genuinely African-led initiatives” and the promise of an equitable AI future [2][13]. As speakers detailed the challenges of language extinction, talent scarcity, and compute cost, the tone shifted to a more urgent, problem-focused stance, emphasizing the existential risk of exclusion [30-33][224-233]. Throughout, the dialogue remained collaborative and constructive, ending on a hopeful, call-to-action note that highlighted partnership, public-goods investment and the potential for lasting impact [214-219][254-259].


Speakers

Co-Moderator – Panel moderator (role: co-moderating the discussion).


David Lammy – Deputy Prime Minister of the United Kingdom; MP; leads the UK’s AI for Development programme. [S4]


Natasha Crampton – Chief Responsible AI Officer, Microsoft; expertise in AI ethics, trustworthy and multilingual AI. [S6]


Ankur Vora – Chief Strategy Officer and President, Africa and India Office, Gates Foundation; focuses on philanthropic strategy for AI-driven development. [S9]


Chenai Chair – Director, Masakane African Language Hub; specialist in African language NLP, data collection, and AI model benchmarking for low-resource languages. [S11]


Shekhar Sivasubramanian – CEO, Wadwani AI; works on applied AI solutions for health, education, agriculture and multilingual technology in India.


Julie Delahanty – President, International Development Research Centre (IDRC), Canada; expertise in research funding for AI in low- and middle-income countries. [S14]


Philip Thigo – His Excellency Ambassador Philip Thigo, Special Technology Envoy of the Government of Kenya; focuses on AI policy, technology strategy and representation of African languages in AI. [S18]


Barbel Kofler – Parliamentary State Secretary to the Federal Minister for Economic Cooperation and Development, Germany; works on international development policy and AI governance. [S20]


Additional speakers:


Debra Kofler – Mentioned during the panel change-over; no further role or expertise detailed in the transcript.


Full session reportComprehensive analysis and detailed insights

Opening Remarks – David Lammy


UK Deputy Prime Minister David Lammy opened the session by outlining the AI for Development programme. The programme has three pillars: (i) extending AI services to more than 40 African languages, (ii) creating Africa’s first public-sector AI compute cluster at the University of Cape Town, and (iii) launching an Asia AI for Development Observatory [1-8][9-13]. He noted the partnership network – the UK, Canada’s IDRC, the Gates Foundation, Germany, Japan, Sweden and the GSMA Foundation [1-8][9-13]. Lammy announced funding for four start-ups, including Torn AI in Morocco, which is building a voice-interface for low-literacy rural users to access digital and financial services [280-281]. He framed the work as a moral crossroads: AI can either concentrate power and widen inequality, or act as a force for good that uplifts humanity [13-14].


Panel Introduction – Co-Moderator


The co-moderator introduced the panel: Philip Thigo, Kenya’s Special Technology Envoy; Barbel Kofler, Germany’s Parliamentary State Secretary; Shekhar Sivasubramanian, CEO of Wadwani AI; Chenai Chair, Director of the Masakane African Languages Hub; and Julie Delahanty, President of IDRC [15-21].


Philip Thigo’s Response


When asked how AI in local languages could shape Kenya’s digital development (the continent has roughly 2,000 languages) [20-21], Thigo described an “age of intelligence” in which AI reshapes how people live, learn and work [22-27]. He argued that the Global South has never lacked intelligence, but has lacked the power to record, transmit and recognise it, especially because many cultures are oral [28-33]. He warned that the absence of African languages in current models threatens an existential loss of civilisation [32-33]. Thigo highlighted the need for research capacity, talent pipelines and a full AI stack, from data collection to model training, to achieve linguistic sovereignty and deliver context-specific use cases [38-45]. He cited youth in Kenya using “Chagipiti” (a local reference to ChatGPT) to create culturally relevant content [284].


Masakane African Languages Hub – Chair


The Chair explained that the Masakane African Languages Hub emerged in 2019 from a community-driven effort to digitise local languages when no external funding was available [51-55]. Its ambition is to reach 1 billion Africans through 50 of the most spoken languages, delivering economic, health and social benefits while preserving linguistic evolution [54-55][56-60]. The hub operates on four pillars:


1. Data – expanding high-quality, diverse datasets (building on the JW300 Bible corpus) [61-64];


2. Research & Benchmarking – creating an African speech-and-text benchmark to capture local nuances [65-67];


3. Innovation – allocating 40 % of the budget to concrete use-cases, notably Project Echo, a gender-responsive intervention that improves women’s economic empowerment and health in African languages [71-74];


4. Sustainability – focusing on institutional capacity-building so that open-source models can spawn local businesses and ensure long-term African-led AI [75-77].


Wadwani AI – Shekhar Sivasubramanian


Sivasubramanian described Wadwani AI’s inclusive design principle: every solution is built for at least 14-16 languages and must bridge the rural-urban divide [84-92]. He showcased two flagship projects: a multilingual disease-surveillance system that scans Indian news every four hours in 16 languages and alerts the government to outbreaks [94-99]; and an oral-reading-fluency tool that records children’s spoken reading, provides instant feedback and helps teachers tailor instruction [100-102]. He stressed that language and utility are inseparable: a model must deliver tangible value to achieve adoption [108-113].


Barbel Kofler – Germany


Kofler reinforced the ethical dimension, stating that AI can only be a game-changer for the Sustainable Development Goals if it is inclusive and built on bias-free data [135-143]. She highlighted the importance of dialectal variation, noting that ignoring it reproduces cultural marginalisation. Kofler cited Germany’s Fair Forward initiative (launched in 2019), which collaborates with India to collect multilingual datasets for citizen-facing services [282-283][145-151].


Lingua Africa Announcement – Masakane Chair


The Chair announced Lingua Africa, an open-core, community-governed language-infrastructure platform funded by a multi-million-pound partnership among the UK, the Gates Foundation, Microsoft AI for Good and Masakane [160-176][170-178]. The platform will coordinate community-governed language infrastructure, domain-specific data collection, model development and deployment pathways.


Ankur Vora – Gates Foundation


Vora framed language as a public good that markets have abandoned because commercial incentives focus on English and Mandarin [183-190][191-215]. He argued that when markets are broken, coordinated public-good funding from governments, foundations and tech firms is required to develop low-resource language AI, and he reaffirmed support for Lingua Africa [188-192][205-212][160-176].


Natasha Crampton – Microsoft


Crampton highlighted that compute is the enabler for language-aware, culturally-sensitive AI. High-performance GPUs are needed not only to fine-tune models with locally collected data but also to test them with native speakers and to run day-to-day services [224-233][232-247]. She warned that AI diffusion is currently twice as fast in the Global North, underscoring the urgency of closing the compute gap [225-227].


Julie Delahanty – IDRC


Delahanty detailed the African Compute Initiative, the first dedicated high-performance GPU cluster for public institutions in Africa, to be hosted at the University of Cape Town [258-271]. The cluster will provide modern GPUs, fast storage and networking, enabling African researchers to train large models, test innovations quickly and support projects such as the Masakane Hub [274-279][272-273]. She also noted community-driven localisation work, citing subtitles created by the Amara.org community [285].


Consolidated Consensus

All speakers agreed that:


* Compute infrastructure is foundational for building and deploying multilingual AI models [3-4][224-233][258-271][39-40][188-192];


* Linguistic inclusion is essential to prevent cultural extinction and to deliver equitable AI benefits [13][28-33][54-66][86-92];


* Public-good, multi-donor partnerships are needed to fill market failures for low-resource languages [8-10][126-134][145-151][183-215];


* Domain-specific use cases in health, education and agriculture demonstrate real-world impact [10-11][71-74][94-102]; and


* Capacity-building in talent, research and institutions underpins AI sovereignty [40-41][64-66][86-92][267-271][212-215].


Points of Divergence

1. Funding Model – Vora stressed that markets are broken and only public-good investment can address language gaps [188-192][205-212]; Lammy called for strong state intervention to avoid leaving AI to the marketplace [218-219]; Sivasubramanian highlighted the role of private-sector risk-taking and innovation [84-86].


2. Priority: Compute vs. Data – Crampton and Delahanty positioned compute as the immediate bottleneck [224-233][267-271]; Kofler and the Masakane Chair emphasised high-quality, bias-free data and benchmarks as the prerequisite [136-143][60-66][70-76].


3. Framing of Language Preservation – Thigo framed it as an existential civilisational threat [28-33]; Vora described it as a market failure requiring public-good investment [188-192][205-212].


Key Take-aways

* Linguistic inclusion safeguards cultural heritage and ensures AI benefits all communities [13][28-33][54-66][86-92].


* The Masakane Hub targets 50 major African languages, aiming to impact 1 billion people through data expansion, benchmarking, gender-responsive projects and sustainability [54-55][61-70][71-77].


* Effective AI solutions must be multilingual and directly useful, as shown by disease-surveillance, oral-reading tools, and voice-interface start-ups [10-11][94-102][280-281].


* Bias-free, representative data and local testing are prerequisites for trustworthy AI [135-143][232-247].


* Market forces ignore low-resource languages; coordinated public-good funding is required [188-192][205-215].


* Compute capacity is a major bottleneck; the African Compute Initiative will provide the first public-sector high-performance GPU cluster in Africa [258-271][274-279].


* Multi-stakeholder partnerships (UK, Gates, Microsoft, IDRC, Germany, Canada, GSMA) are mobilised to fund language infrastructure, compute and applied projects [8-10][126-134][145-151][183-215][224-233][258-271].


* Governance, ethics, gender-responsive design and long-term sustainability are central to ensuring AI remains safe, inclusive and equitable [71-77][135-143][224-233].


Resolutions & Action Items

* Launch of Lingua Africa, an open-core, community-governed language-infrastructure platform funded by a UK-Gates-Microsoft partnership [160-176][170-178].


* Allocation of substantial funding to Masakane for data collection, benchmark creation, use-case development and sustainability activities [9-10][71-77].


* Support for four additional start-ups (including Torn AI) via the GSMA Foundation to deliver responsible AI for underserved populations [9-10][280-281].


* Establishment of the African Compute Initiative – a dedicated high-performance GPU cluster at UCT for public-sector researchers [258-271][272-273].


* Commitment of 40 % of Masakane’s budget to concrete use cases, notably Project Echo for women’s economic empowerment and health [71-74].


* Development of an African speech-and-text benchmark to evaluate models in local contexts [66-67].


* Ongoing capacity-building programmes to train African researchers, data scientists and AI engineers [40-41][64-66][86-92][212-215].


Open Questions & Unresolved Issues

* Scaling beyond 50 languages – a roadmap is needed to extend support to the full 2,000+ African language ecosystem [42-45].


* Long-term financing – mechanisms to sustain the Masakane hub and the compute cluster after the initial grant period remain to be defined.


* Governance of Lingua Africa – the precise community-governed decision-making structure and benefit-sharing model require clarification.


* Data ownership and privacy – strategies for collecting high-quality data for extremely low-resource dialects while protecting community rights are still under discussion.


* Deployment at scale – methods to roll out voice-interface solutions like Torn AI to low-literacy, rural users across diverse contexts need further elaboration.


* Impact measurement – robust metrics for evaluating gender-responsive interventions (e.g., Project Echo) and health outcomes from AI tools are required [71-74].


* Balancing public-good and private-sector roles – ensuring that private innovators benefit from public infrastructure without compromising local sovereignty [218-219][84-86].


* Benchmarking and scenario-aware evaluation – defining standards for accuracy, cultural relevance and context-specific performance of African language models [66-67][232-247].


Alignment with Policy Context

The discussion mirrors recent AI-readiness reports that call for “the most useful” AI in Africa rather than the most powerful [S38], and stress that digital sovereignty must incorporate linguistic dimensions [S41][S86]. The emphasis on public-sector compute as critical national infrastructure aligns with recommendations to treat AI infrastructure as a public good [S55][S103]. By combining government funding, multilateral development partners and private-sector expertise, the panel’s agenda directly addresses identified gaps in capacity, infrastructure and inclusive governance [S39][S40][S52].


Overall, the panel presented a coordinated, multi-nation effort to build the data, talent and compute foundations required for multilingual, culturally-aware AI, while recognising the ethical imperative to preserve linguistic heritage and to deliver equitable development outcomes across Africa and Asia.


Session transcriptComplete transcript of the session
David Lammy

to make AI work in more than 40 African languages. This is a brilliant, genuinely African-led initiative which helps people to access AI in the languages that they actually use in their everyday lives. Second, we’re investing in Africa’s first dedicated public sector AI compute cluster at the University of Cape Town. Too many African researchers are held back by costs and a lack of access. And the hope is that this new hub will give them the computing power to build and train models locally. And third, we’re launching the Asia AI for Development Observatory. This is a new network to support research, responsible AI governance, to protect rights and to ensure AI reflects the realities of people’s lives across the region.

All of these initiatives are effectively part of our AI for Development programme, launched when we hosted the first of these AI summits back at Bletchley Park three years ago, and made in partnership with Canada’s International Development Research Centre and as part of a wider collaboration to coordinate investments with the Gates Foundation, the governments of Germany, Japan and Sweden, as well as Community Jameel. And as part of our partnership with the GSMA Foundation, we’re proud to announce support for four additional start-ups, and these innovative businesses will harness responsible AI to support the needs of underserved people across Asia and Africa. They include Torn AI in Morocco, which creates voice interfaces in local dialects to help low-literacy rural users access digital and financial services through simple spoken interactions.

And all of these initiatives will make a real difference to people across the continents of Africa and Asia. But I hope they’ll do a bit more than that. Yesterday, I spoke about the choice the world faces, the two paths before us: one which sees AI take power and opportunity away from people and sadly divides us, and one that sees AI used as a force for good to solve problems and uplift all of humanity. And the projects I’ve mentioned, the ones we’re going to hear about today, and the many new institutions and coalitions that are now emerging can help make sure we go down the right path, and that is a path of a safe AI, an inclusive AI and, importantly, an equitable AI for everyone.

So let’s turn now to our panel, an exceptional group of leaders from across India and Africa. We’re going to get an introduction to our panel members, and then I’ll start with the first question.

Co-Moderator

Thank you. It’s my pleasure to introduce, joining our Deputy Prime Minister on the stage: His Excellency Ambassador Philip Thigo, Special Technology Envoy of the Government of Kenya; Dr. Bärbel Kofler, Parliamentary State Secretary to the Federal Minister for Economic Cooperation and Development of Germany; Shekhar Sivasubramanian, CEO of Wadhwani AI; and Chenai Chair, Director of the Masakhane African Languages Hub.

David Lammy

So to the first question, to Philip. We’re beginning to see, well, we all experience, how large language models are affecting our lives on a daily basis. I certainly use one. I don’t use ChatGPT, but I use a secure network which isn’t taking my data, because, for obvious reasons, as Deputy Prime Minister of the UK I have to be a bit careful. But I’m using it to research and really get quickly to things that I don’t fully understand.

So as we move this on to local languages and dialects, and across Africa we’ve got an estimated 2,000 languages, how do you see AI in local languages shaping the next phase of your country’s digital development? Having been to Kenya many times and knowing the many groups that are there, how does this really work on the ground?

Philip Thigo

Thank you so much, Right Honourable Prime Minister. I think this moment is so profound that I don’t think you guys are realising what is happening here. I think the first thing is to understand that we’re actually in the age of intelligence, right? So it’s not about ICTs or technology; it’s about how AI shapes, I think, how we live, learn, work, collaborate and engage. From our point of view, it’s a civilizational discussion. The Global South has never lacked intelligence, as we know, right? What it has lacked is the power to define how that intelligence is recognized, recorded or transmitted. Because our entire cultures and values have been coined in language, and the Global South is largely an oral civilization.

And so the current models lacking our languages means our civilizations are at risk, almost existentially, of going extinct. And I think this initiative, in our view, begins to ensure that we are represented in the current age of intelligence, but also that our intelligence is part of our global collective history and memory. And so how we engage in this, now with capabilities like Masakhane, is that when I get into ChatGPT, which you refused to mention, young people in Kenya, who are the number one users of ChatGPT, by the way, are not only seeking emotional advice or guidance from these models; when they engage with these models, it actually also represents their cultures and civilization.

I think for me that’s how it works practically on the ground. First of all, it’s representation and existence. The second part, of course, is that it also works when we have the entire stack. You mentioned a couple of things around funding the models, but I think it would be interesting to see how we fund the compute, and the talent that then influences and develops the data and develops the language. There is also the research and development capability, which you mentioned in the first instance, and that was an amazing initiative, because we need to build research capacity and capability; talent development is the first instance of sovereignty. Then the final point, of course, is the specific use cases and languages, especially in the African context.

Again, you say 2,000. Each of those 2,000 is very context-specific. Kiswahili and Yoruba are not the same, and neither are their applications, even in the context of our history and cultures in Africa. So I think that capability, as diverse as it could be, also ensures that our diversity on the African continent is represented in the future models.

David Lammy

And that’s wonderful. And the first point you made is really about seeing yourself in this story that the global community is going to be telling in relation to intelligence. And we know that in the past, Africa has been written out of that story. So it’s hugely important that African languages, intellect and history over thousands of years are in this storybook. So, Chenai, tell us then how the Masakhane African Languages Hub is addressing this. How are we addressing these issues and really working on them as a tangible, real thing?

Chenai Chair

Thank you so much, Minister. I want to say that I am proud to be representing the Masakhane community, which started in 2019, wanting to see their own languages represented in the global domain, and they did it by their bootstraps. No one wanted to fund them, and they came together and said: hey, how do I ensure that the language that I speak is captured digitally? So the Masakhane African Languages Hub emerges from that community-driven initiative, where our main goal is to impact 1 billion Africans through 50 of the most spoken languages, with relevant AI tools that will allow for economic growth, health and social benefit, while also working towards the preservation, and capturing the evolution, of African languages.

The 2,000-plus languages are growing, and so, as Honourable Philip Thigo mentioned, the diversity of the languages is growing. Like, I speak Shona, but the Shona I speak in Harare is not the same as the Shona spoken in Mutare. So it’s really capturing that diversity and nuance in our work. What we do, with support from the funding collaborative and the partners that we have, is enable the ecosystem through partnerships and grant-making. We specifically focus on four pillars of work. The first is data: expanding and diversifying high-quality data. In 2019 there wasn’t as much data, but the Masakhane community started building up that data from the JW300 Bible dataset that had been created.

Secondly, we’re also looking at research. It’s important for us to take it on as an ecosystem intervention, where we are looking at developing and refining inclusive AI and machine-learning models, but also thinking about the tooling that’s resourcing these. What we are working on specifically is a benchmark project, where we’re going to create a relevant African benchmark looking at speech and text, because the current benchmarks out there do not capture the nuance of realities in the African context. And then we’re also looking at innovation. Again, a lot of the questions that we’ve seen are: when you create the data, where does it go? Is it taken up in the market? What’s the impact?

And so for us, 40% of the funding that we have will go to creating use cases and impactful use cases. And one special mention I want to put forward is that we are working on a project called Project Echo, which stands for Enhancing Communications for Her Opportunities. This is a gender-responsive intervention that exists in the context of high gendered inequality on the continent. What it does is provide relevant use cases in African languages that lead to impact on women’s economic empowerment and health. And that’s a significant part of us recognizing the context we exist in. And then lastly, we really are thinking about sustainability. Right now we’re in a moment where there is resourcing, where there is funding, and we also come from a moment where people were doing it without funding.

So we’re thinking about institutional capacity-building for the African NLP community, which will then see businesses coming up from these open-source models, people innovating off the data that’s created, and sustainability beyond the Masakhane community, which has been happening right now. This funding allows us to actually have African-led AI which is built for impact. Thank you very much.

David Lammy

Thank you very much. That centres Africa, but also, importantly, the fundamental inequality and gender issues that sit at the heart not just of Africa, making sure that women are a big part of this story. Shekhar, bringing India into this and thinking of Wadhwani AI: we’re sitting here in the most populous country on the planet. There are also lots of languages and tremendous diversity, but also innovation and range across this country. So tell us how Wadhwani AI is working at the heart of that innovation here in India.

Shekar Sivasubramanian

Thank you. First, the work we do is applied AI, which means we solve for problems in health, education and agriculture. And we’ve been doing it for the last seven years. The moment you work in India, the very first design principle that you start with is the ability to be inclusive and embrace the entire population. So the dimensions of population we are looking at are, first, language: you start with at least 14 to 16 languages; you don’t even think of an application otherwise. Second, you also think of complete inclusivity, which means you need to think through the divide between rural and urban, and the kinds of applications that will be delivered to people which will be of use to them. Third, our applications fundamentally must be useful to people.

Then people open up: their ability to learn languages changes when there is a better interface with technology that can actually be of use to them. So that utility value sits at the heart of everything that we do, which drives a lot of behavior, both by us as well as by the ecosystem. Just as an example, we do media disease surveillance. We’ve been doing it for a while. It picks up every article published in India, it runs every four hours, and it picks up events of interest in health, in 16 languages, and it’s been running for the last two and a half to three years. It uses AI and it tells you: in this region, this many people got this disease at this time.

It runs every four hours, and it tells the central government, if there is a disease outbreak, what you should do. Another completely different example: we collect data from children in a couple of states, and that will expand to 14 to 16 states, where we have the largest data set of spoken local language. That in and of itself is of no use, but when you provide something called oral reading fluency, which assists the poorest child to read a paragraph, and the AI tells you what you read well and what you did not read well, and assists the teacher to cohortize the students and provide them information, suddenly you cannot distinguish between the language and the application. It is very important in human contexts to provide some value to the person in any interchange.

If you can work in the value, then adoption is easy. If you divorce the two, people don’t understand why I’m doing what I’m doing; it looks like an encumbrance. So for us, at the heart of our innovation is what it means for the person. Independent of which, we do analysis on various languages. We’ve done one on Tibetan, where we’ve preserved their entire culture: we worked in Dharamshala, as well as in Karnataka. We digitized their entire library system and allowed the communities there to gain employment using it. Likewise, we plan to work on multiple less-used languages, pan-India. It’s our position that everything we do, and we’ve got Agrivani, Healthvani, is multilingual.

Everything that we do collects data. We have the largest data sets now, incidentally, from the work we do. It’s not what we do; it comes as a by-product of the work that we do. Over a period of time, it is my heartfelt and considered humble opinion that these models using AI will take time. We should be ready to ride this for a period of time. We should be ready to invest in deep research and/or very utilitarian-based approaches, so that you can take the community along with you. That is super important. The theory is interesting; the practice is different. There is a theory as to how to design roads in India. I will keep quiet after a bit.

David Lammy

Very, very good example. Obviously I talked about the UK as a donor country doing this in partnership with others, Canada, Sweden, but also the German government, and we’re joined by Bärbel Kofler, just to bring the donor perspective to this and why this is so important.

Barbel Kofler

Thank you very much, Deputy Prime Minister. I wouldn’t talk, when it comes to AI, in a manner of donor and recipient, because at the end of the day I think it’s a new technology where we all have to build bridges if we really want to make it useful for everybody. And that’s also our interest, of course, from the German side. We see that AI can only really be the game changer to overcome inequality, to fulfil the promises of the SDGs, and that’s important for every country, not only for the Global South, that’s important for everybody. It can only be that game changer if it is inclusive. That starts with data at the end of the day and how biased data is. And if you talk about bias in data, language is quite close to it. You were pointing out how important it is and how differently you speak in various variations of your language. I really understand that: I don’t normally speak standard German, I also use a dialect, and that’s quite different from Hamburg. So we all have something to include also, which is connected with a cultural momentum. And we see so many languages neglected, totally neglected, dialects, cultures, because it’s not only the language, it’s what the language is transporting that is neglected.

And that’s why we really try to be part of it, and we are very proud to be part of your initiative also. We started in 2019 discussing those topics, working on an initiative called Fair Forward, which is part of the initiative, and working also with partner countries like India on collecting data sets, so really collecting the necessary data on those local languages, which at the end of the day should offer services to citizens in their mother tongue in multilingual countries or contexts, for example. So for us, it’s of utmost importance. We’re happy to be part of the initiative, we want to stay a reliable partner on that, and we will be part of that initiative.

And I hope the idea is spreading and growing. Thank you.

Co-Moderator

Thank you. We’ll now have a small changeover in our panellists. If I could ask for another big round of applause, please, for His Excellency Philip Thigo, Bärbel Kofler and Shekhar Sivasubramanian. And now joining us on stage, we have Ankur Vora, Chief Strategy Officer and President of the Africa and India Office at the Gates Foundation; Julie Delahanty, President of Canada’s International Development Research Centre; and Natasha Crampton, Chief Responsible AI Officer at Microsoft. Thank you very much.

David Lammy

Back to Chenai. My understanding is that Masakhane is announcing a new multi-million-pound partnership and open call today for Lingua Africa. So can you tell us a little bit more about this initiative and the gap that it’s designed to close, and then why this moment is so important for African languages as a whole and for AI?

Chenai Chair

Thank you so much, Deputy Prime Minister. So yes, I do have the honour, with my esteemed panellists, to announce Lingua Africa. With Masakhane, which, as I’ve said, means “to build together”, we’ve been working with researchers and communities across the continent to close the gap in how African languages are being used and how African languages are represented in AI systems. What we’ve constantly seen, and I think I did mention this, is that it’s not just about data. It’s about whether the language resources actually translate into tools people can use, particularly in healthcare, education, agriculture and public services, because those are the developmental domains where we’re likely to have significant impact. So together with Microsoft AI for Good and the Gates Foundation, as well as our AI4D partners, Lingua Africa will be a multi-partner open call focused on open, community-governed language infrastructure, which will directly enable real-world AI applications.

A lot of the time, as we’re developing AI solutions, the question becomes: if we’re building them in a lab, will they work in the real world? That’s also consistently part of what we’re doing with the benchmarking work. So how we’ll do this is through a use-case- and impact-focused approach, where we will do model development, we will collect targeted data in those specific domains, and we will also support strong pathways for deployment and adoption. This is us working with multiple entities: the academic community, our partners here on stage with us, but also the tech entrepreneurs who are actually building up these solutions. And then for us, it’s quite simple.

The goal is to make sure that language is no longer a barrier to including people in these solutions, particularly if you think about digital public infrastructure interventions. They need to be in languages that people communicate in, because you will leave behind a majority of people if they are in languages that they do not understand. So that is our most significant contribution right now. Thank you.

David Lammy

Thank you. And obviously, we’re very pleased in the UK to be partnering with the Gates Foundation on new support for linguistic diversity across AI. But just explain, Ankur, the role that the hub effectively has in that wider impact on the Global South.

Ankur Vora

This is on? It’s on, all right. Languages matter. Can you first join me in giving a big round of applause to Chenai for this amazing movement? It is kind of brilliant where we are in this moment in time. Let me talk about three whys. One is: why care about language? The second is: why care about investing in language? And the third one is: why care about investing in initiatives like Masakhane? The first one, I think everybody knows, but it’s useful to repeat it, and many people have talked about this before: because we want to make sure that the power of AI actually changes lives. History is not going to remember us for the models we developed or the speeches we give here.

History is going to remember the impact we all had. We’re talking about mothers and babies not dying. We’re talking about the next generation growing up in a world without infectious diseases. We’re talking about hundreds of millions of people escaping the clutches of poverty. Those kinds of things matter. And the solutions are there; they can get better. But we need to find a way for these solutions to get translated for these use cases. So that’s why we need to care about this. Why invest in language? Because the markets are broken. The markets are broken. Private-sector companies are investing in models developed in English and Mandarin, and it makes sense for the markets to do that.

Because that’s where the economics work. But just because the economics don’t work for small, low-resource languages doesn’t mean that we shouldn’t be investing in this. And that’s the point why all of us need to get together and say we need to do something about it. When markets are broken, funders can get together and invest in public goods. And that’s what we all are doing right now. The UK government, the Canadian government, the Japanese, Microsoft, IDRC, everybody is getting together and making the point that we need to invest in public goods, because these markets are broken and this is an important thing. And so I’m quite excited about the fact that we’re all sitting at this panel, people in this audience, and saying we’re going to make sure that, as we think about tomorrow, developing solutions in the right languages that can solve the problem will be a problem that we will tackle. Thank you.

David Lammy

Thank you. I’ve got to say, as a politician, I like the idea that the state intervenes and doesn’t just leave it to the marketplace to determine where to put the funds. But obviously, in terms of the innovation, you have to work at the cutting edge, and that cutting edge is most often in the private sector as well, taking those risks to develop and innovate. And here Microsoft and cloud computing are hugely important, Natasha. It’s important to understand the interaction between the cloud and the Masakhane hub, as it has been described so well by Chenai, and how important it is that computation and cloud technology help the innovation of these local languages.

Could you say a little bit more about that? But there is another subset to this, which is getting the balance right so that the languages, and often the front-line communities that we’re supporting, have equity in this and don’t lose their own sovereign capabilities, which has been a theme of this conference. So I wonder if you could just reflect on that as well.

Natasha Crampton

Thank you. Closing the AI divide is an urgent priority. Our own analysis shows that AI is diffusing in the Global North at roughly double the rate that it is diffusing in the Global South at the current time, and that is exactly why we need partnerships like these, to start to put the right infrastructure in place to close that gap. Now, language is particularly important in terms of overcoming that AI divide. As we’ve heard many speakers say today, nobody is going to use AI if it does not speak the language that you speak and, importantly, if it does not work in the context, in the specific scenario, in which you need to use it. So language-aware and scenario-aware AI is incredibly important to empowering people to put the technology to work in the use cases that mean the most to them. And that’s why we’re so thrilled to be partnering with Masakhane, as well as the Gates Foundation and the UK government, on this Lingua Africa initiative.

So how does compute come into all of this? Quite simply, compute is the enabler of making language- and culturally-aware AI; it’s a critical component of it. When we take a base model that may have been trained, like most models, on data sets that are predominantly English, we need to make sure that we can do, responsibly, the locally-led data collection that Chenai was talking about earlier. Then we need to do some further work on the models to essentially ingest that data, and that takes compute. Then it’s very important, once we’ve actually made the model linguistically and culturally aware, that we test it with local language speakers and in the right scenarios. That testing also takes compute. And finally, the day-to-day use of this technology also requires computing power. So we’re really here today as an enabler of an Africa-led effort, by Africans for Africans, to create this linguistically aware and multiculturally aware technology, and compute fundamentally is just the enabler of it.

My last thought to offer today is that these types of initiatives really reinforce that trustworthy AI does not happen by chance; it happens because of the choices that we make, the ways in which we choose to build and test and deploy these AI systems. And for us at Microsoft it’s really important that we take all of those steps to represent the world as it is: multicultural, multilingual and deeply interconnected. So we’re thrilled to be part of this initiative.

Thank you very much.

David Lammy

And Julie, we’ve very much described this journey that we’ve been on since Bletchley Park, and talked about languages and computing. In a sense, AI is the foundation stone for communities. But just to round off this event, how do we see the opportunities going forward? In particular, where do we need to get to?

Julie Delahanty

Thank you. Thanks, everybody, for being here, and to the Deputy Prime Minister for welcoming us. We’re incredibly proud at IDRC to be part of the AI4D initiative with the UK government and to be partnering with the Masakhane African Languages Hub as well as the African Compute Initiative. And going back a little bit to the Microsoft views on it, I think it’s very similar for us. Researchers in lower- and middle-income countries really have to have strong computing power to be able to do the kind of cutting-edge AI work that Chenai and others are doing. But right now, of course, they do face a lot of barriers. We did a study that showed the incredible increased cost of getting compute capacity, the difference between getting it in Germany and getting it in the UK.

But in an African country, the costs are exponentially larger. So the computing cost, the local infrastructure that might be limited, and the GPUs, the hardware that really powers modern AI, are all very difficult and hard to access for African countries. It really makes it difficult for them to fully participate in global AI innovation. The African Compute Initiative is going to change all that, we hope. It is going to be the first dedicated high-performance computing cluster for public institutions in Africa. It will be based in South Africa at the University of Cape Town. And that initiative is going to include modern GPUs, faster and better storage capacity, and much faster networking.

And it’s that kind of computing power that is essential. It’s essential for training large AI models. It’s essential for testing new ideas more quickly, as you mentioned. And it has been, and will be, essential for things like the Masakhane African Languages Hub. Both the initiatives that I’m talking about are really responding to foundational gaps, whether that’s compute capacity or the kinds of representative and robust data sets; both of those things are absolutely necessary. And if you don’t have those foundations, then you can’t contribute to AI systems. And if you can’t contribute to AI systems, then you can’t shape the AI systems. So it’s absolutely critical for Africa’s AI innovation to have those foundational elements in place.

And I think the lessons that we’re going to learn through a lot of this programming is going to help other regions and other lower resource contexts to do that kind of work. And in terms of the next steps or the things that we can do with that, I mean, you can imagine some of the obvious things. I mean, some people have already mentioned it, but things like having…

Co-Moderator

Thank you very much.

Related Resources: Knowledge base sources related to the discussion topics (23)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (medium confidence)

“UK Deputy Prime Minister David Lammy opened the session.”

The knowledge base lists David Lammy as the Deputy Prime Minister of the United Kingdom, confirming his role in opening the session [S1].

Confirmed (high confidence)

“The continent has roughly 2,000 languages.”

The knowledge base notes that there are over 2,000 documented African languages, corroborating the figure cited in the report [S106].

Confirmed (medium confidence)

“Babel Kofler is Germany’s Parliamentary State Secretary.”

Source [S21] identifies Bärbel Kofler as the Parliamentary State Secretary to the Federal Ministry of Economic Cooperation and Development, confirming her title.

Correction (high confidence)

“Philip Digo is Kenya’s Special Technology Envoy.”

The knowledge base records the envoy’s name as Philip Thigo, not Philip Digo, indicating a naming error in the report [S17].

Additional Context (low confidence)

“The partnership network includes the Gates Foundation among other donors.”

Source [S109] mentions collaboration with partners such as the Gates Foundation in AI-related initiatives, providing additional context to the reported partnership list.

External Sources (118)
S1
Responsible AI for Shared Prosperity — -Co-Moderator- Role/title not specified
S2
https://dig.watch/event/india-ai-impact-summit-2026/building-the-workforce_-ai-for-viksit-bharat-2047 — Minister in the National Council on Scale Development We welcome you sir On the panel, we are joined by Guilherme Albusc…
S3
WSIS+20 Open Consultation session with Co-Facilitators — – **Bojana** – Global Forum for Media Development representative – **Jennifer Chung** – (Role/affiliation not clearly s…
S4
Responsible AI for Shared Prosperity — -Co-Moderator- Role/title not specified -David Lammy- Deputy Prime Minister of the UK
S5
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Chris Baryomunsi- Role/title not specified (represents Uganda) -David Lamy MP- Deputy Prime Minister, Lord Chancellor …
S6
Multi-stakeholder Discussion on issues about Generative AI — Natasha Crampton:So, I’m Natasha Crankjian from Microsoft. I’m incredibly optimistic about AI’s potential to help us hav…
S7
Towards a Safer South Launching the Global South AI Safety Research Network — – Mr. Abhishek Singh- Ms. Natasha Crampton- Ms. Chenai Chair – Ms. Natasha Crampton- Dr. Rachel Sibande
S8
Democratizing AI Building Trustworthy Systems for Everyone — – Dr. Saurabh Garg- Natasha Crampton – Dr. Saurabh Garg- Natasha Crampton- Justin Carsten – Natasha Crampton- Particip…
S9
Responsible AI for Shared Prosperity — -Ankur Vora- Chief Strategy Officer and President of the Africa and India Office at the Gates Foundation -Co-Moderator-…
S10
Keynote-Ankur Vora — “AI is not a leap into the unknown for India. It is the next chapter in a journey of building solutions that serve every…
S11
Towards a Safer South Launching the Global South AI Safety Research Network — -Ms. Chenai Chair- Director of the Masakane African Language Hub
S12
Responsible AI for Shared Prosperity — – Philip Thigo- Chenai Chair – Shekar Sivasubramanian- Chenai Chair
S13
Responsible AI for Shared Prosperity — – Shekar Sivasubramanian- Chenai Chair
S14
Responsible AI for Shared Prosperity — -Co-Moderator- Role/title not specified -Julie Delahanty- President of Canada’s International Development Research Cent…
S15
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-shared-prosperity — And I hope the idea is spreading and growing. Thank you. Thank Co-Moderator: you. We’ll now have a small… Changeover…
S16
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — So with that, I hope, provocative context setting, I am really grateful. On behalf of the Just Jobs Network, again, with…
S17
Philip Thigo named Kenya’s special envoy for technology — Philip Thigo, the Executive Director for Africa at Thunderbird School of Global Management, has been appointed as the Sp…
S18
Responsible AI for Shared Prosperity — -Philip Thigo- His Excellency Ambassador, Special Technology Envoy of the Government of Kenya
S19
Sustainable Capacity Building: Internet Governance in Africa — Mr Philip Thigo, Senior Adviser, Regional Bureau for Africa, UN Development Programme (UNDP)
S20
Responsible AI for Shared Prosperity — -Barbel Kofler- Parliamentary State Secretary to the Federal Minister for Economic Cooperation and Development of German…
S21
German-Asian AI Partnerships Driving Talent Innovation the Future — -Dr. Bärbel Kofler- Title: Parliamentary State Secretary to the Federal Ministry of Economic Cooperation and Development…
S22
Multistakeholder Partnerships for Thriving AI Ecosystems — -Bärbel Kofler- Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development, me…
S23
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — Edmon Chung:Yeah, I think it’s a great idea. In fact, I don’t know whether you intended it as an idea, but bringing up t…
S24
WS #119 AI for Multilingual Inclusion — – Supporting local chapters working in their languages 1. How to encourage local communities to produce better quality …
S25
Day 0 Event #261 Navigating Ethical Dilemmas in AI-Generated Content — Ernst Noorman: Thank you very much, Lei, and it’s always a pleasure to be together with the RNW media in an event, and I…
S26
Artificial intelligence (AI) and cyber diplomacy — Jovan Kurbalija:Vlada, just a quick journey through this pyramid. On computational power, many countries, and I would sa…
S27
Open Forum #26 High-level review of AI governance from Inter-governmental P — Speaker 1: Thank you. So just a couple of things I want to touch on. I think companies have significant responsibilit…
S28
AI, Data Governance, and Innovation for Development — Sade Dada: So, you know, getting to these areas is really, really complicated, very, very challenging, and it’s because …
S29
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Second, entrepreneurship enabled by AI’s accessibility features. Voice activation and local language models can overcome…
S30
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups. Stakeh…
S31
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — **Ernst Noorman**, Cyber Ambassador for the Netherlands and co-chair of the FOC Task Force on AI and Human Rights, share…
S32
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — – Natasha Crampton- Vukosi Marivate Crampton advocates for integrating assurance from the beginning of system developme…
S33
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Tatiana Tropina:I must admit here that I cannot say that I cannot speak for global south, which is global majority, righ…
S34
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — I think we should, let’s not talk about Saudi Arabia or India for a moment, but let’s just talk about the global north a…
S35
AI as critical infrastructure for continuity in public services — Inclusive participation of all stakeholders (government, civil society, technical community, private sector) breeds legi…
S36
WS #219 Generative AI Llms in Content Moderation Rights Risks — ### The Low-Resource Language Crisis Dhanaraj Thakur provided extensive analysis of how language inequities create syst…
S37
Welfare for All Ensuring Equitable AI in the Worlds Democracies — “if a model or system is primarily prepared to perform well in high resource languages, but not in low resource language…
S38
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — ### Current Policy Landscape ### Infrastructure and Capacity Constraints **Additional speakers:** Ashana Kalemera: Mu…
S39
Smart Regulation Rightsizing Governance for the AI Revolution — The panelists identified several promising areas for cooperation, including technical standards through frameworks like …
S40
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — European contexts focus heavily on regulatory compliance and managing cultural resistance within established bureaucraci…
S41
Workshop 2: The Interplay Between Digital Sovereignty and Development — ## Cultural and Linguistic Dimensions **Anton Barberi** from the Organisation Internationale de la Francophonie expande…
S42
Ministerial Roundtable — ### Cultural and Linguistic Considerations
S43
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Hannah Taieb:Real diversity is very important indeed, and it all depends on the models and business models. Algorithms a…
S44
Non-regulatory approaches to the digital public debate | IGF 2023 Open Forum #139 — Addressing harmful content online requires a multidimensional approach that takes into account linguistic nuances, cultu…
S45
How Submarine Cables Enhance Digital Collaboration | IGF 2023 Town Hall #80 — In conclusion, the analysis showcases the immense potential of submarine cables across the Arctic. These cables offer a …
S46
Panel Discussion: 01 — Concrete impact stories / use cases
S47
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — Sama Mbang: Thank you very much, but you can hear me, right? It’s okay. It’s okay. Yeah. Okay. Yeah. Thank you very much…
S48
Scaling Multistakeholder Partnerships: Connectivity and Education — However, a glimmer of hope is evident in the formation of public policies directed towards bridging these gaps. The alli…
S49
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Necessity of multi-stakeholder collaboration and partnerships
S50
About the Commission — Typically (and necessarily in jurisdictions where State aid rules govern this form of intervention), the pub…
S51
Switzerland: — All these gaps demand a strategic approach and indicate the need for cooperation among various stakeholders in…
S52
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — Development | Economic | Future of work Survey data showing barriers: lack of advanced skills (46%), poor internet infr…
S53
TABLE OF CONTENTS — The Policy therefore aims to address ICT infrastructure and other ecosystem gaps through the use of several policy instr…
S54
Regional Leaders Discuss AI-Ready Digital Infrastructure — This shifted the discussion from viewing regulation as a barrier to seeing it as an enabler of competitive advantage. It…
S55
Panel Discussion Inclusion Innovation &amp; the Future of AI — Treat compute infrastructure as critical national infrastructure requiring government investment and protection
S58
Responsible AI for Shared Prosperity — “The research and development capability, which I was in the first instance, and that was an amazing initiative because …
S59
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — I mean, the potential is so immense. We have not even scratched the surface, not even the tip of the iceberg we have tou…
S60
Ateliers : rapports restitution et séance de clôture — Joseph Nkalwo Ngoula Merci. C’est toujours difficile de restituer la parole d’experts de haut vol. sans courir le risque…
S61
Inclusive AI_ Why Linguistic Diversity Matters — Inclusivity, Language Coverage, and Cultural Preservation
S62
WS #119 AI for Multilingual Inclusion — AI technology can be used to preserve and protect endangered languages. This helps maintain cultural heritage and ensure…
S63
WS #254 The Human Rights Impact of Underrepresented Languages in AI — Market forces play a significant role in driving AI development, often favoring dominant languages like English. However…
S64
WS #219 Generative AI Llms in Content Moderation Rights Risks — ### The Low-Resource Language Crisis Dhanaraj Thakur provided extensive analysis of how language inequities create syst…
S65
Advancing Scientific AI with Safety Ethics and Responsibility — -Balancing Open Science with Security: Panelists explored the challenge of preserving open science benefits while preven…
S66
OpenAI explains approach to privacy, freedom, and teen safety — OpenAI has outlined how itbalances privacy, freedom, and teen safetyin its AI tools. The company said AI conversations o…
S67
DC-CIV & DC-NN: From Internet Openness to AI Openness — Vint Cerf suggests that AI governance should concentrate on regulating specific applications and their associated risks,…
S68
WS #31 Cybersecurity in AI: balancing innovation and risks — Melodena Stephens: So thank you for the question. I think it’s a complex one. So let me start from the top. If you loo…
S69
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — ### Infrastructure and Capacity Constraints ### Infrastructure and Financing Audience: Good evening, everyone. Is it? …
S70
AI in Africa: Beyond the algorithm — Development | Infrastructure | Data governance She states ‘We’re building the data backbone for the global south. Not j…
S71
African AI: Digital Public Goods for Inclusive Development | IGF 2023 WS #317 — It is noted that the lack of proper data infrastructure can hinder the development and use of AI, especially in contexts…
S72
Global cyber capacity building efforts — Moctar Yedaly:Thank you, Martin. And thank you for the previous speakers. As I see in America, it’s very hard to follow,…
S73
Leaders TalkX: Local to global: preserving culture and language in a digital era — Cultural diversity | Development | Legal and regulatory Policy Requirements for Cultural Preservation Summary of sessi…
S74
DIPLOFOUNDATION UNIVERSITY OF MALTA — An important sociocultural issue is the shaping of content policy. On a cultural level, the advantages for preservation …
S75
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — In addition to the role of the internet, government and community support are crucial for the promotion and preservation…
S76
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Low to moderate disagreement level with high strategic significance. While speakers agreed on fundamental goals of lingu…
S77
The mismatch between public fear of AI and its measured impact — Looking at real-world use cases helps clarify the mismatch.
S78
Skilling and Education in AI — The conversation began with a Professor’s detailed analysis of four critical sectors where AI can drive substantial impa…
S79
Global AI Governance: Reimagining IGF’s Role &amp; Impact — Paloma Lara-Castro: Thank you, Liz. Hi, everyone. Thank you for the space. I’m representing Derechos Digitales. We are a…
S80
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Specific use case priorities and resource allocation across different sectors (healthcare, education, agriculture, manuf…
S81
Responsible AI for Shared Prosperity — This discussion brought together international government officials, technology leaders, and development organisations t…
S82
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Lacina Kone’s observation that “Africa is not looking for the most powerful AI, it’s looking for the most useful one” re…
S83
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S84
Smart Regulation Rightsizing Governance for the AI Revolution — The panelists identified several promising areas for cooperation, including technical standards through frameworks like …
S85
Toward Collective Action_ Roundtable on Safe & Trusted AI — Continued scholarship programs prioritizing African women through the Women in Focus series Launch of the African Compu…
S86
Workshop 2: The Interplay Between Digital Sovereignty and Development — ## Cultural and Linguistic Dimensions **Anton Barberi** from the Organisation Internationale de la Francophonie expande…
S87
Leaders TalkX: Local Voices, Global Echoes: Preserving Human Legacy, Linguistic Identity and Local Content in a Digital World — Fostering cultural and linguistic diversity helps in preserving human legacy Local content creation emerges as a pivota…
S88
La découvrabilité des contenus numérique: un facteur de diversité culturelle et de développement (Délégation Wallonie-Bruxelles, Belgian Mission to the UN in Geneva) — Furthermore, language accessibility is impacted when content in certain languages is less discoverable. This raises conc…
S89
How Multilingual AI Bridges the Gap to Inclusive Access — Cultural preservation, sovereignty, and ethical considerations
S90
Panel Discussion: 01 — Concrete impact stories / use cases
S91
How nonprofits are using AI-based innovations to scale their impact — And then the fourth motor we switch on, we call impact evaluation, and that’s when you have tens of thousands, hundreds …
S92
The future of Digital Public Infrastructure for environmental sustainability — Yolanda Martinez:Yes, definitely. First of all, congratulations. I thoroughly agree that it’s not easy to put together t…
S93
https://dig.watch/event/india-ai-impact-summit-2026/how-the-global-south-is-accelerating-ai-adoption_-finance-sector-insights — We joke that we shouldn’t worry about AI until we figure out AV. So I guess this is a perfect example of that. Thanks fo…
S94
WSIS Action Line C7 E-environment: Milestones, challenges and future directions — David Jensen:Sure, thank you very much, happy to be here. You’ll notice I’m not Sally Radwan. Sally Radwan is UNEP’s Chi…
S95
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Development banks and assistance programs filling market gaps where private investment is insufficient
S96
DPI+H – health for all through digital public infrastructure — Philippe Veltsos:Thank you very much, Lori. Good morning, everybody. Happy Friday, even though for some of you it’s rain…
S97
About the Commission — Typically (and necessarily in jurisdictions where State aid rules govern this form of intervention), the pub…
S98
Scaling Multistakeholder Partnerships: Connectivity and Education — However, a glimmer of hope is evident in the formation of public policies directed towards bridging these gaps. The alli…
S99
WS #225 Bridging the Connectivity Gap for Excluded Communities — Market failures require public investment and public-private alliances with greater community participation
S100
Press Conference: Closing the AI Access Gap — The governance, alongside the talent, the compute, the infrastructure, is an enabler of responsible innovation
S101
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Okay, two big questions. Thank you. So, as you mentioned, we launched Current AI last year. We’ll be launching just this…
S102
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S103
Panel Discussion Inclusion Innovation &amp; the Future of AI — Treat compute infrastructure as critical national infrastructure requiring government investment and protection
S104
UK and India forge new tech security partnership — Britain hasinitiateda new technology security partnership with India, aiming to boost economic growth and collaboration …
S105
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — Ciyong Zou: Thank you. Thank you very much, moderator. Distinguished representatives, ladies and gentlemen, good afterno…
S106
The Foundation of AI Democratizing Compute Data Infrastructure — Language diversity creates enormous scope of work with over 2,000 documented African languages
S107
Reviewing Global Governance Capacity Development and Identifying Opportunities for Collaboration — Companies are also investing in joint ventures, research hubs and start-up incubators in partnership with universities a…
S108
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — Sure. Thanks. Thanks for your question. I think this builds on actually the last couple of comments. I mean, what we’re …
S109
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Once it fails, the community is not going to believe it. So it’s very important that whatever we put in place work with …
S110
Tightening the interconnectedness of ICT, Digitalization and Industry 4.0 to accelerate Economic growth and industrialization in developing countries — Adel BEN YOUSSEF:Thank you very much, Sama. I’m going to provide some insights from the field and focusing on Africa bec…
S111
Signature Panel: Building Cyber Resilience for Sustainable Development by Bridging the Global Capacity Gap — Morocco:Thank you, Mr. President. Thank you, Mr. President. Thank you, Chair. I have the honor to speak to deliver the f…
S112
Morocco announces upcoming Digital Strategy 2030 at Gitex Africa 2024 — At Gitex Africa 2024 in Marrakech, Head of Government Aziz AkhannouchrevealedMorocco’s Digital Strategy 2030, a result o…
S113
Enhancing rather than replacing humanity with AI — Right now, amid valid concerns about displacement, manipulation, and loss of human agency, there are also real examples …
S114
AI and the moral compass: What we can do vs what we should do — If technology reshapes what we can do, moral education must reshape how we decide. Ethics cannot be outsourced to compli…
S115
Optimism for AI – Leading with empathy — Nicholas Thompson frames the present as a pivotal moment where AI development could take fundamentally different paths b…
S116
Open Forum #43 African Union Open Forum Advancing Digital Governance and Transformation — Maktar Sek: Thank you, Adil. And good morning to everyone. Good morning, P.S. Honorable Minister, distinguished delegate…
S117
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Diana Nyakundi:Yeah, thanks Fadi. So with regards to opportunities, there are a lot of AI pilot projects that are coming…
S118
Global AI Policy Framework: International Cooperation and Historical Perspectives — If we are talking about oral culture, it wouldn’t be a data problem because it has primarily historically been an oral c…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Co-Moderator
1 argument · 41 words per minute · 328 words · 477 seconds
Argument 1
Framing question: AI’s daily impact highlights need for local language support
EXPLANATION
The co‑moderator points out that large language models are already affecting everyday life and asks how AI in local languages can shape digital development, especially given Africa’s linguistic diversity. This frames the discussion around the necessity of language‑specific AI solutions.
EVIDENCE
The co-moderator asks, “how do you see AI in local languages shaping the next phase of your country’s digital development… how does this really work on the ground?” highlighting the estimated 2,000 African languages and the relevance of daily AI use [19-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Local language support is emphasized as key for digital inclusion, echoed in discussions on community networks and multilingual internet initiatives [S23] and efforts to support local language chapters [S24].
MAJOR DISCUSSION POINT
Linguistic Inclusion and Representation in AI
AGREED WITH
David Lammy, Philip Thigo, Chenai Chair, Shekar Sivasubramanian, Barbel Kofler, Ankur Vora, Co‑Moderator
David Lammy
5 arguments · 110 words per minute · 1032 words · 557 seconds
Argument 1
AI should be safe, inclusive, and equitable for all linguistic communities
EXPLANATION
Lammy stresses that AI must follow a path that benefits humanity, emphasizing safety, inclusivity, and equity for every linguistic group. He contrasts this with a scenario where AI widens inequality.
EVIDENCE
He describes two possible futures for AI: one that “takes power and opportunity away from people” and another that “uses AI as a force for good… a safe AI, an inclusive AI and importantly an equitable AI for everyone” [13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Lammy’s call for safe, inclusive, equitable AI aligns with his statements in the Responsible AI for Shared Prosperity briefing and with broader responsible AI governance frameworks [S1][S30].
MAJOR DISCUSSION POINT
Linguistic Inclusion and Representation in AI
AGREED WITH
Philip Thigo, Chenai Chair, Shekar Sivasubramanian, Barbel Kofler, Ankur Vora, Co‑Moderator
Argument 2
UK is funding Africa’s first public‑sector AI compute cluster at the University of Cape Town
EXPLANATION
Lammy announces a UK investment to create the continent’s first dedicated public‑sector AI compute facility, aiming to give African researchers the hardware needed for AI development. The cluster will be hosted at the University of Cape Town.
EVIDENCE
He states, “we’re investing in Africa’s first dedicated public sector AI computer cluster at the University of Cape Town” and notes that “Too many African researchers are held back by costs and a lack of access” [3-4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UK investment in a public-sector AI compute cluster at UCT was announced in the same briefing and reflects broader calls for African compute infrastructure [S1][S26].
MAJOR DISCUSSION POINT
Compute Infrastructure and Capacity Building
AGREED WITH
Natasha Crampton, Julie Delahanty, Philip Thigo, Ankur Vora
Argument 3
AI for Development programme coordinated with Gates Foundation, Canada, Germany, Sweden, and GSMA
EXPLANATION
Lammy outlines the AI for Development programme as a collaborative effort involving multiple governments and foundations, designed to align investments and accelerate AI initiatives across Africa and Asia. The partnership includes the Gates Foundation, IDRC, and several national governments.
EVIDENCE
He describes the programme as “launched… in partnership with Canada’s International Development Research Centre… coordinated with the Gates Foundation, the governments of Germany, Japan and Sweden, as well as Community Jameel” and notes a partnership with the GSMA Foundation supporting start-ups [8-9].
MAJOR DISCUSSION POINT
Funding, Partnerships, and Public‑Good Investment
AGREED WITH
Ankur Vora, Barbel Kofler, Julie Delahanty, Chenai Chair, Natasha Crampton
Argument 4
Torn AI creates voice interfaces for low‑literacy rural users to access digital and financial services
EXPLANATION
Lammy highlights Torn AI, a Moroccan start‑up that builds voice‑based interfaces in local dialects, enabling low‑literacy rural populations to interact with digital and financial platforms through spoken commands. This exemplifies AI tailored to linguistic needs.
EVIDENCE
He mentions “Torn AI in Morocco, which creates voice interfaces to local dialects to help low-literacy rural users access digital and financial services through simple spoken interactions” [10-11].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voice-based AI for low-literacy users is highlighted as a way to unlock rural entrepreneurship and financial access in recent analyses of multilingual AI applications [S29].
MAJOR DISCUSSION POINT
Practical Applications and Use Cases for Development
AGREED WITH
Chenai Chair, Shekar Sivasubramanian, Ankur Vora, Natasha Crampton, Julie Delahanty
Argument 5
Responsible AI governance is needed to protect rights and ensure inclusive outcomes
EXPLANATION
Lammy argues that AI must be governed responsibly to safeguard human rights and ensure that AI systems reflect the lived realities of diverse populations. Governance mechanisms are essential for inclusive and equitable AI deployment.
EVIDENCE
He notes that the AI for Development programme includes “responsible AI governance, to protect rights and to ensure AI reflects the realities of people’s lives across the region” [7-8] and reiterates the need for a “safe AI, an inclusive AI and importantly an equitable AI” [13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for responsible AI governance to protect rights are reinforced by global human-rights-focused AI governance guidelines and inclusive participation principles [S30][S35].
MAJOR DISCUSSION POINT
Governance, Ethics, and Sustainable Impact
Natasha Crampton
3 arguments · 153 words per minute · 544 words · 212 seconds
Argument 1
Compute enables language‑aware AI and requires testing with local speakers
EXPLANATION
Crampton explains that high‑performance compute is the key enabler for adapting AI models to local languages and cultural contexts, and that both model training and user testing demand substantial compute resources. Without it, language‑aware AI cannot be reliably deployed.
EVIDENCE
She states, “compute is the enabler of making language and culturally aware AI… testing it with local language speakers… also takes compute… day-to-day use of this technology also requires computing power” [232-247].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Crampton stresses that compute underpins language-aware AI and that testing with local speakers is essential, as reflected in her remarks on safe AI and assurance practices [S6][S32].
MAJOR DISCUSSION POINT
Linguistic Inclusion and Representation in AI
AGREED WITH
David Lammy, Julie Delahanty, Philip Thigo, Ankur Vora
Argument 2
AI diffusion is twice as fast in the Global North; compute gaps must be closed
EXPLANATION
Crampton presents analysis showing that AI adoption is occurring at roughly double the speed in the Global North compared with the Global South, underscoring the urgency of closing compute gaps through partnerships and infrastructure investment.
EVIDENCE
She notes, “Our own analysis shows that we have AI diffusing in the global north at roughly double the rate that we have it diffusing in the global south… we need partnerships like these in order to start to put the right infrastructure in place to close that gap” [225-227].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies show AI adoption is roughly twice as fast in the Global North, underscoring the compute gap that must be addressed [S34][S22].
MAJOR DISCUSSION POINT
Compute Infrastructure and Capacity Building
AGREED WITH
David Lammy, Ankur Vora, Barbel Kofler, Julie Delahanty, Chenai Chair
Argument 3
Trustworthy AI requires rigorous building, testing, and deployment with local stakeholder involvement
EXPLANATION
Crampton stresses that trustworthy AI depends on careful development, extensive testing with local users, and responsible deployment, ensuring that AI reflects the multicultural and multilingual reality of its users. This approach aligns with Microsoft’s commitment to ethical AI.
EVIDENCE
She remarks that “trustworthy AI requires rigorous building, testing, and deployment with local stakeholder involvement… we do take all of those steps to represent the world as it is multicultural, multilingual and deeply interconnected” and repeats the need for testing and compute throughout [249-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Her emphasis on rigorous building, testing and stakeholder involvement matches discussions on assurance from development through deployment [S6][S32].
MAJOR DISCUSSION POINT
Governance, Ethics, and Sustainable Impact
Ankur Vora
3 arguments · 150 words per minute · 415 words · 165 seconds
Argument 1
Language is a public good; market forces ignore low‑resource languages
EXPLANATION
Vora argues that language resources are a public good that the market fails to provide because private investment focuses on high‑return languages like English and Mandarin. He calls for collective public‑good investment to fill this gap.
EVIDENCE
He explains that “markets are broken… private sector companies are investing in models developed in English and Mandarin… the economics don’t work for the small resource languages… funders can get together and invest in public goods” [205-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vora’s framing of language as a public good and market failure for low-resource languages is echoed in analyses of low-resource language crises and market dynamics [S10][S28][S36].
MAJOR DISCUSSION POINT
Linguistic Inclusion and Representation in AI
AGREED WITH
David Lammy, Philip Thigo, Chenai Chair, Shekar Sivasubramanian, Barbel Kofler, Co‑Moderator
Argument 2
Public‑good investment is needed because markets fail to provide compute for low‑resource languages
EXPLANATION
Vora emphasizes that when market mechanisms do not supply compute resources for under‑served languages, public‑good funding from governments and foundations must step in to ensure equitable AI development.
EVIDENCE
He states, “When markets are broken, funders can get together and invest in public goods… that’s what we are doing right now… UK government, Canadian government, the Japanese, Microsoft, IDRC… we need to invest in public goods because these markets are broken” [212-215].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for public-good funding to supply compute for low-resource languages is reinforced by market-failure assessments and low-resource language challenges [S10][S28].
MAJOR DISCUSSION POINT
Compute Infrastructure and Capacity Building
Argument 3
Collaborative public‑good funding addresses market failure in low‑resource language AI
EXPLANATION
Vora points out that coordinated funding from multiple donors and partners creates a public‑good model that compensates for market failures, enabling the development of AI for low‑resource languages.
EVIDENCE
He notes that “we are all sitting at this panel… we are making the point that we need to invest in public goods because these markets are broken… UK government, Canadian government, the Japanese, Microsoft, IDRC… all getting together” [212-215].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaborative donor funding to address market gaps aligns with Vora’s description of multi-partner public-good investments [S10][S28].
MAJOR DISCUSSION POINT
Funding, Partnerships, and Public‑Good Investment
Chenai Chair
5 arguments · 166 words per minute · 954 words · 343 seconds
Argument 1
Masakane hub targets 50 major African languages, building community‑governed data and tools
EXPLANATION
The chair describes the Masakane African Language Hub’s ambition to impact one billion Africans by focusing on the 50 most spoken languages, creating high‑quality data, tools, and benchmarks that reflect linguistic diversity. The effort is community‑driven and aims at economic, health, and social benefits.
EVIDENCE
She states the hub’s goal “to impact 1 billion Africans through 50 of the most spoken languages with relevant AI tools… preserving and capturing the evolution of African languages” and explains the four pillars of work, especially data expansion and high-quality datasets built from the JW300 Bible set [54-55][60-66].
MAJOR DISCUSSION POINT
Linguistic Inclusion and Representation in AI
AGREED WITH
David Lammy, Philip Thigo, Shekar Sivasubramanian, Barbel Kofler, Ankur Vora, Co‑Moderator
Argument 2
Lingua Africa is a multi‑partner open‑core initiative with Microsoft, Gates, and the UK to create community‑governed language infrastructure
EXPLANATION
She announces Lingua Africa as an open‑core, multi‑partner project that will develop community‑governed language infrastructure, focusing on targeted data collection, model development, and pathways for deployment in key sectors such as health, education, and agriculture.
EVIDENCE
She explains that “Lingua Africa will be a multi-partner open core… with Microsoft AI for Good and the Gates Foundation… will directly enable real-world AI applications… model development, targeted data collection, and support strong pathways for deployment and adoption” [170-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Lingua Africa is described as a multi-partner open-core effort involving Microsoft, the Gates Foundation and the UK government to build community-governed language infrastructure [S1].
MAJOR DISCUSSION POINT
Funding, Partnerships, and Public‑Good Investment
Argument 3
Grants and ecosystem partnerships sustain the Masakane community and its projects
EXPLANATION
The chair outlines how funding is allocated across four pillars—data, research, innovation, and sustainability—to build the ecosystem, create use cases, and ensure long‑term institutional capacity for African‑led AI. A portion of the budget is earmarked for concrete applications.
EVIDENCE
She notes “we specifically focus on four pillars… expanding and diversifying high quality data… research… innovation… 40 % of the funding will go to creating use cases… institutional capacity building for the African NLP community… sustainability beyond the Masakane community” [60-66][70-76].
MAJOR DISCUSSION POINT
Funding, Partnerships, and Public‑Good Investment
AGREED WITH
David Lammy, Ankur Vora, Barbel Kofler, Julie Delahanty, Natasha Crampton
Argument 4
Project Echo delivers gender‑responsive AI tools in African languages to boost women’s economic empowerment and health
EXPLANATION
Project Echo is presented as a gender‑responsive intervention that creates AI‑driven services in African languages, aiming to improve women’s economic opportunities and health outcomes, thereby addressing gender inequality in the continent.
EVIDENCE
She describes “Project Echo… a gender responsive intervention… will provide relevant use cases in African languages that lead to impact on women’s economic empowerment and health” [71-74].
MAJOR DISCUSSION POINT
Practical Applications and Use Cases for Development
AGREED WITH
David Lammy, Shekar Sivasubramanian, Ankur Vora, Natasha Crampton, Julie Delahanty
Argument 5
Gender‑responsive interventions and long‑term sustainability are central to community‑led AI
EXPLANATION
The chair emphasizes that AI projects must be gender‑responsive and built with sustainability in mind, ensuring that benefits persist beyond initial funding and that communities retain ownership of the technology.
EVIDENCE
She highlights the gender-responsive nature of Project Echo and discusses “thinking about sustainability… institutional capacity building… businesses coming up from open source models… African-led AI built for impact” [71-76].
MAJOR DISCUSSION POINT
Governance, Ethics, and Sustainable Impact
Shekar Sivasubramanian
4 arguments · 166 words per minute · 637 words · 229 seconds
Argument 1
AI applications must be multilingual and directly useful to users, integrating language into design
EXPLANATION
Sivasubramanian explains that inclusive AI design starts with supporting multiple languages (14‑16 in India) and ensuring applications address real needs across rural‑urban divides. Utility and cultural relevance are central to adoption.
EVIDENCE
He says “the very first design principle… is the ability to be inclusive… dimensions of population we are looking at are language… you start with at least 14 to 16 languages… applications must be useful… utility value sits at the heart of everything we do” [86-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The principle of designing AI first for multiple languages and utility mirrors recommendations for local-language chapters and multilingual inclusion initiatives [S24].
MAJOR DISCUSSION POINT
Linguistic Inclusion and Representation in AI
AGREED WITH
David Lammy, Philip Thigo, Chenai Chair, Barbel Kofler, Ankur Vora, Co‑Moderator
Argument 2
Multilingual disease‑surveillance system monitors health events across 16 Indian languages
EXPLANATION
He describes a health‑monitoring system that ingests news articles in 16 languages, runs every four hours, and alerts authorities to disease outbreaks, demonstrating the practical impact of multilingual AI.
EVIDENCE
He details “media disease surveillance… picks up every article published in India… runs every four hours… 16 languages… tells the central government if it’s a disease outbreak what should you do” [94-99].
MAJOR DISCUSSION POINT
Practical Applications and Use Cases for Development
AGREED WITH
David Lammy, Chenai Chair, Ankur Vora, Natasha Crampton, Julie Delahanty
Argument 3
Oral reading fluency tool uses AI to assess and improve children’s reading in local languages
EXPLANATION
Sivasubramanian outlines a tool that records children reading aloud, uses AI to evaluate pronunciation and fluency, and provides teachers with data to group students and tailor instruction, thereby enhancing literacy in native languages.
EVIDENCE
He explains “we collect data from children… oral reading fluency… AI tells you what you read well, what you did not read well and assists the teacher to cohortize the students” [100-102].
MAJOR DISCUSSION POINT
Practical Applications and Use Cases for Development
Argument 4
AI solutions in health, education, and agriculture must be language‑appropriate to achieve impact
EXPLANATION
He reiterates that for AI to be effective in sectors such as health, education, and agriculture, solutions must be tailored to the linguistic realities of users, ensuring relevance and adoption.
EVIDENCE
He notes “the dimensions of population we are looking at are language… applications must be useful… utility value sits at the heart of everything we do” and gives examples across health, education, and agriculture [86-92].
MAJOR DISCUSSION POINT
Practical Applications and Use Cases for Development
Barbel Kofler
3 arguments · 144 words per minute · 391 words · 162 seconds
Argument 1
Inclusive, bias‑free data is essential for equitable AI outcomes
EXPLANATION
Kofler argues that AI can only be a game‑changer if the underlying data is inclusive and free from bias, noting that language bias reflects broader cultural neglect. She stresses the need for diverse, representative datasets.
EVIDENCE
She says “AI can only be really the game changer… if it is inclusive… starts with data and how biased data is… language is quite close to it… many languages neglected, dialects, cultures… we try to be part of that” [136-143].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of bias-free, inclusive data is highlighted in discussions of bias in automated welfare systems and the low-resource language crisis [S31][S36].
MAJOR DISCUSSION POINT
Linguistic Inclusion and Representation in AI
AGREED WITH
David Lammy, Philip Thigo, Chenai Chair, Shekar Sivasubramanian, Ankur Vora, Co‑Moderator
Argument 2
German partnership via Fair Forward supports data collection for multilingual services
EXPLANATION
Kofler describes Germany’s Fair Forward initiative, which collaborates with partner countries to gather multilingual datasets, enabling services to be delivered in citizens’ mother tongues and supporting inclusive digital public services.
EVIDENCE
She notes “We were starting in 2019… initiative called Fair Forward… working with partner countries like India on collecting data sets… offering service to citizens in their mother tongue in multilingual countries” [145-148].
MAJOR DISCUSSION POINT
Funding, Partnerships, and Public‑Good Investment
AGREED WITH
David Lammy, Ankur Vora, Julie Delahanty, Chenai Chair, Natasha Crampton
Argument 3
Addressing bias in language data is crucial for fair AI systems
EXPLANATION
Kofler emphasizes that bias in language data leads to exclusion of many dialects and cultures, and that confronting this bias is essential for building fair and equitable AI systems that respect cultural diversity.
EVIDENCE
She explains “if you talk about bias in data, language is quite close to it… we see many languages neglected… we really try to be part of it” [138-143].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Addressing language-data bias is identified as essential for fair AI in analyses of bias in automated decision-making [S31].
MAJOR DISCUSSION POINT
Governance, Ethics, and Sustainable Impact
Philip Thigo
3 arguments · 173 words per minute · 444 words · 153 seconds
Argument 1
Global South languages must be represented to prevent cultural extinction
EXPLANATION
Thigo warns that the absence of Global South languages in AI models threatens the survival of oral cultures, arguing that representation is essential to preserve cultural memory and avoid existential loss.
EVIDENCE
He states “The Global South has never lacked intelligence… what it has lacked is the power to define how that intelligence is recognized… because our entire culture’s values have been coined in language… the Global South is largely an oral civilization… current models lacking our language means our civilization is at risk, almost existential, to be extinct” [28-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The risk of cultural extinction due to missing Global South languages is underscored by the low-resource language crisis literature [S36][S37].
MAJOR DISCUSSION POINT
Linguistic Inclusion and Representation in AI
AGREED WITH
David Lammy, Chenai Chair, Shekar Sivasubramanian, Barbel Kofler, Ankur Vora, Co‑Moderator
Argument 2
Development of language models requires compute, talent, and research capacity
EXPLANATION
Thigo outlines that building effective language models for the Global South needs not only compute resources but also skilled talent and robust research infrastructure, which together constitute AI sovereignty.
EVIDENCE
He mentions “the second part… find the compute, the talent that then influences, develops the data… research and development capability… talent development is the first instance of sovereignty” [39-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building language models requires compute resources and skilled talent, as noted in discussions of African compute initiatives and talent development needs [S26][S28].
MAJOR DISCUSSION POINT
Compute Infrastructure and Capacity Building
AGREED WITH
Chenai Chair, Shekar Sivasubramanian, Natasha Crampton, Julie Delahanty, Ankur Vora
Argument 3
Building local talent and research capacity safeguards sovereignty over AI development
EXPLANATION
Thigo stresses that developing local research expertise and talent is the foundation of AI sovereignty for the Global South, ensuring that AI development remains under local control and reflects indigenous knowledge.
EVIDENCE
He notes “talent development is the first instance of sovereignty… we need to build research capacity and capability because talent development is the first instance of sovereignty” [40-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Developing local research talent is presented as a cornerstone of AI sovereignty in market-failure and capacity-building analyses [S28].
MAJOR DISCUSSION POINT
Governance, Ethics, and Sustainable Impact
Julie Delahanty
1 argument · 154 words per minute · 471 words · 183 seconds
Argument 1
African Compute Initiative will provide high‑performance GPUs, storage, and networking for African researchers
EXPLANATION
Delahanty describes the African Compute Initiative as a dedicated high‑performance computing cluster at the University of Cape Town, equipped with modern GPUs, fast storage, and networking, aimed at giving African public institutions the resources needed for cutting‑edge AI research.
EVIDENCE
She says “African Compute Initiative will be the first dedicated high-performance computing cluster for public institutions in Africa… based in South Africa at the University of Cape Town… will include modern GPUs, faster and better storage capacity and much faster networking” [267-271].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The African Compute Initiative’s provision of GPUs, storage and networking aligns with calls for dedicated African compute infrastructure [S26][S22].
MAJOR DISCUSSION POINT
Compute Infrastructure and Capacity Building
AGREED WITH
Philip Thigo, Chenai Chair, Shekar Sivasubramanian, Natasha Crampton, Ankur Vora
Agreements
Agreement Points
Compute infrastructure is essential for building and deploying AI models in African and Global South languages
Speakers: David Lammy, Natasha Crampton, Julie Delahanty, Philip Thigo, Ankur Vora
UK is funding Africa’s first public‑sector AI compute cluster at the University of Cape Town
Compute enables language‑aware AI and requires testing with local speakers
African Compute Initiative will provide high‑performance GPUs, storage, and networking for African researchers
Development of language models requires compute, talent, and research capacity
Public‑good investment is needed because markets fail to provide compute for low‑resource language AI
All speakers stress that without dedicated high-performance compute resources African researchers cannot train, test, or deploy language-specific AI models, making compute a foundational enabler for the initiative [3-4][232-247][267-271][39-40][212-215].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with the AI readiness assessments for Africa that highlight severe infrastructure and financing gaps and call for expanded compute access as a prerequisite for multilingual model development [S69][S70][S71].
Linguistic inclusion and representation of local languages are critical to prevent cultural loss and ensure equitable AI benefits
Speakers: David Lammy, Philip Thigo, Chenai Chair, Shekar Sivasubramanian, Barbel Kofler, Ankur Vora, Co‑Moderator
AI should be safe, inclusive, and equitable for all linguistic communities
Global South languages must be represented to prevent cultural extinction
Masakane hub targets 50 major African languages, building community‑governed data and tools
AI applications must be multilingual and directly useful to users, integrating language into design
Inclusive, bias‑free data is essential for equitable AI outcomes
Language is a public good; market forces ignore low‑resource languages
Framing question: AI’s daily impact highlights need for local language support
Speakers converge on the necessity of supporting African and other low-resource languages in AI to preserve cultural heritage, avoid bias, and deliver inclusive services, emphasizing that language diversity must be embedded in data, models, and governance [13][28-33][54-66][86-92][136-143][187-190][19-21].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of linguistic diversity and cultural preservation is documented in inclusive AI frameworks and IGF sessions on multilingual inclusion, which stress that under-represented languages must be covered to avoid cultural erosion and to deliver equitable services [S61][S62][S63][S64][S75].
Collaborative, multi‑donor public‑good funding and partnerships are required to address market failures and build sustainable AI ecosystems
Speakers: David Lammy, Ankur Vora, Barbel Kofler, Julie Delahanty, Chenai Chair, Natasha Crampton
AI for Development programme coordinated with Gates Foundation, Canada, Germany, Sweden, and GSMA
Public‑good investment is needed because markets fail to provide compute for low‑resource language AI
German partnership via Fair Forward supports data collection for multilingual services
African Compute Initiative will provide high‑performance GPUs, storage, and networking for African researchers
Grants and ecosystem partnerships sustain the Masakane community and its projects
AI diffusion is twice as fast in the Global North; compute gaps must be closed
All agree that no single actor can fill the gaps; coordinated funding from governments, foundations, and private sector is essential to create data, compute, and capacity as public goods, countering market neglect of low-resource languages [8-9][205-215][145-148][214-215][60-76][225-227].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy briefs on digital public infrastructure and sovereign AI call for multi-donor cooperation, capacity-building funds, and public-good financing to correct market failures in low-resource language AI [S56][S57][S58][S59][S69].
Developing concrete, domain‑specific use cases (health, education, agriculture, gender empowerment) is vital to demonstrate AI’s real‑world impact
Speakers: David Lammy, Chenai Chair, Shekar Sivasubramanian, Ankur Vora, Natasha Crampton, Julie Delahanty
Torn AI creates voice interfaces for low‑literacy rural users to access digital and financial services
Project Echo delivers gender‑responsive AI tools in African languages to boost women’s economic empowerment and health
Multilingual disease‑surveillance system monitors health events across 16 Indian languages
Language is a public good; market forces ignore low‑resource languages (implies need for impactful applications)
Compute enables language‑aware AI and testing with local speakers (supports deployment of use cases)
African Compute Initiative will provide resources essential for cutting‑edge AI work and applications
Speakers highlight that AI must move beyond research to tangible solutions in health, education, agriculture, and gender equity, showing that language-aware tools can directly improve livelihoods [10-11][71-74][94-102][194-199][245-247][267-271].
POLICY CONTEXT (KNOWLEDGE BASE)
Sector-focused AI impact studies and summit agendas repeatedly prioritize health, education, agriculture and gender-related applications as proof points for AI value in emerging economies [S78][S80][S69].
Building local capacity—talent, research expertise, and institutional strength—is fundamental for AI sovereignty and sustainable development
Speakers: Philip Thigo, Chenai Chair, Shekar Sivasubramanian, Natasha Crampton, Julie Delahanty, Ankur Vora
Development of language models requires compute, talent, and research capacity
Masakane hub targets 50 major African languages, building community‑governed data and tools (includes research pillar)
AI applications must be multilingual and directly useful to users, integrating language into design (implies capacity)
AI diffusion is twice as fast in the Global North; compute gaps must be closed (implies capacity building)
African Compute Initiative will provide high‑performance GPUs, storage, and networking for African researchers
Public‑good investment is needed because markets fail to provide compute for low‑resource language AI
All speakers stress that developing skilled researchers, data scientists, and institutional frameworks is essential to achieve AI sovereignty and ensure that AI solutions are locally owned and maintained [40-41][64-66][86-92][225-227][267-271][212-215].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building is a cornerstone of several strategic documents on AI sovereignty and digital development, emphasizing talent pipelines, research institutions and local expertise as essential for self-reliant AI ecosystems [S56][S57][S58][S59].
Similar Viewpoints
Both emphasize that AI must be governed responsibly with inclusive, bias‑free data to protect rights and ensure equitable outcomes for all language groups [13][136-143].
Speakers: David Lammy, Barbel Kofler
AI should be safe, inclusive, and equitable for all linguistic communities
Inclusive, bias‑free data is essential for equitable AI outcomes
Both argue that languages are a cultural public good whose preservation requires collective action beyond market mechanisms [28-33][187-190].
Speakers: Philip Thigo, Ankur Vora
Global South languages must be represented to prevent cultural extinction
Language is a public good; market forces ignore low‑resource languages
Both stress that multilingual AI design, backed by community‑driven data, is essential for creating useful applications that serve diverse language speakers [86-92][54-66].
Speakers: Shekar Sivasubramanian, Chenai Chair
AI applications must be multilingual and directly useful to users, integrating language into design
Masakane hub targets 50 major African languages, building community‑governed data and tools
Both highlight that high‑performance compute resources are the backbone for training, testing, and deploying language‑aware AI models in Africa [232-247][267-271].
Speakers: Natasha Crampton, Julie Delahanty
Compute enables language‑aware AI and requires testing with local speakers
African Compute Initiative will provide high‑performance GPUs, storage, and networking for African researchers
Unexpected Consensus
Both European (German) and Indian representatives stress multilingual AI for health and education despite different regional focuses
Speakers: Barbel Kofler, Shekar Sivasubramanian
Inclusive, bias‑free data is essential for equitable AI outcomes
Multilingual disease‑surveillance system monitors health events across 16 Indian languages
While Kofler discusses bias-free data from a European perspective, Sivasubramanian presents a concrete multilingual health surveillance system in India, showing a shared belief that multilingual AI is pivotal for health sector impact across continents [136-143][94-102].
Overall Assessment

The panel demonstrates strong convergence on five pillars: (1) the necessity of compute infrastructure; (2) the centrality of linguistic inclusion; (3) the need for collaborative public‑good funding; (4) the importance of real‑world, domain‑specific applications; and (5) capacity building for sustainable AI sovereignty.

High consensus – most speakers echo each other’s points, indicating broad political and technical agreement that coordinated investment in compute, data, talent, and multilingual use cases is essential to achieve inclusive, equitable AI for the Global South.

Differences
Different Viewpoints
Role of market versus public‑good funding for low‑resource language AI
Speakers: Ankur Vora, David Lammy, Shekar Sivasubramanian
Language is a public good; market forces ignore low-resource languages (Ankur Vora) [188-192][205-212]
State intervention is needed; the UK should not leave AI development solely to the marketplace (David Lammy) [218-219]
Private-sector innovation and risk-taking are essential to develop AI solutions (Shekar Sivasubramanian) [84-86][218-219]
Vora argues that markets are broken and only coordinated public-good investment can fill the gap for minority languages; Lammy stresses the importance of state-led funding and intervention; and Sivasubramanian highlights the role of private-sector innovation and risk-taking. Together they reveal a tension between publicly funded and private-sector-driven approaches to language AI development [188-192][205-212][218-219][84-86].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of low-resource language AI highlight a tension between market-driven development that favors dominant languages and public-service-oriented funding needed to address market failure [S63][S64][S69].
Primary lever for advancing African language AI – compute infrastructure versus data quality and bias mitigation
Speakers: Natasha Crampton, Julie Delahanty, Barbel Kofler, Chenai Chair
Compute is the enabler of language-aware AI and testing with local speakers (Natasha Crampton) [232-247]
African Compute Initiative will provide high-performance GPUs, storage and networking (Julie Delahanty) [267-271]
Inclusive, bias-free data is essential for equitable AI outcomes (Barbel Kofler) [136-143]
Four pillars focus on expanding high-quality data, research and sustainability (Chenai Chair) [60-66][70-76]
Crampton and Delahanty prioritize building compute capacity as the critical bottleneck for multilingual AI, whereas Kofler and the Masakane Chair stress that high-quality, bias-free data and ecosystem support are the foundational needs, revealing a split on whether hardware or data should be addressed first [232-247][267-271][136-143][60-66][70-76].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates in African AI forums contrast the need for compute resources with concerns over data scarcity, quality and bias, indicating that both infrastructure and data governance are contested levers for progress [S69][S70][S61][S64].
Framing of language preservation – cultural survival versus public‑good market failure
Speakers: Philip Thigo, Ankur Vora
Absence of Global South languages in AI models threatens cultural extinction (Philip Thigo) [28-33]
Language is a public good ignored by markets; collective investment is required (Ankur Vora) [188-192][205-212]
Thigo emphasizes the existential risk to oral cultures if languages are omitted from AI, framing the issue as cultural survival, while Vora frames the same challenge as a market failure that necessitates public-good investment, showing differing narratives for the same problem [28-33][188-192][205-212].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions frame language preservation both as a cultural-rights imperative and as a market-failure issue that requires public-good interventions, as reflected in inclusive AI literature and human-rights impact studies [S61][S62][S63][S73].
Unexpected Differences
Security‑focused personal AI use versus open promotion of AI tools
Speakers: Co‑Moderator, David Lammy, Shekar Sivasubramanian
Co-Moderator states personal caution, using a secure network and avoiding ChatGPT for security reasons (Co-Moderator) [20]
Lammy and other panelists discuss broad AI adoption and public-sector initiatives without highlighting security concerns (David Lammy) [13][218-219]
Sivasubramanian presents AI solutions for health, education and agriculture, assuming open deployment (Shekar Sivasubramanian) [84-86]
The Co-Moderator’s explicit concern about secure AI usage contrasts with the rest of the panel’s emphasis on scaling AI solutions, revealing an unexpected tension between personal data security considerations and the push for widespread AI deployment [20][13][84-86].
POLICY CONTEXT (KNOWLEDGE BASE)
The balance between open AI innovation and security/privacy safeguards is a recurring theme in responsible AI and cybersecurity policy debates, which advocate tiered access and application-specific governance rather than blanket openness [S65][S66][S67][S68].
Overall Assessment

The panel shows strong consensus on the importance of multilingual, inclusive AI for Africa and Asia, but diverges on how to fund, prioritize, and implement it—whether through public‑good investment versus private‑sector innovation, whether to focus first on compute infrastructure or on data quality, and how to frame the urgency of language preservation. These disagreements are moderate rather than fundamental, reflecting different strategic emphases rather than outright conflict.

Moderate disagreement: strategic and priority differences that could affect coordination and resource allocation, but they do not undermine the shared goal of equitable AI development.

Partial Agreements
All speakers concur that multilingual, inclusive AI is vital for development and cultural preservation, but they differ on the primary mechanisms—representation, data collection, application design, or compute infrastructure—to achieve that goal [13][28-33][54-55][60-66][86-92][232-247].
Speakers: David Lammy, Philip Thigo, Chenai Chair, Shekar Sivasubramanian, Natasha Crampton
AI must be safe, inclusive and equitable for all linguistic communities (David Lammy) [13]
Representation and existence of local languages in AI are essential (Philip Thigo) [28-33]
Masakane hub aims to impact 1 billion Africans through 50 major languages (Chenai Chair) [54-55][60-66]
AI applications must be multilingual and directly useful to users (Shekar Sivasubramanian) [86-92]
Compute enables language-aware AI and is required for testing and deployment (Natasha Crampton) [232-247]
Takeaways
Key takeaways
Linguistic inclusion is essential to prevent cultural extinction and to ensure AI benefits all communities, especially in the Global South.
The Masakane African Language Hub aims to support 50 major African languages through community‑governed data, research, innovation, and sustainability pillars, with a goal of reaching 1 billion Africans.
AI solutions must be multilingual and directly useful to end‑users, integrating language into design (e.g., voice interfaces for low‑literacy users, disease‑surveillance, oral‑reading tools).
Bias‑free, representative data and local testing are critical for trustworthy, equitable AI.
Market forces do not provide AI resources for low‑resource languages; public‑good investment and multi‑partner funding are required.
Compute capacity is a major bottleneck; the African Compute Initiative will create the first public‑sector high‑performance AI cluster in Africa (University of Cape Town).
Partnerships across governments, foundations, and industry (UK, Gates Foundation, Canada, Germany, Sweden, GSMA, Microsoft, IDRC) are being mobilised to fund language infrastructure, compute, and applied projects.
Specific projects such as Torn AI, Project Echo, multilingual disease‑surveillance, and oral‑reading fluency illustrate how language‑aware AI can drive health, education, agriculture, and economic empowerment.
Governance, ethics, gender‑responsive design, and long‑term sustainability are central to ensuring AI remains safe, inclusive, and equitable.
Resolutions and action items
Launch of Lingua Africa – a multi‑partner, open‑core, community‑governed language infrastructure initiative (UK, Gates, Microsoft, Masakane, etc.).
Commitment of multi‑million‑pound funding to Masakane for data collection, benchmark development, use‑case creation, and sustainability activities.
Support for four additional start‑ups (including Torn AI) through the GSMA Foundation partnership to develop responsible AI solutions for underserved populations.
Establishment of the African Compute Initiative – a dedicated high‑performance GPU cluster at the University of Cape Town for African public‑sector researchers.
Allocation of 40 % of Masakane funding to develop concrete use‑cases; initiation of Project Echo targeting gender‑responsive economic empowerment and health outcomes.
Development of an African speech‑and‑text benchmark to evaluate models in local contexts.
Commitments from partners (UK, Gates, Microsoft, IDRC, German Fair Forward) to provide ongoing financial, technical, and capacity‑building support.
Agreement to build research capacity and talent pipelines in Africa as a sovereign capability (as highlighted by Philip Thigo).
Unresolved issues
Scalable roadmap for extending support from the initial 50 languages to the full spectrum of 2,000+ African languages and dialects.
Long‑term financing model beyond the initial grant period to ensure sustainability of the Masakane hub and the compute cluster.
Detailed governance structure for the community‑governed Lingua Africa infrastructure and how decision‑making will be shared among stakeholders.
Specific mechanisms to guarantee equitable benefit‑sharing and sovereignty for local communities over the AI models and data they help create.
Technical strategies for collecting high‑quality data for extremely low‑resource dialects and for maintaining data privacy and ownership.
Concrete deployment and adoption plans for rural, low‑literacy users, including training, support, and monitoring of impact.
Metrics and monitoring frameworks to assess the social and economic impact of the announced projects.
Suggested compromises
Public‑good funding (government, foundations) is used to fill the market gap for low‑resource language AI, balancing private‑sector profit motives with societal needs. A multi‑partner collaboration model that pools resources from the UK, Gates Foundation, Microsoft, Germany, Canada, and others, sharing risk and expertise. Provision of a publicly accessible compute cluster alongside support for private‑sector start‑ups, ensuring open infrastructure while encouraging commercial innovation.
Thought Provoking Comments
We are actually in the age of intelligence… the Global South has never lacked intelligence, what it has lacked is the power to define how that intelligence is recognized, recorded, or transmitted. Because our entire culture’s values have been coined in language, the current models lacking our language means our civilization is at risk, almost existential, to be extinct.
Frames AI development as a civilizational issue rather than a purely technical one, highlighting the existential threat to oral cultures if their languages are excluded from AI models.
Set the tone for the panel, moving the conversation from a generic AI‑for‑development narrative to a deeper discussion about cultural survival, representation, and the urgency of building language‑specific models. It prompted other speakers to address concrete steps—compute, talent, and use‑cases—to prevent that extinction.
Speaker: Ambassador Philip Thigo
The Masakhane African Language Hub aims to impact 1 billion Africans through 50 of the most spoken languages, focusing on four pillars: data expansion, research & tooling, innovation (including Project Echo for gender-responsive economic empowerment), and sustainability through institutional capacity-building.
Provides a clear, structured roadmap that links language data work to gender equity, economic impact, and long‑term sustainability, moving beyond rhetoric to actionable strategy.
Introduced the gender‑lens and sustainability dimension, prompting follow‑up questions about concrete use‑cases and influencing later remarks on public‑good funding and compute infrastructure.
Speaker: Chenai Chair (Masakhane Hub representative)
Our first design principle at Wadhwani AI is inclusivity – we start with at least 14-16 languages, ensuring rural-urban divide is addressed, and we build applications that deliver real value, such as a multilingual disease-surveillance system and an oral-reading fluency tool for the poorest children.
Illustrates how multilingual AI can be embedded in essential public services, showing that language inclusion is not an abstract goal but a driver of tangible health and education outcomes.
Shifted the discussion from high‑level policy to concrete, scalable applications, reinforcing the need for localized data and prompting other panelists to discuss how compute resources and partnerships can support such deployments.
Speaker: Shekhar Sivasubramanian
Markets are broken – private sector invests in English and Mandarin models because that’s where the economics work. That doesn’t mean we should abandon low‑resource languages; funders must step in to create public‑goods for these languages.
Diagnoses the structural market failure that leaves many languages unsupported and frames public‑good investment as the solution, providing a rationale for the multi‑partner funding model.
Reoriented the conversation toward the economics of language AI, justifying the coalition of governments, NGOs, and tech firms, and leading to deeper discussion on how the Lingua Africa initiative operationalises this public‑good approach.
Speaker: Ankur Vora, Gates Foundation
Compute is the enabler of language‑aware AI. From data collection, model fine‑tuning, testing with local speakers, to day‑to‑day deployment, every step needs high‑performance compute. Trustworthy AI will be the best tool for computing, not the other way around.
Connects the abstract need for language diversity with the concrete technical requirement of compute infrastructure, emphasizing that without it, inclusive AI cannot be realised.
Bridged the earlier cultural and policy discussions with the technical reality of GPU clusters, reinforcing the importance of the African Compute Initiative and prompting agreement from other speakers about the necessity of shared infrastructure.
Speaker: Natasha Crampton, Microsoft
Two paths before us: AI can take power and opportunity away from people and divide us, or it can be a force for good to solve problems and uplift all of humanity.
Frames the entire dialogue as a moral choice, setting up the stakes for the subsequent discussion on language, compute, and equitable AI.
Provided a narrative anchor that kept the panel focused on the ethical implications of their work, influencing how each speaker positioned their contributions as part of the ‘force for good’ pathway.
Speaker: David Lammy
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved it from a generic announcement of funding to a nuanced debate about cultural survival, market failure, gender equity, and technical infrastructure. Ambassador Thigo’s existential framing forced the panel to treat language inclusion as a civilizational imperative. The Masakhane Hub’s four-pillar strategy and Wadhwani AI’s concrete multilingual applications grounded that imperative in actionable projects. Vora’s market-failure analysis justified the coalition of public-good funders, while Crampton’s emphasis on compute linked policy and cultural goals to the necessary technical backbone. Together, these comments created a coherent narrative: inclusive AI requires data, talent, compute, and sustained public investment, and without them, entire cultures risk erasure. The dialogue therefore progressed from high-level ideals to concrete, interdisciplinary solutions, highlighting the interdependence of language, infrastructure, and equitable policy.

Follow-up Questions
How can sustainable compute resources be provided to African researchers and institutions to support AI development?
Both highlighted the critical need for compute power—Philip discussed finding compute and talent, while Natasha emphasized compute as the enabler for language‑aware AI and the current diffusion gap between Global North and South.
Speaker: Philip Thigo, Natasha Crampton
What specific metrics, benchmarks, and evaluation frameworks should be used to assess African language models for accuracy, cultural relevance, and scenario awareness?
Chenai mentioned creating an African benchmark for speech and text, and Natasha stressed testing models with local speakers and scenario‑aware evaluation, indicating a need for concrete measurement standards.
Speaker: Chenai Chair, Natasha Crampton
How will the impact of gender‑responsive interventions like Project Echo be measured and validated in terms of women’s economic empowerment and health outcomes?
Project Echo was presented as a key gender‑focused use case, but the transcript did not detail impact metrics, prompting a need for systematic evaluation.
Speaker: Chenai Chair
What methods can be employed to identify and mitigate bias in language data, especially across diverse dialects and non‑standard language varieties?
Babel highlighted that biased data and dialect variation threaten inclusivity, suggesting research into bias detection and mitigation strategies is required.
Speaker: Babel Kofler
What are the cost differentials for accessing high‑performance compute in African contexts, and how can these costs be reduced or subsidized?
Julie referenced a study showing exponentially higher compute costs in Africa, indicating a need for detailed cost analysis and financing models.
Speaker: Julie Delahanty
How can public‑private partnerships ensure sovereignty and equitable benefit for local communities while leveraging private sector innovation?
Both raised concerns about balancing state intervention with private sector risk‑taking, emphasizing the importance of preserving community ownership and equitable outcomes.
Speaker: Natasha Crampton, David Lammy
What strategies are needed to scale AI solutions for low‑literacy, rural users (e.g., voice interfaces like Torn AI) across diverse African contexts?
David referenced Torn AI as an example but did not explore scaling mechanisms, indicating a gap in deployment strategy research.
Speaker: David Lammy
What approaches are required to build research capacity and talent pipelines in Africa to develop, curate, and maintain language data and models?
Philip noted talent development as the first instance of sovereignty, pointing to the need for systematic capacity‑building programs.
Speaker: Philip Thigo
How can AI models be made culturally and scenario‑aware to ensure relevance and usability in specific African contexts?
Natasha emphasized that language‑aware AI must also be scenario‑aware, suggesting research into contextual adaptation techniques.
Speaker: Natasha Crampton
What sustainable business models can support African‑led AI initiatives beyond grant funding to ensure long‑term viability?
Chenai discussed sustainability and institutional capacity building, indicating a need to explore revenue‑generating or self‑sustaining models.
Speaker: Chenai Chair
What is the optimal balance between long‑term deep research investment and short‑term utilitarian approaches to foster community‑led AI development?
Shekhar warned that theory differs from practice and advocated for sustained research investment, highlighting a strategic research planning gap.
Speaker: Shekhar Sivasubramanian
How can partnerships with countries like India enhance data collection and model development for low‑resource languages in the Global South?
Babel mentioned collaborations with India for data sets, suggesting a need to study cross‑regional partnership models and data sharing frameworks.
Speaker: Babel Kofler
What evaluation frameworks are needed to assess the deployment and adoption of language models in real‑world public service domains (health, education, agriculture)?
Both speakers stressed the importance of moving from lab prototypes to real‑world impact, indicating a research need for deployment assessment methodologies.
Speaker: Natasha Crampton, Chenai Chair
How will the African Compute Initiative’s effectiveness be measured in terms of research output, model training capacity, and broader AI ecosystem growth?
Julie described the initiative’s goals but did not specify impact metrics, pointing to a need for systematic evaluation of the compute cluster’s outcomes.
Speaker: Julie Delahanty

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit


Session at a glance: Summary, keypoints, and speakers overview

Summary

The discussion centered on how AI and digital platforms are being used to make Indian health care accessible regardless of a person’s zip code, emphasizing sustainable costs, preventive care, and early detection [1-3]. India’s unique advantage stems from high out-of-pocket spending that drives innovation, a rapidly expanding doctor-nurse workforce, and a talent pool of over 600,000 AI engineers [4]. Apollo Hospitals’ “Apollo 24-7” digital front door lets users purchase medicines, order diagnostics, store health records, and interact with an AI assistant, attracting more than 45 million users and nearly a million daily interactions [12-14].


Their AI ecosystem processes about 3.5 million API calls across five workstreams (clinical intelligence, doctor-workforce analytics, disease-risk scoring, multimodal imaging, and acute-care pathways) that together serve a population of 1.4 billion people [19-30]. An early-warning system linked to 2,000 critical-care beds predicts sepsis 24-48 hours before onset, illustrating potential life-saving impact [30-32]. Throughput optimization targets smarter billing, zero waiting times, and automated record capture, with 19 solutions gaining MDSAP approval and nine receiving FDA clearance [35-38]. The EASE framework guides ethical AI adoption, ensuring suitability and explainability for health-care workers [40-43].


Preventive initiatives include AI-embedded ultrasound that detects NAFLD (affecting 40% of Indian adults), enabling early intervention to avoid liver transplants [47-50]. Risk-scoring tools and an AI pre-diabetes algorithm have already served 450,000 individuals, with aspirations to reach 85 million diabetics [56-62]. Radiology collaborations, such as with Google, enable AI detection of tuberculosis and brain bleeds, facilitating rapid emergency diagnoses [63-66]. The Clinician Co-Pilot AI summarises records, saving 1-1.5 hours of physician time daily, while the Care Console integrates ICU, home, and rural monitoring to reduce staff burnout and improve decision-making [72-77].


Rural outreach extends these solutions via mobile vans for non-communicable disease and cancer screening, tele-ophthalmology, and data sharing with ASHA workers, demonstrating scalability beyond hospital walls and extensive validation efforts [79-82]. The speaker concluded by urging the creation of interconnected health systems that are predictive, preventive, personalized, participatory, and place-agnostic, calling for collaboration across public, private, research, and tech sectors to build a healthier future for all [91-98].


Keypoints


Major discussion points


A vision of democratized, AI-enabled health care across India – The speaker frames health care as a right not tied to zip code, highlighting India’s large out-of-pocket market, growing medical workforce, and a talent pool of over 600,000 AI engineers that together enable a new collaborative-care paradigm. The launch of “Apollo 24-7,” a digital front-door that lets users order medicines, store records, and interact with AI assistants, already serves 45 million users with about a million daily interactions. This scale is underpinned by a rapidly growing AI platform that has logged roughly 3.5 million API calls. [1-4][12-14][18-20]


Concrete AI applications that augment clinical practice – The organization has built a multi-layered AI stack: a clinical intelligence engine that gives doctors access to cumulative patient data; a decision-support system analyzing 20 million doctor records; disease-risk scoring for conditions such as cardiac disease, diabetes, and hypertension; multimodal imaging AI that interprets signals faster than any individual; an early-warning sepsis model that predicts onset 24-48 hours in advance for 2,000 ICU beds; and throughput-optimization tools that automate billing and record-population, saving up to 1.5 hours of clinician time per day. [21-24][27-33][34-37][64-66][72-74]


Ethical governance through the “EASE” framework – To ensure responsible AI use, the speaker introduces the EASE framework, which addresses ethical considerations, suitability of algorithms for specific clinical contexts, and explainability so that health-care workers can understand and trust AI outputs. [40-44]


Emphasis on preventive care and early disease detection – The talk stresses shifting resources from reactive, high-cost interventions to proactive screening. AI-embedded ultrasound is being used to detect NAFLD (affecting ~40 % of Indian adults) early enough to avoid liver failure; a pre-diabetes algorithm has already been applied to 450,000 individuals with the aim of reaching 85 million diabetics; and collaborations (e.g., with Google) enable AI-driven X-ray analysis for tuberculosis and rapid brain-bleed detection, illustrating how risk scoring and biomarker-based screening can reduce morbidity. [44-49][52-58][61-63]


Call for a collaborative, integrated health-system ecosystem – The speaker highlights ongoing rural outreach (mobile vans, tele-ophthalmology, ASHA-enabled screening), stresses the importance of rigorous validation to move pilots to mainstream, and envisions a future health system that links public and private sectors, primary and advanced care, research institutions, startups, and even drone logistics. This “flywheel” of data, AI, and partnership is presented as the pathway to a predictive, preventive, personalized, participatory, and place-agnostic health future for every village and city. [80-88][90-98]


Overall purpose / goal


The discussion is a strategic showcase aimed at demonstrating how Apollo Hospitals is leveraging AI, digital platforms, and a vast talent pool to make health care affordable, accessible, and preventive across India. It seeks to inspire confidence in the organization’s technological capabilities, outline concrete AI use cases, present an ethical framework, and rally stakeholders, from researchers to policymakers to industry partners, to collaborate in building an integrated, future-ready health system.


Overall tone


The speaker’s tone is consistently enthusiastic and visionary, punctuated by data-driven confidence when describing platform usage and AI outcomes. As the talk progresses, the tone shifts subtly from showcasing achievements to a more urgent, rally-calling stance, emphasizing the need for broader collaboration, validation, and systemic change to realize the “health systems of the future.” Throughout, the language remains optimistic and forward-looking, with a crescendo of collective responsibility toward the end.


Speakers

Speaker 1: Dr. Sangita Reddy – Role/Title: Joint Managing Director (Apollo Hospitals); Area of expertise: Healthcare leadership, AI-enabled health services, hospital administration and innovation.


Additional speakers:


(none)


Full session report: Comprehensive analysis and detailed insights

The speaker opened by asserting that health-care must be a right that does not depend on the postcode where a person is born, and that the system should be built around sustainable costs, preventive care and early detection [1-3]. She argued that India enjoys a unique strategic advantage: a large out-of-pocket spending base that drives innovation while keeping prices low, a rapidly expanding cadre of doctors and nurses, and a talent pool of more than 600,000 AI engineers [4]. This combination, she suggested, creates the conditions for a new collaborative-care paradigm in which technology can be leveraged at national scale [3-4].


To translate that vision into practice, Apollo 24-7 has been launched as a digital front-door that lets users purchase medicines, order diagnostics, store health records and interact with an AI-driven assistant [12-13]. The platform now serves over 45 million registered users and records close to one million daily interactions, evidence that the market is rewarding the digital approach [14]. Its rapid growth is underpinned by an AI platform that has already handled roughly 3.5 million API calls [19-20].


The AI platform is organised into five principal work-streams – a clinical-intelligence engine that supplies doctors with cumulative patient data [21-22]; a doctor-workforce analytics layer that analyses about 20 million records to guide clinical choices [23-24]; disease-prediction and risk-scoring models that identify high-risk groups for cardiac disease, diabetes, hypertension and other chronic illnesses across a 1.4-billion-person population [24-27]; multimodal imaging and signal-synthesis AI that extracts and synthesises body signals into causal interpretations for clinicians [27-29]; and an acute-care augmented pathway that connects roughly 2,000 critical-care beds to an early-warning system predicting sepsis 24-48 hours before onset, with potential scaling to 100,000 ICU beds [30-33]. A sixth capability, throughput optimisation, sits on top of these work-streams to automate billing, eliminate patient waiting times and auto-populate records, freeing clinicians to focus on patient interaction [34-37].


Collectively, these capabilities have attracted regulatory endorsement: 19 solutions have secured MDSAP approval and nine have received FDA clearance, reflecting a commitment to rigorous validation [38]. She added that Apollo is actively seeking partnerships to co-create new solutions, noting that “a thousand flowers can bloom” when the ecosystem collaborates [39].


Recognising the ethical challenges of AI, the speaker introduced the EASE framework (Ethical, Adoption, Suitability, Explainability). The framework mandates that every algorithm be ethically vetted, appropriately adopted for its clinical context, and fully explainable to health-care workers, ensuring trust and transparency [45-48].


Preventive-care tools include an AI-embedded ultrasound that detects non-alcoholic fatty liver disease (affecting roughly 40% of Indian adults) [47-50], a risk-scoring system that personalises lifestyle advice [51-57], and an AI-driven pre-diabetes predictor already validated on 450,000 users and poised to reach the nation’s 85 million diabetics [58-62]. In radiology, collaborations with Google have produced AI models that identify tuberculosis on chest X-rays and detect acute brain bleeds, enabling rapid emergency diagnosis [63-66].


To reduce clinician burden, the clinician co-pilot synthesises patient records and saves between one and one-and-a-half hours of doctor time each day [72-74]. A parallel nurse pilot and the integrated care console connect ICU, home and ward monitoring, extending early-warning capabilities beyond the hospital, decreasing staff burnout and saving millions of lives [75-77].


The speaker stressed that these innovations are not confined to metropolitan hospitals, highlighting that Apollo’s network now spans more than 1,100 towns and cities across India [85-86]. Mobile vans now deliver non-communicable-disease and cancer screening, tele-ophthalmology services reach remote villages, and data are shared with ASHA workers and district health authorities to accelerate diagnosis in low-resource settings [79-82]. She noted that Apollo is among the largest validators of AI solutions in India, a crucial step for moving pilots into mainstream practice [83-84].


Looking ahead, the vision expands from a “hospital of the future” to a “health-system of the future” that interlinks public and private providers, primary and advanced care, research institutions, universities and health-tech startups. This interconnected “flywheel” will continuously feed data into new predictive, preventive, personalised, participatory and place-agnostic algorithms, driving both health outcomes and economic productivity [90-94]. She concluded by urging all stakeholders to close skill and regulatory gaps, collaborate across sectors, and make high-quality, place-agnostic care accessible to every community [95-98].


Session transcript: Complete transcript of the session
Speaker 1

India and that your health care should not be defined by the zip code in which you’re born. It’s about sustainable costs and it’s about preventive care and early detection. It’s a new paradigm in collaborative care where I believe India has an advantage. This advantage is because we not only have one of the highest out-of-pocket payment and therefore we’re creating innovation and keeping our costs low, but also we’re growing more doctors, we’re training more nurses, and we have the largest talent pool of over 600,000 AI engineers. All this coming together to create something truly significant. But I’m not here to talk to you about technology. I’m here to share our story. And this story is about using the passion and the mission of bringing health care within the reach of people and using every tool possible to enable this to happen.

Dr. Sangita Reddy. I’m honored to say my father brought Apollo Hospitals when he returned from the U.S. almost 43 years ago to bring healthcare within the reach of people. Today, we’ve tried to embed and imbibe every technology, whether it’s surgical robots, the proton therapy, all kinds of treatment and curative capability. We’ve gone beyond to say we must find a way to not just use these machines, but also to connect with our customer. So Apollo 24-7, our digital front door, is actually, not only can you buy your medicines, order your diagnostics, store your health record, but also on Apollo Assist, ask queries, questions, get these answered, and then find ways.

And our market has rewarded us with the volumes that we see. Over 45 million users have come into this, and now we have close to a million users on a daily basis coming in to interact on this ecosystem. These records, these capabilities are getting enhanced every day because of the power of the communications that we have. But moving on, I think what is most important is that we’re not just in the big cities. We’re serving multiple PIN codes across the country and over 1,100 towns and cities. Moving across diverse methodologies, I just wanted to share with you quickly a few of the things that we’re doing in AI because this is the AI summit. And approximately now we have about 3.5 million API calls on our AI platforms.

These platforms we’ve divided into five areas. Number one is really our clinical intelligence engine so that a new doctor can have the knowledge and the capability of the cumulative data that we’re providing to the patient. And number two is the cumulative doctor workforce of about 20 million records analyzed. So this is our clinical decision support and our clinical intelligence engine. The next one is the disease prediction and the risk score, because we need to know in a population of 1.4 billion people, where do we focus? What should we do more? So this is the second work stream, and this goes across cardiac, diabetes, multiple others, including hypertension, but we’re also looking at embedded AI. The next and another critical one is taking images and signals, because the body is an amazing piece of machinery that continues to give us this messaging.

How do we pick this up, synthesize it smarter than any one individual can do, and bring this multimodal signaling into a causal interpretation to thereby enable the doctor. We also have acute care augmented pathways. About 2,000 of our critical care beds are connected with our early warning system, and there we are predicting the onset of sepsis 24 to 48 hours before it happens. Imagine if we could take this AI algorithm and put it into a hundred thousand ICU beds. Imagine the number of lives saved. So here I’m sharing these examples because I believe that the power of AI is directly proportionate to the impact that we can have on lives saved, disease prevented, cost reduction, and therefore talking about cost reduction, the final one is really throughput optimization.

How can you be smarter about billing? How can you ensure that your patient has zero waiting time, that the data capture is using ambient systems, therefore the doctor is able to look at the patient and talk to the patient and you’re doing auto-population of your records. Millions of these capabilities are coming together. We’ve collated them. We’re getting MDSAP approval on almost 19 of them, FDA approval for nine, and we’re looking for partnership to build because I believe in this space, a thousand flowers can bloom, and that there is deeper work to be done on the use of our blood bank and our biobank with genetic testing to move further into disease prediction, biomarkers. So these are just new dimensions opening up.

And I’m sharing more of the examples of how we’re working in these areas, but before I go into those, I want to talk about the EASE framework. I’m happy that our EASE framework has been published fairly extensively because it talks about the ethical considerations of the use of AI. It looks at adoption, the suitability of a certain algorithm within the area that it’s being used, and finally the explainability so that every healthcare worker is able to understand what they use in which environment and what the interpretation means. I believe this is a base framework that we need to put into every healthcare environment. Moving on is another area of deep passion, and that is that while we’re doing the highest end of surgeries, curative care, transplants, etc.

How much can we spend our time on health care prevention? Because for every life-saving intervention, for every 1,000 people screened, you will have 11 people where you have averted a major crisis. And therefore, the ability to look at proactive preventive care and get a lot more intuitive on the mechanism of biomarkers and early detection in cancer. We are working with the ultrasound company to do an embedded AI into the ultrasound machine so that we can pick up NAFLD, non-alcoholic fatty liver, of which 40% of the adult population of India is susceptible to. And if you can pick it up early, you can completely prevent a major crisis. If you find it late, these are candidates for liver transplant, a lot of pain and suffering, and some of them potential death.

So the interventions at the appropriate time using technology open up an entire… realm of what we can do differently in this world. I’m sharing now this aspect of how lifestyle changes risk reduction. All of you on Instagram are getting thousands of messages a day on what to eat, how to exercise, what to do better. But is it quantified? Is there a risk scoring? Do you understand the difference between a high-risk group and what they need to do to a low-risk group? But every single group, by understanding the risk profiling and the modifiable risk factors of these non-communicable diseases, can move into a healthy pattern. This has been studied in partnership with Solventum, the company with 3M, with definitive proof on the power of doing something like this.

We also have a significant product on AI prediabetes which we’ve used for a long time. We’ve used this algorithm over 450,000 people. But I would love to see the 85 million diabetics in our country using this to predict and to handle their diabetes better. I also want to move on to the fact that in radiology, because of the years of data and the teleradiology services that we do across the world, we are able to take these images, and here we’ve worked with Google on prediction of tuberculosis in a simple x-ray. We’re working with various other companies, whether it’s an early detection of a brain bleed.

So once somebody goes into the emergency room, you’re quickly able to diagnose these. Each one of these are amazing new factors which are coming in. This is a quick example of the clinician co-pilot. Because I’m running out of time, I’m not going to share this video, but basically… Okay, they are playing the video. Can we have some volume on this? Or I’ll click through, because we’re really running out of time. But basically what the clinician co-pilot does is it’s synthesizing the record so that you’re summarizing. We’re approximately saving… We’re saving one to one and a half hours per day of doctor time in the records. We’re now doing the nurse pilot. I’m moving now to reimagining the way patients are monitored, whether it’s the challenge of a misdiagnosis, the integrated solution, which is looking at Care Console, and the technology stack around this, which is connecting the command station with the ICUs, with home, and with connected wards.

And because of this, we’ve not only saved millions of lives, we’ve saved time for doctors, and this is connected even to external nursing homes in small rural areas. I believe this is a powerful solution where the current AI algorithm has multiple factors from antibiotic usage to early warning symptoms of sepsis, but there are potentially another hundred algorithms that we could add on to this to enhance the quality of decision-making. And share this further, enabling a safer patient care and also less burnout in our staff. I’ve been sharing lots of hospital-based examples, but I do want to say that many of the solutions are applicable in rural India. We’re running mobile vans, we’re doing non-communicable disease screening in small rural environments, we’re finding ways to do cancer screening, tele-ophthalmology screening, and sharing this data and enabling either the ASHA worker or the district health authorities or even the government hospitals to diagnose faster, better, cheaper, and earlier.

And this is really the power of what can be done through early screening. I also do want to say, because for those who are listening from research organizations, from pharmaceuticals, from manufacturing, that we are among the people doing the largest number of validations. So innovation happens from multiple quarters, but validation is what moves a pilot into a mainstream activity. And that is what is critical for our country because you’ve been hearing this over the last two days about the number of pilots happening, but we’re not finding ways to continue this. I believe the hospital of the future is interconnected in multiple ways, from the theatres to the ICUs to using drone delivery. But then as we were drawing and designing this, we actually said, no, our thinking is too small and narrow.

We need to think bigger because the world is more connected. And primary care, preventive care, out there in the market, home care, these are the important redefinition factors of the future of healthcare. And so now I talk not about hospitals of the future, but about health systems of the future. This is what we need to redefine, and we have to do this together. These health systems of the future connect public and private, connect primary care with advanced care, connect research institutions, universities, innovators, health tech startups, all together to build new solutions for the betterment of healthcare. And I believe that this is a flywheel which will drive not just positive health productivity and the economics of the healthcare environment, but this data will enhance into new algorithms.

And these algorithms can be predictive and preventive, and if you find disease earlier, you’re actually saving so many aspects. So let us remove skill gaps. Let us push through regulatory gaps. Let us bring companies, organizations, and people together to build a new healthcare world, which is predictive, preventive, personalized, participatory, and place agnostic. Let every village in any part of the world, or every city, or every apartment building, wherever you are, be able to access good clinical care. Let’s come together to build a healthier world. And definitely, let’s say that this is the time for us to… to dream of finding cures for cancer, of enabling the world to be healthier, and finding a methodology for us to say that we brought our next generation into a healthier world.

Thank you so much, and namaste. Thank you. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (13)
Factual Notes
Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“The AI platform has handled roughly 3.5 million API calls.”

The speaker’s figure matches the internal statement that the platform has about 3.5 million API calls on its AI platforms [S6].

Additional Context (medium)

“India enjoys a unique strategic advantage: a large out‑of‑pocket spending base that drives innovation while keeping prices low, a rapidly expanding cadre of doctors and nurses, and a talent pool of more than 600 000 AI engineers.”

Several sources note that India’s high out-of-pocket health spending is framed as a catalyst for innovation and that the country is positioned as a strategic AI hub with cost-competitive innovation and a large talent pool, but they do not provide the specific figure of 600 000 AI engineers or detailed data on doctor/nurse expansion [S4] and [S62] and [S63] and [S64].

Additional Context (medium)

“The AI platform is organised into five principal work‑streams – a clinical‑intelligence engine, a doctor‑workforce analytics layer, disease‑prediction and risk‑scoring models, multimodal imaging and signal‑synthesis AI, and an acute‑care augmented pathway.”

The description of five AI work-streams, including a clinical-intelligence engine, aligns with the speaker’s outline of the platform’s structure, as the internal briefing also mentions five areas and a clinical intelligence engine, though it does not detail the specific analytics or prediction layers cited in the report [S6].

External Sources (76)
S1
Keynote-Martin Schroeter — Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — Apollo’s acute care augmented pathways demonstrate life-saving potential through early sepsis detection. Currently deploye…
S5
Cracking the Code of Digital Health / DAVOS 2025 — 1. Systems Approach: Roy Jakobs emphasized the need for a systems approach in healthcare, involving technology, clinical…
S6
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-sangita-reddy-joint-managing-director-apollo-hospitals-india-ai-impact-summit — So innovation happens from multiple quarters, but validation is what moves a pilot into a mainstream activity. And that …
S7
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — First, India possesses “a huge talent pool of young, vibrant, intelligent, smart, educated people,” with one of the worl…
S8
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — But the reality of life is that there are not going to be 50 megawatt or 100 megawatt data center. Now we are talking ab…
S9
Scaling Innovation Building a Robust AI Startup Ecosystem — EZO5 Solutions was represented by co-founders Noor Fatima and Meenal Gupta, who described their Imagix AI platform for pr…
S10
Keynote-Roy Jakobs — It will be defined by the outcomes they generate. Earlier detection of disease. Fewer avoidable complications. Shorter w…
S11
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — – Gong Ke, Executive Director of the Chinese Institute for the New Generation Artificial Intelligence Development Strate…
S12
Technology in the World / Davos 2025 — Nicholas Thompson: Ruth, can I ask you a big question that’s quite relevant to this? So, to me, the most interesting,…
S13
WS #98 Towards a global, risk-adaptive AI governance framework — Paloma Villa Mateos: Paloma. Yeah, thank you. So Thomas and also Zulafa have said something which is for me really rel…
S14
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — Juan M. Lavista Ferres: Thank you, Co-Chairs, Mr. Presidents, Excellencies, ladies and gentlemen, for the opportunity …
S15
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — India’s unique position—combining technical talent, diverse datasets, a vibrant startup ecosystem, and supportive policy…
S16
Conversational AI in low income &amp; resource settings | IGF 2023 — Addressing healthcare inequity requires collaboration and the appropriate use of technology. Inequities exist not only a…
S17
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Development | Infrastructure Roy Jakobs argues that AI provides clinicians with fast and accurate data to support daily…
S18
MedTech and AI Innovations in Public Health Systems — “AI can do that prompt saying that, okay, this is the history, this is the data.”[100]. “Plus there is a evidence‑based …
S19
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S20
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S21
WS #205 Contextualising Fairness: AI Governance in Asia — 3. The potential for developing interoperable frameworks that incorporate best practices from different regions. Nidhi …
S22
Keynote-Rishad Premji — “In healthcare, it can enable earlier disease screening and strengthen rural care, especially where access is limited.”[…
S23
WS #171 Mind Your Body: Pros and Cons of IoB — IoB devices enable remote patient monitoring and early disease detection
S24
WS #53 Leveraging the Internet in Environment and Health Resilience — Call for thinking globally and integrated in policy decisions; mention of ecosystem including public safety, emergency, …
S25
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: Yes, thank you so much. My name is Alex Maltzau. And I work as a second national expert in the European AI…
S26
Multistakeholder Partnerships for Thriving AI Ecosystems — Low to moderate disagreement level with high strategic alignment. The disagreements are constructive and complementary r…
S27
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — And as you are… We are aware in the Netherlands that strong ICT ecosystems and highly innovative agricultural ecosyste…
S28
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — The discussion highlighted AI’s integration across multiple business functions and industries. Dowson Tong described how…
S29
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — – Practical, actionable recommendations based on risk assessment 5. Interactive Exercise Chris Martin: Thanks, Ahmed….
S30
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S31
Ethics and AI | Part 3 — In November 2021, UNESCO adopted theRecommendation on the Ethics of Artificial Intelligence, marking its first global st…
S32
Empowering communities through bottom-up AI: The example of ThutoHealth — In Botswana, a silent epidemic claims nearly half of all lives. Hypertension, diabetes, cancer, and other non-communicab…
S33
Keynote-Roy Jakobs — And Philips is working to make that a reality. It means more patients diagnosed earlier. Earlier detection of chronic an…
S34
Keynote-Rishad Premji — “In healthcare, it can enable earlier disease screening and strengthen rural care, especially where access is limited.”[…
S35
EU funds AI to spot disease risk early in children and teens — The European Union has launched a major research initiative called SmartCHANGE to trial AI-powered tools to predict and pr…
S36
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — The adoption of digital health technology should consider the principle of equitable access. This means ensuring that al…
S37
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — By placing the human at the center and acknowledging their existence within a larger system, health literacy can be impr…
S38
WS #49 Benefit everyone from digital tech equally &amp; inclusively — He mentions the need for investing in technological infrastructure, teacher training, and policies prioritizing equity i…
S39
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — The role of social determinants of health in influencing health outcomes was also emphasized. The panel noted that 30 to…
S40
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation,…
S41
WS #162 Overregulation: Balance Policy and Innovation in Technology — 2. Balancing Innovation and Safety 3. Context-Specific Regulation James Nathan Adjartey Amattey, from the private sect…
S42
WS #257 Data for Impact Equitable Sustainable DPI Data Governance — Andrew Vennekotter argues that government regulation should focus on risks and principles rather than mandating specific…
S43
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — “These health systems of the future connect public and private, connect primary care with advanced care, connect researc…
S44
DPI+H – health for all through digital public infrastructure — Experts advocate a shift towards integrated, future-focused strategies that champion partnerships, bolster legal and dat…
S45
Capacity Building in Digital Health — No, we have to close because we are running out of time. We have to launch also one thing. So I think answer lies in the…
S46
Agenda item 6: other matters — Capacity building is seen as a critical component that should be integrated across all aspects of the future mechanism.
S47
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — And our market has rewarded us with the volumes that we see. Over 45 million users have come into this. and now we have …
S48
Conversational AI in low income &amp; resource settings | IGF 2023 — Addressing healthcare inequity requires collaboration and the appropriate use of technology. Inequities exist not only a…
S49
Technology in the World / Davos 2025 — Ruth Porat highlights how AI is currently enhancing healthcare by enabling early disease detection and making high-quali…
S50
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-sangita-reddy-joint-managing-director-apollo-hospitals-india-ai-impact-summit — And our market has rewarded us with the volumes that we see. Over 45 million users have come into this. and now we have …
S51
AI tool improves accuracy in detecting heart disease — A team of researchers at Mount Sinai Hospital in New York has successfully calibrated an AI tool to more accurately assess…
S52
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Development | Infrastructure Roy Jakobs argues that AI provides clinicians with fast and accurate data to support daily…
S53
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — The ethics framework includes elements of transparency, accountability, and explainability
S54
WS #110 AI Innovation Responsible Development Ethical Imperatives — Dr Zhang Xiao: Thank you everyone. I’m glad to be involved in this interesting discussion and I have three points to sha…
S55
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S56
WS #123 Responsible AI in Security Governance Risks and Innovation — Both industry and humanitarian perspectives converged on integrating governance considerations throughout the entire AI …
S57
Keynote-Rishad Premji — “Community health workers carry portable x -ray devices directly to people’s homes.”[76]”To address this, our foundation…
S58
TIMELINE — Early disease detection through the analysis of medical images.
S59
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S60
Managing Diplomatic Networks and Optimizing Value — – the regional level). If not, Belgium could-as a federation-risk losing its chances to tap into opportunities for coope…
S61
Keynote-Bejul Somaia — A country where every child has access to a genuinely excellent education and every person has access to the best person…
S62
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argued that India is uniquely positioned to lead in AI deployment due to its status as the world’s strongest grow…
S63
Keynote-Olivier Blum — India is positioned as a strategic innovation hub with unique advantages including cost-competitive innovation requireme…
S64
Keynote-Vinod Khosla — This comment is insightful because it reframes India’s healthcare deficit as a potential leapfrog opportunity. Rather th…
S65
Panel Discussion: 01 — Low to moderate disagreement level. The speakers fundamentally agreed on AI’s purpose (serving people, not technology), …
S66
Revolutionising medicine with AI: From early detection to precision care — It has been more than four years since AI was first introduced into clinical trials involving humans. Even back then, it …
S67
Fixing Healthcare, Digitally — According to Christophe Weber, a prominent figure in the healthcare industry, AI and data have the potential to bring ab…
S68
Networking Session #74 Mapping and Addressing Digital Rights Capacities and Threats — Tran Thi Tuyet: Hello, everyone, and it’s nice to meet you all here. I’m Snow from the Institute for Policy Study and Me…
S69
WS #41 Big Techs and Journalism: Disputes and Regulatory Models — Iva Nenadic: Thank you. Yeah, I’ll start with the last point. I think Nihil said many super interesting and relevant t…
S70
AI creation platform Gizmo gains user traction — Gizmo, a new mobile platform for AI-generated interactive media, is introducing a TikTok-style feed built around playable …
S71
Gemini growth narrows gap in chatbot race — Google’s AI chatbot Gemini has surpassed 750 million monthly users, signalling rapid consumer adoption, according to four…
S72
Indian startup secures funding for AI-powered presentations — Bengaluru-based startup Presentations.ai has raised $3 million in a seed round led by Accel to enhance its AI-powered plat…
S73
AI in training and education: Launch of Diplo AI Campus — Diplo’s AI Campus is a training programme that focuses on preparing individuals, diplomatic services, and organisations …
S74
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — Technical working groups are being established for five key building blocks: electronic health records, supply chain, re…
S75
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — “What we are launching is the Indianized version of Blueverse, what we are calling Bharatverse and hence purpose built f…
S76
Microsoft AI trial boosts NHS productivity and frees frontline time — The NHS has completed a 30,000-staff pilot of Microsoft 365 Copilot across 90 organisations, reporting average time saving…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
17 arguments · 152 words per minute · 2092 words · 825 seconds
Argument 1
Healthcare must be independent of zip code; focus on sustainable costs, preventive care, and early detection
EXPLANATION
The speaker asserts that health care should not be determined by a person’s birthplace, emphasizing that the system must prioritize affordability, prevention, and early disease detection. This principle underpins the vision for equitable health outcomes across India.
EVIDENCE
The speaker states that health care should not be defined by the zip code in which you’re born, and that the focus should be on sustainable costs, preventive care, and early detection [1-2].
MAJOR DISCUSSION POINT
Equitable access regardless of geography
Argument 2
India’s high out‑of‑pocket spending fuels innovation, low costs, and a large AI talent pool
EXPLANATION
The speaker explains that India’s high out‑of‑pocket health expenditures drive cost‑effective innovation, while a growing medical workforce and a pool of over 600,000 AI engineers support this ecosystem. These factors together create a competitive advantage for health‑tech development.
EVIDENCE
The speaker notes that India has one of the highest out-of-pocket payments, which spurs innovation and keeps costs low, alongside expanding numbers of doctors, nurses, and a talent pool of more than 600,000 AI engineers [4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s large AI talent pool is highlighted by Jeetu Patel, noting the country’s youthful, educated workforce, supporting the claim of a strong AI talent base [S7].
MAJOR DISCUSSION POINT
Economic drivers of health‑tech innovation
Argument 3
Apollo 24‑7 serves as a digital front door for medicines, diagnostics, health records, and AI‑driven assistance
EXPLANATION
Apollo 24‑7 is presented as an integrated digital platform where users can purchase medicines, order diagnostics, store health records, and interact with an AI assistant for queries. It functions as the entry point for patients to engage with the health system online.
EVIDENCE
The speaker describes Apollo 24-7 as a digital front door that lets users buy medicines, order diagnostics, store health records, and ask questions via Apollo Assist [12].
MAJOR DISCUSSION POINT
Digital health platform for patient engagement
Argument 4
AI platform (3.5 M API calls) spans clinical intelligence, disease risk scoring, multimodal imaging, acute‑care pathways, and throughput optimization
EXPLANATION
The speaker outlines a comprehensive AI ecosystem that has processed about 3.5 million API calls and covers five functional areas: a clinical intelligence engine, a doctor‑workforce knowledge base, population disease‑risk scoring, multimodal image and signal analysis, and acute‑care early‑warning pathways. Throughput optimization is also included to improve operational efficiency.
EVIDENCE
The speaker reports roughly 3.5 million API calls on their AI platforms and details five work streams covering clinical intelligence, doctor-workforce analytics, disease prediction and risk scoring, multimodal imaging and signal synthesis, and acute-care augmented pathways, followed by throughput optimization [19-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI platform’s scale is confirmed by the speaker’s report of about 3.5 million API calls and its division into five functional areas, as detailed in the summit presentation [S4].
MAJOR DISCUSSION POINT
Broad AI ecosystem across care continuum
Argument 5
Sepsis prediction algorithm alerts 24–48 hrs before onset, offering massive life‑saving potential
EXPLANATION
An early‑warning system linked to 2,000 critical‑care beds predicts sepsis 24 to 48 hours before it manifests, and the speaker envisions scaling this to hundreds of thousands of ICU beds to dramatically reduce mortality.
EVIDENCE
The speaker explains that about 2,000 critical-care beds are connected to an early-warning symptom system that predicts sepsis 24-48 hours before it occurs, and imagines deploying the algorithm to 100,000 ICU beds to save many lives [30-33].
MAJOR DISCUSSION POINT
Predictive AI for acute care
Argument 6
Throughput optimization automates billing, eliminates waiting times, and auto‑populates records
EXPLANATION
The AI‑driven throughput optimization streamlines billing processes, ensures patients experience zero waiting time, and uses ambient data capture to automatically populate medical records, thereby enhancing efficiency and clinician focus.
EVIDENCE
The speaker asks how to be smarter about billing, ensure zero waiting time, and use ambient data capture to allow doctors to focus on patients while auto-populating records [35-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Throughput optimization using ambient data capture to automate billing and reduce wait times, saving 1-1.5 hours of physician time, is outlined in the summit discussion [S4].
MAJOR DISCUSSION POINT
Operational efficiency through AI
Argument 7
The EASE framework ensures ethical use, appropriate adoption, and explainability of AI in healthcare
EXPLANATION
The EASE framework addresses ethical considerations, suitability of algorithms for specific contexts, and the need for explainability so that health‑care workers can understand and trust AI outputs.
EVIDENCE
The speaker outlines the EASE framework’s focus on ethical considerations, adoption suitability, and explainability for health-care workers [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EASE framework for ethical AI use, adoption suitability, and explainability is presented in the keynote, and broader ethical AI considerations are discussed in an ethics session [S4][S11].
MAJOR DISCUSSION POINT
Ethical governance of AI
Argument 8
Embedded AI in ultrasound detects NAFLD early, preventing liver disease progression
EXPLANATION
By embedding AI into ultrasound machines, the system can identify non‑alcoholic fatty liver disease (NAFLD), which affects 40% of Indian adults, enabling early intervention that can avert severe liver disease and the need for transplantation.
EVIDENCE
The speaker notes collaboration with an ultrasound company to embed AI that picks up NAFLD, affecting 40% of adults, and stresses that early detection can prevent major crises and liver transplants [47-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-embedded ultrasound for detecting NAFLD, affecting 40% of adults, is reported as a practical application in the presentation [S4].
MAJOR DISCUSSION POINT
AI‑enhanced diagnostic imaging
Argument 9
AI‑based risk scoring quantifies lifestyle risk, distinguishing high‑ vs low‑risk groups
EXPLANATION
The speaker describes AI‑driven risk scoring that evaluates lifestyle factors, separates high‑risk from low‑risk populations, and cites a partnership with Solventum and 3M that provides definitive proof of its effectiveness.
EVIDENCE
The speaker discusses quantified lifestyle risk scoring, risk profiling, and mentions a study with Solventum and 3M that demonstrates the power of this approach [51-57].
MAJOR DISCUSSION POINT
Personalized risk assessment
Argument 10
Prediabetes AI algorithm, already used on 450 k people, aims to serve 85 million diabetics
EXPLANATION
An AI algorithm for prediabetes has been applied to 450,000 individuals, and the speaker expresses the ambition to extend its use to India’s 85 million diabetic population to improve disease management.
EVIDENCE
The speaker mentions an AI prediabetes algorithm used on 450,000 people and the goal of reaching 85 million diabetics in the country [58-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The prediabetes AI algorithm has been applied to 450,000 individuals with a target of reaching 85 million diabetics, as stated in the summit remarks [S4].
MAJOR DISCUSSION POINT
Scaling AI for chronic disease management
Argument 11
Partnerships (e.g., with Google) enable AI detection of tuberculosis on X‑rays and early brain‑bleed identification
EXPLANATION
Collaborations with Google and other firms allow AI to analyze chest X‑rays for tuberculosis and to detect brain bleeds early in emergency settings, accelerating diagnosis and treatment.
EVIDENCE
The speaker cites work with Google on AI-based tuberculosis prediction from X-rays and other collaborations for early brain-bleed detection [63-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Partnerships with Google enable AI-based TB detection from chest X-rays, and similar TB AI efforts are highlighted in other AI startup showcases [S4][S9].
MAJOR DISCUSSION POINT
Collaborative AI for disease detection
Argument 12
Clinician co‑pilot synthesizes records, saving 1–1.5 hours of doctor time per day
EXPLANATION
The clinician co‑pilot tool aggregates patient information into concise summaries, freeing clinicians from extensive documentation and saving roughly one to one and a half hours each day.
EVIDENCE
The speaker explains that the clinician co-pilot synthesizes records, resulting in a saving of one to one and a half hours of doctor time per day [72-74].
MAJOR DISCUSSION POINT
AI‑assisted clinical documentation
Argument 13
Nurse pilot and Care Console integrate ICU, home, and ward monitoring, reducing burnout and enhancing decision‑making
EXPLANATION
The nurse pilot and Care Console connect ICU, home, and ward environments, providing continuous monitoring that lessens staff burnout and improves clinical decision‑making through integrated data.
EVIDENCE
The speaker describes the Care Console linking command stations with ICUs, homes, and wards, noting saved lives, reduced clinician time, and decreased burnout while enabling richer decision-making [75-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Care Console links ICU, home, and ward monitoring, reducing clinician burnout and enhancing decision-making, per the keynote description [S4].
MAJOR DISCUSSION POINT
Integrated AI monitoring across care settings
Argument 14
Mobile vans provide NCD and cancer screening, tele‑ophthalmology; data shared with ASHA workers and district health authorities for faster, cheaper diagnosis
EXPLANATION
Mobile units deliver non‑communicable disease and cancer screening, as well as tele‑ophthalmology services, with data transmitted to community health workers (ASHA) and district authorities to enable rapid, low‑cost diagnosis in rural areas.
EVIDENCE
The speaker mentions running mobile vans for NCD and cancer screening, tele-ophthalmology, and sharing data with ASHA workers and district health authorities for faster, cheaper diagnosis [80-82].
MAJOR DISCUSSION POINT
Rural outreach with AI‑enabled screening
Argument 15
19 AI tools have MDSAP approval, 9 have FDA clearance; partnerships are essential to move pilots to mainstream adoption
EXPLANATION
The organization has secured MDSAP approval for about 19 AI solutions and FDA clearance for nine, emphasizing that partnerships are crucial for scaling these tools beyond pilot phases.
EVIDENCE
The speaker states that they have MDSAP approval on almost 19 AI tools, FDA approval for nine, and are seeking partnerships to build further [38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for validation to move pilots to mainstream and the importance of partnerships are emphasized as critical for scaling AI tools [S6].
MAJOR DISCUSSION POINT
Regulatory validation and partnership for scaling
Argument 16
The future health system must interlink public‑private sectors, primary‑advanced care, research, universities, and startups to create predictive, preventive, personalized, participatory, place‑agnostic care
EXPLANATION
The speaker envisions a health ecosystem where public and private entities, primary and advanced care, research institutions, universities, and startups collaborate to deliver data‑driven, holistic health services that are predictive, preventive, personalized, participatory, and location‑agnostic.
EVIDENCE
The speaker describes connecting public and private sectors, primary and advanced care, research institutions, universities, innovators, and health-tech startups to build new solutions and a predictive, preventive, personalized, participatory, place-agnostic health system [91-94].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A systems approach that integrates public-private sectors, research, universities, and startups is advocated as essential for future health ecosystems [S5][S14].
MAJOR DISCUSSION POINT
Holistic, collaborative health‑system architecture
Argument 17
Urgent need to close skill and regulatory gaps and unite stakeholders to build a healthier world
EXPLANATION
The speaker calls for removing skill gaps, overcoming regulatory barriers, and bringing together companies, organizations, and individuals to create a health system that is predictive, preventive, personalized, participatory, and place‑agnostic.
EVIDENCE
The speaker urges removal of skill gaps, pushing through regulatory gaps, and uniting companies, organizations, and people to build a new health world that is predictive, preventive, personalized, participatory, and place-agnostic [95-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls to close skill and regulatory gaps and to unite stakeholders echo broader calls for capacity building and AI governance frameworks [S5][S6][S13][S14].
MAJOR DISCUSSION POINT
Call for capacity building and regulatory reform
Agreements
Agreement Points
Equitable access to health care through digital platforms and outreach programs
Speakers: Speaker 1
Healthcare must be independent of zip code; focus on sustainable costs, preventive care, and early detection
Apollo 24‑7 serves as a digital front door for medicines, diagnostics, health records, and AI‑driven assistance
Mobile vans provide NCD and cancer screening, tele‑ophthalmology; data shared with ASHA workers and district health authorities for faster, cheaper diagnosis
The speaker stresses that health care should not be defined by zip code and highlights the Apollo 24-7 digital front door and mobile-van outreach as tools to reach millions across urban and rural India, thereby promoting equitable, affordable, and preventive care [1-2][12][80-82].
POLICY CONTEXT (KNOWLEDGE BASE)
The principle of equitable digital health access is highlighted in EU policy discussions on human rights and e-trade, emphasizing equal opportunities regardless of geography or socioeconomic status [S36], and reinforced by calls for inclusive digital health literacy and infrastructure investment [S37][S38].
Comprehensive AI ecosystem that enhances clinical intelligence, early warning, operational efficiency and documentation
Speakers: Speaker 1
AI platform (3.5 M API calls) spans clinical intelligence, disease risk scoring, multimodal imaging, acute‑care pathways, and throughput optimization. Sepsis prediction algorithm alerts 24‑48 hrs before onset, offering massive life‑saving potential. Throughput optimization automates billing, eliminates waiting times, and auto‑populates records. Clinician co‑pilot synthesizes records, saving 1‑1.5 hrs of doctor time per day. Nurse pilot and Care Console integrate ICU, home, and ward monitoring, reducing burnout and enhancing decision‑making. 19 AI tools have MDSAP approval, 9 have FDA clearance; partnerships are essential to move pilots to mainstream adoption.
The speaker describes a large-scale AI platform (≈3.5 M API calls) covering decision support, disease risk, imaging, acute-care early warning (e.g., sepsis prediction), and throughput optimisation, complemented by clinician co-pilot and nurse-pilot tools that save clinician time, and notes regulatory approvals for many of these solutions, illustrating a holistic AI-driven health-care model [19-30][35-37][72-78][38].
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder partnership models for thriving AI ecosystems are advocated as best practice, supporting integrated AI functions across health care and other sectors [S26], and are reflected in national AI-plus economy strategies that embed AI throughout operational processes [S28].
Ethical governance of AI through the EASE framework
Speakers: Speaker 1
The EASE framework ensures ethical use, appropriate adoption, and explainability of AI in healthcare
The speaker introduces the EASE framework, which addresses ethical considerations, suitability of algorithms, and explainability so that health-care workers can trust AI outputs [41-44].
POLICY CONTEXT (KNOWLEDGE BASE)
The EASE framework aligns with the EU Ethics Guidelines for Trustworthy AI and UNESCO’s global AI ethics recommendation, both of which call for human-centric, lawful, and responsible AI governance [S30][S31][S29].
AI‑enabled early detection and preventive interventions for chronic diseases
Speakers: Speaker 1
Embedded AI in ultrasound detects NAFLD early, preventing liver disease progression. AI‑based risk scoring quantifies lifestyle risk, distinguishing high‑ vs low‑risk groups. Prediabetes AI algorithm, already used on 450 k people, aims to serve 85 million diabetics. Partnerships (e.g., with Google) enable AI detection of tuberculosis on X‑rays and early brain‑bleed identification.
The speaker highlights several AI applications that enable early detection (ultrasound-based NAFLD screening, lifestyle risk scoring, a pre-diabetes algorithm, and collaborations for TB and brain-bleed detection), demonstrating AI’s role in preventive health care [47-49][51-57][58-62][63-66].
POLICY CONTEXT (KNOWLEDGE BASE)
EU’s SmartCHANGE programme under Horizon Europe funds AI tools for early detection of non-communicable diseases in youth, exemplifying policy support for preventive AI health solutions [S35]; similar commercial initiatives demonstrate early diagnosis and cost reduction benefits [S33].
Vision of an integrated, future health system and call for capacity‑building and regulatory reforms
Speakers: Speaker 1
The future health system must interlink public‑private sectors, primary‑advanced care, research, universities, and startups to create predictive, preventive, personalized, participatory, place‑agnostic care. There is an urgent need to close skill and regulatory gaps and unite stakeholders to build a healthier world.
The speaker envisions a health ecosystem that connects public and private actors, primary and advanced care, research institutions and startups, and urges removal of skill and regulatory gaps to realise this vision [90-94][95-98].
POLICY CONTEXT (KNOWLEDGE BASE)
Integrated digital public health infrastructure and capacity-building are emphasized in recent IGF and DPI+H discussions, calling for partnership-driven reforms and legal safeguards to enable future health systems [S44][S45][S46].
Similar Viewpoints
AI is presented as a multi‑layered engine that improves clinical decision‑support, early warning, operational efficiency and documentation while also achieving regulatory validation, underscoring AI’s central role in transforming health‑care delivery [19-30][35-37][72-78][38].
Speakers: Speaker 1
AI platform (3.5 M API calls) spans clinical intelligence, disease risk scoring, multimodal imaging, acute‑care pathways, and throughput optimization. Sepsis prediction algorithm alerts 24‑48 hrs before onset, offering massive life‑saving potential. Throughput optimization automates billing, eliminates waiting times, and auto‑populates records. Clinician co‑pilot synthesizes records, saving 1‑1.5 hrs of doctor time per day. Nurse pilot and Care Console integrate ICU, home, and ward monitoring, reducing burnout and enhancing decision‑making. 19 AI tools have MDSAP approval, 9 have FDA clearance; partnerships are essential to move pilots to mainstream adoption.
Multiple AI applications are leveraged for early detection and prevention of chronic and infectious diseases, illustrating a preventive‑care focus across diverse health conditions [47-49][51-57][58-62][63-66].
Speakers: Speaker 1
Embedded AI in ultrasound detects NAFLD early, preventing liver disease progression. AI‑based risk scoring quantifies lifestyle risk, distinguishing high‑ vs low‑risk groups. Prediabetes AI algorithm, already used on 450 k people, aims to serve 85 million diabetics. Partnerships (e.g., with Google) enable AI detection of tuberculosis on X‑rays and early brain‑bleed identification.
Unexpected Consensus
Cost‑driven innovation aligns with high regulatory standards
Speakers: Speaker 1
India’s high out‑of‑pocket spending fuels innovation, low costs, and a large AI talent pool. 19 AI tools have MDSAP approval, 9 have FDA clearance; partnerships are essential to move pilots to mainstream adoption.
While the speaker attributes rapid AI innovation to India’s high out-of-pocket health spending, he simultaneously reports that many of these AI tools have obtained rigorous international regulatory approvals (MDSAP and FDA), revealing an unexpected alignment between cost-driven innovation and compliance with global standards [4][38].
POLICY CONTEXT (KNOWLEDGE BASE)
Balancing cost-effective innovation with stringent regulatory standards is a recurring theme in policy debates on over-regulation and risk-based governance, highlighting the need to manage compliance costs while fostering innovation [S41][S42].
Overall Assessment

Speaker 1 consistently emphasizes an AI‑centric, equitable, and preventive health‑care model that combines digital platforms, large‑scale AI services, ethical governance, and integrated system design, while calling for capacity‑building and regulatory reforms.

High internal consensus – the speaker’s multiple arguments reinforce a unified vision of AI‑enabled, inclusive health care, suggesting strong alignment among the presented points and indicating that future policy and investment discussions can build on this cohesive narrative.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only statements from Speaker 1, and the supplied list of arguments all originates from the same speaker. Consequently, there are no opposing viewpoints, no partial agreements, and no unexpected areas of disagreement identified in the material provided.

None – the discussion reflects a single perspective, indicating full consensus (or lack of debate) on the topics addressed.

Takeaways
Key takeaways
Healthcare must be equitable, independent of zip code, and shift toward sustainable costs, preventive care, and early detection. India’s high out‑of‑pocket spending drives innovation, low costs, and provides a large pool of AI talent. Apollo’s digital platform (Apollo 24‑7) serves as a front‑door for medicines, diagnostics, health records, and AI assistance, reaching over 45 million users. Apollo’s AI platform (3.5 M API calls) includes clinical intelligence, disease risk scoring, multimodal imaging analysis, acute‑care pathways (e.g., sepsis prediction), and throughput optimization. The EASE framework was introduced to ensure ethical AI adoption, suitability, and explainability in healthcare. Preventive‑care AI initiatives include ultrasound‑embedded NAFLD detection, lifestyle risk scoring, a pre‑diabetes algorithm (used on 450 k people), and AI‑driven TB and brain‑bleed detection in partnership with Google. Operational tools such as the Clinician Co‑Pilot and Nurse Pilot reduce clinician workload and burnout while improving decision‑making. Rural outreach via mobile vans, tele‑ophthalmology, and data sharing with ASHA workers extends AI‑enabled screening to remote populations. Significant regulatory progress: 19 AI tools with MDSAP approval, 9 with FDA clearance; emphasis on partnerships to scale pilots to mainstream use. A vision for a future health system that interconnects public‑private sectors, primary and advanced care, research institutions, universities, and startups to deliver predictive, preventive, personalized, participatory, place‑agnostic care.
Resolutions and action items
Seek and formalize partnerships with technology firms, research organizations, and pharmaceutical companies to co‑develop and scale AI solutions. Accelerate validation and regulatory approval processes for AI tools to move pilots into mainstream deployment. Expand the Apollo 24‑7 ecosystem and AI platforms to cover additional PIN codes, towns, and rural areas. Implement skill‑development programs to close AI and digital‑health talent gaps among clinicians and staff. Address regulatory gaps by collaborating with authorities to create supportive frameworks for AI adoption. Scale the pre‑diabetes AI algorithm from 450 k users to the broader diabetic population (~85 million). Integrate AI‑driven early‑warning systems (e.g., sepsis prediction) into a larger network of ICU beds. Continue development of the EASE ethical framework and embed it across all AI deployments.
Unresolved issues
How to efficiently and uniformly scale AI validation and regulatory approval across the diverse Indian healthcare landscape. Specific mechanisms for data sharing and interoperability between private platforms (Apollo) and public health systems. Sustainable financing models to support widespread deployment of AI tools in low‑resource and rural settings. Details on how to measure and monitor the impact of AI‑driven preventive programs on long‑term health outcomes. Strategies for ensuring patient privacy and data security while expanding the digital health ecosystem.
Suggested compromises
Balancing investment in high‑end curative technologies (e.g., surgical robots, proton therapy) with a strong focus on preventive, low‑cost AI solutions for broader population health. Combining centralized AI development with decentralized delivery (mobile vans, tele‑health) to reach both urban and rural populations. Integrating AI automation (billing, record auto‑population) while preserving clinician‑patient interaction to maintain care quality.
Thought Provoking Comments
Health care should not be defined by the zip code in which you’re born.
Frames health equity as a foundational principle, shifting the conversation from technology to social impact.
Sets the ethical tone for the talk, prompting later references to preventive care, rural outreach, and the EASE framework; it reframes subsequent technical details as tools for achieving equity.
Speaker: Speaker 1 (Dr. Pratap Siredi)
We have the largest talent pool of over 600,000 AI engineers, and we are growing more doctors and nurses – this creates a unique advantage for India to innovate while keeping costs low.
Highlights a strategic national asset—human capital—that underpins the scalability of AI in health care.
Leads into the discussion of large‑scale AI platforms and justifies the ambition to deploy AI solutions across a billion‑plus population.
Speaker: Speaker 1
Apollo 24‑7, our digital front door, now has over 45 million users and close to a million daily interactions, allowing people to buy medicines, order diagnostics, store records, and ask health queries.
Demonstrates a concrete, high‑impact digital health ecosystem that bridges the gap between technology and patient access.
Provides a real‑world example that validates the earlier claim about equity; it transitions the talk from abstract AI potential to an operational platform with measurable reach.
Speaker: Speaker 1
Our AI platforms are organized into five work streams: clinical intelligence engine, disease‑prediction risk scores, multimodal imaging AI, acute‑care augmented pathways (e.g., sepsis prediction 24‑48 hrs early), and throughput optimisation.
Offers a clear, structured roadmap of how AI is being applied across the health‑care continuum, moving the conversation from vision to implementation.
Creates a pivot point where the audience can grasp the breadth of AI use‑cases, leading to deeper questions about each stream (e.g., sepsis prediction, imaging).
Speaker: Speaker 1
Predicting the onset of sepsis 24‑48 hours before it happens; imagine scaling that algorithm to 100,000 ICU beds – the lives saved would be massive.
Quantifies AI’s potential life‑saving impact, turning a technical capability into a compelling public‑health narrative.
Elicits a shift from discussion of technology to its humanitarian consequences, reinforcing the earlier equity theme and inspiring enthusiasm for large‑scale deployment.
Speaker: Speaker 1
We have published the EASE framework – Ethical, Adoption, Suitability, Explainability – to ensure every AI tool is transparent and appropriate for its clinical environment.
Introduces a systematic ethical guardrail, addressing common concerns about AI bias and opacity.
Temporarily redirects the conversation toward governance, prompting listeners to consider not just what AI can do, but how it should be responsibly integrated.
Speaker: Speaker 1
For every 1,000 people screened, we avert a major crisis in 11 of them – preventive care delivers far more value than curative interventions.
Re‑frames the value proposition of health‑care from treatment to prevention, supporting the earlier equity argument.
Leads to a deeper dive into specific preventive AI tools (e.g., NAFLD detection, pre‑diabetes scoring) and justifies investment in population‑level screening.
Speaker: Speaker 1
Embedded AI in ultrasound machines can detect non‑alcoholic fatty liver disease, which affects 40 % of Indian adults, enabling early intervention before transplant‑level disease.
Provides a tangible, high‑impact use‑case that links AI, imaging, and a prevalent chronic condition.
Illustrates how AI can be woven into existing hardware, prompting the audience to envision similar integrations for other diseases.
Speaker: Speaker 1
Our clinician co‑pilot saves one to one‑and‑a‑half hours of doctor time per day by auto‑summarising records, and we are now extending similar pilots to nurses.
Shows measurable efficiency gains, addressing clinician burnout—a major barrier to AI adoption.
Shifts the narrative toward workforce sustainability, reinforcing the earlier point about throughput optimisation and encouraging stakeholder buy‑in.
Speaker: Speaker 1
The hospital of the future must become a health‑system of the future – interconnected public and private sectors, primary care, research institutes, startups – creating a flywheel where data fuels new predictive, preventive, personalized, participatory, place‑agnostic care.
Broadens the scope from isolated hospitals to an ecosystem, encapsulating all prior themes into a strategic vision.
Serves as the concluding turning point, unifying earlier technical, ethical, and equity discussions into a call for collaborative action across the entire health‑care landscape.
Speaker: Speaker 1
Overall Assessment

The discussion was driven by a single, highly articulate speaker whose comments repeatedly reframed the conversation—from a focus on cutting‑edge AI technologies to the larger goals of equity, prevention, ethical governance, and systemic integration. Each pivotal remark introduced a new dimension (e.g., digital front‑door adoption, structured AI work streams, sepsis early‑warning, the EASE ethical framework, preventive screening, workflow efficiency, and ecosystem‑level vision) that redirected audience attention, deepened analysis, and built momentum toward a holistic vision of a future health system. Collectively, these thought‑provoking statements shaped the dialogue into a coherent narrative that linked technical possibility with societal need, ultimately urging collaborative, cross‑sector effort to realize a predictive, preventive, and inclusive health‑care future.

Follow-up Questions
How can partnerships be formed to build and scale the AI platforms and algorithms mentioned?
Collaboration with external partners is needed to expand AI capabilities, accelerate development, and ensure broader implementation across the health system.
Speaker: Speaker 1
What research is required to integrate the blood bank and biobank with genetic testing for disease prediction and biomarker discovery?
Linking genetic data with biobanking could enhance predictive models and enable earlier, more precise interventions, but it demands extensive validation and ethical considerations.
Speaker: Speaker 1
How can the pre‑diabetes AI algorithm be scaled to reach the estimated 85 million diabetics in India?
Scaling the algorithm would significantly improve diabetes management nationwide, yet it raises questions about infrastructure, user adoption, and outcome measurement.
Speaker: Speaker 1
What additional AI algorithms (potentially a hundred) can be added to the ICU early‑warning system to further improve patient safety and reduce burnout?
Expanding the suite of predictive models could detect more complications early, but each new algorithm requires rigorous testing, integration, and clinical validation.
Speaker: Speaker 1
What standardized validation frameworks are needed to move pilots into mainstream clinical practice?
Ensuring that pilot projects are scientifically validated is crucial for regulatory approval, clinician trust, and large‑scale deployment.
Speaker: Speaker 1
How can regulatory gaps be addressed to accelerate the adoption of AI‑driven healthcare solutions?
Regulatory barriers can delay implementation; identifying pathways for faster yet safe approvals is essential for timely impact.
Speaker: Speaker 1
What strategies are required to connect public and private sectors, primary care, advanced care, research institutions, and startups into an integrated health‑system ecosystem?
A unified ecosystem would enable data sharing, coordinated care, and innovation, but it demands governance models, interoperability standards, and stakeholder alignment.
Speaker: Speaker 1
How can AI be embedded into ultrasound machines to reliably detect non‑alcoholic fatty liver disease (NAFLD) at scale?
Early NAFLD detection could prevent liver failure and transplants; research is needed to develop, test, and certify such embedded AI tools.
Speaker: Speaker 1
What risk‑scoring models can quantify lifestyle‑related risk factors for non‑communicable diseases and guide personalized interventions?
Transforming generic health advice into actionable, risk‑based recommendations could improve prevention outcomes, requiring robust data and validation.
Speaker: Speaker 1
How can AI‑based radiology tools (e.g., TB detection, brain‑bleed identification) be further refined and deployed in emergency settings?
Rapid, accurate imaging analysis can save lives; continued research is needed to improve accuracy, integrate with workflows, and assess impact on clinical decisions.
Speaker: Speaker 1
What innovations are needed to achieve throughput optimization such as zero waiting times, automated billing, and ambient data capture?
Optimizing operational efficiency can reduce costs and improve patient experience, but requires advanced AI, IoT, and process redesign research.
Speaker: Speaker 1
How can the EASE framework for ethical AI be operationalized across diverse healthcare settings in India?
Ensuring ethical adoption, suitability, and explainability of AI is vital for trust and compliance; research is needed to translate the framework into practice.
Speaker: Speaker 1
What models and logistics are needed to extend AI‑enabled screening (e.g., mobile vans, tele‑ophthalmology) to rural and underserved communities?
Bringing advanced diagnostics to remote areas can reduce health disparities, but requires studies on feasibility, cost‑effectiveness, and community engagement.
Speaker: Speaker 1
How can the clinician and nurse co‑pilot tools be further developed to maximize time savings and reduce documentation burden?
These tools have shown potential to save up to 1.5 hours per day; further research can enhance usability, integration, and impact on clinician burnout.
Speaker: Speaker 1
What are the design and implementation requirements for an integrated Care Console that connects ICUs, home care, and remote wards?
A unified monitoring platform could improve continuity of care and early detection of deterioration, necessitating technical, clinical, and workflow research.
Speaker: Speaker 1
How can drone delivery be safely and efficiently incorporated into hospital logistics for medication, blood products, or equipment?
Drone logistics could enhance supply chain resilience, especially in hard‑to‑reach areas, but requires regulatory, safety, and operational studies.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion Data Sovereignty India AI Impact Summit

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel discussed what digital sovereignty means for AI, focusing on who sets the rules and where data and compute are located [5-7]. Sunil argued that sovereignty does not require total isolation; instead it involves deciding which parts of the technology stack must be controlled locally and which can be collaborative [21-23]. He emphasized that core compute infrastructure and model training should reside within national borders to serve language-specific needs, noting that a domestically built model can satisfy about 95 % of India’s use cases without needing frontier-scale systems [25-28][34-38][39-42].


Sunil illustrated this with the migration of India’s Bhashini language platform from a hyperscale cloud to Yotta’s local data centre, and the open-sourcing of a critical NVIDIA component to keep it under Indian control [136-142][148-154]. Nasubo highlighted that many African countries lack compute capacity (≈1 % of global) but possess rich data and local use cases, so they pursue partnerships while also developing offline-capable solutions for regions with limited connectivity [57-63][166-174][176-179]. Seema stressed that sovereignty is about strategic control rather than full ownership, calling for visibility into supply-chain structures, guardrails to protect digital assets, and trust-engineered designs that are transparent and auditable [77-84][88-98][100-107][109-118].


The moderator summed up that sovereignty requires a balance between building domestic AI compute and engaging trusted global partners, noting that no nation can do everything alone [48-53][130-134]. All speakers agreed that government must provide clear policies and sovereign guardrails while fostering public-private collaboration to ensure security, continuous verification, and industry innovation [184-191][198-205]. They also concurred that the ultimate goal of sovereign AI is to address real-world problems for the most underserved citizens, such as language diversity in India or specific health data for African women [31][60][220-223].


Seema added that treating digital infrastructure as a national asset and providing long-term policy stability encourages private investment in large-scale AI factories [185-196][200-204]. Overall, the panel presented a roadmap where sovereignty is achieved by controlling critical layers, establishing trust frameworks, and aligning market, government, and society to deliver AI solutions that are both secure and locally relevant, ensuring AI benefits the “last person in the line” [215-218][221-223].


Keypoints


Major discussion points


Sovereignty is fundamentally about who sets the rules and controls the digital infrastructure, not about total isolation.


The moderator frames the core question as “who gets to make the rules?” and notes that sovereignty is usually discussed in terms of data location [5-7]. Sunil stresses that sovereignty means “we do not allow a single country or a single company to define our digital destiny” and that compute must be “within your country, … where your data is getting processed” [25-27][34-36].


Local (sovereign) compute and storage are essential, but they can incorporate foreign technologies within a ring-fenced environment to avoid lock-in.


Sunil argues that “compute infrastructure … has to be within your country” while also acknowledging the need for collaboration [25-27][34-42]. He illustrates this with the Yotta example: the AI platform was moved from a hyperscale cloud to a locally-controlled data centre, using NVIDIA, Microsoft and Amazon tech inside a “ring-fenced” wall where no third-party can log in [135-154].


Designing AI for local contexts and building trust are critical; this includes using native language data, respecting supply-chain transparency, and establishing clear governance.


Nasubo highlights that Africa’s advantage is “data and use cases” and stresses designing for lived realities such as breast-cancer diagnostics for African women [57-64]. Seema adds that sovereignty requires “visibility into ownership structures” and “policies, guardrails … to ensure … not compromised externally,” emphasizing transparency, traceability, and engineered trust [78-86][104-110].


Strategic partnerships, not dependence, are the pragmatic path forward.


Seema notes the rise of “air-gapped, ring-fence environments” and the need for “global partnerships” while keeping strategic ownership [91-98][118-120]. Sunil’s story about open-sourcing a critical NVIDIA component shows how a partnership can be reshaped into sovereign control [139-154].


For low-resource regions, offline capability and community-driven innovation are necessary to achieve sovereign AI.


Nasubo points out that only ~50 % of Africa has reliable connectivity and that “offline access” is a key design goal, with local innovators using compute provided by Kala [166-173][176-179]. The moderator links this to India’s Aadhaar offline verification as a precedent for the Global South [180-182].


Overall purpose / goal of the discussion


The panel was convened to move the concept of “data sovereignty” from a slogan to concrete practice in the AI era. Participants shared how governments, industry, and civil society can jointly define rule-making, secure local compute and data, design culturally relevant AI, and establish trusted partnerships so that every country, especially those with limited resources, can reap AI benefits without surrendering strategic control.


Overall tone and its evolution


The conversation began with a definitional, analytical tone, focusing on what sovereignty means and why it matters. It then shifted to a pragmatic, solution-oriented tone, with detailed examples of infrastructure, open-source workarounds, and governance frameworks. By the end, the tone became optimistic and inclusive, emphasizing collaboration, trust, and the moral imperative to serve “the last person in the line.” Throughout, the speakers remained collaborative and constructive, moving from abstract framing to actionable, real-world strategies.


Speakers

Speaker 1 – Moderator/host (appears to be the event moderator) [S11]


Sunil – Panelist (no specific role or title mentioned) [S2]


Nasubo – Panelist (no specific role or title mentioned) [S1]


Seema – Chief Executive Officer, L&T Vioma [S8][S9]


Speaker 5 – Panelist (no specific role or title mentioned) [S5]


Additional speakers:


(none identified beyond the listed speakers)


Full session reportComprehensive analysis and detailed insights

1. Opening & framing (Speaker 1) – The moderator opened the session by defining digital sovereignty as “who gets to make the rules” and noting that the debate usually centres on the location of data and compute [5-12]. He then asked Sunil whether a “sovereign yet connected” model was realistic [8-12].


2. Sunil’s view – Sunil rejected the idea that sovereignty means total isolation, explaining that it is often confused with a “do-everything-ourselves” mindset but in practice requires recognising inter-dependencies across the global technology stack [13-27]. He argued that a country must retain control over the strategically essential parts of the stack-compute infrastructure where data is processed, stored and models are trained-while collaborating on the rest [25-27][34-42]. Using India’s linguistic diversity, he highlighted the need for native-language voice AI that can handle regional slang and deliver real-time responses, a capability achievable with domestically hosted models of 20-100 billion parameters rather than frontier trillion-parameter systems [28-42].


To illustrate operationalisation, Sunil described migrating the national AI language platform Bhashini from a hyperscale public cloud to Yotta’s locally-controlled data centre [135-142]. After the migration the team built roughly 30-40 different technology components and deployed them on virtual machines inside the data centre [143-150]. The only remaining foreign dependency was the NVIDIA NVCF library, which was open-sourced and brought in-house, eliminating the external reliance [151-154]. He concluded that the best foreign technologies (NVIDIA, Microsoft, Amazon) can be used provided they run inside a sovereign, access-controlled (ring-fenced) compute stack [144-154].


3. Design layer (Speaker 1 → Nasubo) – The moderator asked Nasubo to discuss design for local realities. Nasubo noted that Africa possesses only about 1 % of global compute capacity [57-58] but has rich data and concrete use-cases. He cited the development of breast-cancer diagnostic models that reflect the specific tissue composition of African women [58-64]. Because roughly half of Africa’s population lacks reliable internet, he argued that offline-capable AI is essential; Kala is therefore building compute resources that can operate without constant connectivity and offering them to innovators at the AI Village [165-174][170-179]. He stressed that sovereignty must be pursued through partnerships that provide compute while allowing African stakeholders to define the rules, rather than accepting externally dictated solutions [166-176].


4. Critical systems (Speaker 1 → Seema) – The moderator shifted to critical systems and invited Seema. Seema asserted that sovereignty is about strategic control and visibility rather than outright ownership of every component [77-85]. She called for clear policies and guard-rails that treat digital infrastructure as a national asset, ensuring transparent supply-chain structures that cannot be compromised by external geopolitical leverage [88-98][121-128]. She introduced an air-gapped, ring-fenced sovereign-infrastructure model within commercial infrastructure [100-107]. Trust, she argued, must be engineered and continuously verified; designs should be transparent, traceable and auditable, and partnerships with global vendors must be built on mutually established trust rather than dependence [109-118][122-124]. Seema also highlighted the need for a public-private partnership model in which the government provides stable, long-term policy guard-rails while industry focuses on innovation, scale and time-to-value [184-196][200-204], and she stressed continuous verification rather than point-in-time checks [198-205].


5. Moderator synthesis – The moderator summarised that sovereign compute is both desirable and feasible [48-49] and that sovereignty also involves who decides how systems are designed [52-53]. He reiterated the consensus that sovereignty requires a balance between domestic control and trusted global collaboration [130-134] and that co-accountability among market (bazaar), government and civil society (samaj) is essential for implementing sovereign AI [214-218]. Continuous security verification was identified as a key governance practice [198-205].


6. Consolidated trust-partnership insight – Across the panel, speakers agreed that foreign hardware and software can be leveraged safely when deployed inside sovereign, ring-fenced environments [144-154][100-107][109-118], providing a pragmatic path forward while avoiding lock-in.


7. Actionable takeaways


Anchor sovereignty in rule-making authority and control of critical compute and data [25-27][48-49].


Leverage foreign technologies inside sovereign, ring-fenced environments [144-154].


Design AI around indigenous data, multilingual needs and offline capability where connectivity is sparse [28-31][165-174].


Engineer trust through transparent, auditable supply chains and continuous verification [100-107][109-118][198-205].


Establish stable government guard-rails and public-private partnerships for long-term AI infrastructure investment [184-196][200-204].


Treat digital infrastructure as a national asset to ensure consistent security, oversight and accountability [201-208].


8. Moral framing – The moderator quoted Gandhi’s call to consider “the last person in the line” and stressed that AI, even in a technocratic age, must serve the most underserved citizens [221-223].


9. Session close (Speaker 5) – Speaker 5 thanked the participants and asked the audience to wait for the next session [220-222].


Session transcript
Complete transcript of the session
Speaker 1

been used almost as much as AI in this session, the last three days, it’s been sovereignty. So I think it’s good that we get 24 minutes and 47 seconds to discuss what sovereignty is about. So I’ll jump straight in. We’ve got a great panel. And I think the key question of sovereignty is a question of who gets to make the rules. And the way in which sovereignty has been discussed is in terms of where data is stored. So we have a variety of viewpoints here, and I look to get some opening remarks from each of you. So Sunil, I’ll start with you. You’re running some very large and very impressive data centers in India. One term that we’ve often heard is sovereign yet connected.

So we want to be sovereign but connected. Is that realistic?

Sunil

No, as you said, there are different ideas, different theories, different narratives going on in sovereign. Everybody has their own take on sovereignty. And so many times, sovereignty is also confused with we will do everything ourselves. We’ll start looking inwards, we’ll isolate ourselves from the rest of the world and everything is done by us also. I think let’s understand any and every technology stack, AI is now the latest one, you will always have interconnectedness, interdependencies across the world. Somebody will be good at making chips, somebody will be making raw material for the chips like gases and maybe rare earths, somebody will be making models, somebody will have great data sets, somebody will be very good in making applications, agentic AI.

You will have, and of course capital flows, somebody will have lots of capital and somebody will be waiting for that capital and somebody will have talent. We all know where India is good at and where any other country is good at. So, sovereignty for sure does not mean we become isolated and just try to do everything ourselves. It is a matter of what is the thing we need to control and what is the thing where we need to collaborate. For sure, it definitely means that as a country, we do not allow a single country or a single company to define our digital destiny for future. Answering your second question, there are certain things which are fundamental.

Compute infrastructure, I strongly believe, has to be within your country, has to be within your control. That is where your data is getting processed, that is where data is getting stored, that is where your models are being made. Your needs as a country, forget control, your needs as a country are unique. You want to create a voice-based AI because the majority of the population may not be comfortable speaking in English or writing in English, but they’ll be very, very comfortable talking in their own native language. We all are very comfortable talking in native language. We have a mix of Hindi, English, Malayalam, Kannada, whatever native languages, and we mix up with English. So if we are able to talk to a device in my own native language with my own slang, and the device does all the processing and gives me my answer in real time in my own language, my slang, that is where the real benefit, that is where population-scale benefits come in.

Maybe the model builders of any other country may have a different viewpoint of how they want to adopt AI at a global level. So frontier models are good for those use cases, but for India use cases, possibly I need the focus to go on my use cases which can benefit masses at a larger scale. So both from the control point of view, that nobody else should tomorrow just switch off my access to digital infrastructure, and also from the point of view that my priorities for my citizens can be different, I would rather like to have sovereign compute, right? And some of the models which are taken care of, let’s say, as the Minister I think said in the last three, four days, in Devas also, that 95% of the use cases which India requires can possibly be handled.

So I think that’s the goal: by having models which are 20 billion to, let’s say, 100 billion parameters. You don’t need to necessarily go for frontier models, trillions of parameters. So we build our compute here. We store our data here. We allow controlled data flow outside. We build the models which are satisfying 95% of my need. That is what I need to do. But what we can do, and I give you our own example as Yotta. While on one side I’m building…

Speaker 1

So Sunil, we’ll just come back to that. We’ll just get everybody else in and then we’ll speak about your examples. I’m just mindful of time. So I think the takeaway is that as far as the infrastructure layer is concerned, as in sovereignty in compute is not only desirable but perhaps possible. And as far as control is concerned, and we should try to have control, but I’ll take that to you, Nasubo. Let’s look at the design layer. I mean, Sunil gave what the infrastructure is about. Sovereignty is also about who makes the rules in terms of how things are designed. And what Sunil said, we work for a very large country like India where there are lots of buildings.

There are lots of builders. But how does it translate to the rest of the world? Maybe some experiences from Kala as well.

Nasubo

Excellent, thank you very much for this. When you look at, or when you think about, let’s say, Africa, sometimes we are disadvantaged in that we don’t have compute. When you look at the computing capacity, it’s like at 1%. So already we are at a disadvantage before we even leap forward and get ahead. But the one thing we have is we have data, we have use cases. So when it comes to use cases, how are we able to design for our lived realities? Because, as he said, the language, the different things that we are looking at. For example, when you look at the local needs, what are the things that we want, that we can adopt? For example, if I look at the use case of health, if you look at how people in other sectors have been looking at health, for example.

We’ve done a lot of work on designing for our needs in terms of breast cancer. We were able to get data sets from our lived context, knowing that when you look at the composition of the breast tissue for African women, it’s different. So those are the use cases that we need to look at. Because we can be confident and say that, yes, we don’t have compute, but we have the use cases, and that’s the important bit that we need to put into place. That in as much as we are disadvantaged, we have use cases, we have the people who are able to build. That’s the one component that we never talk about. We always talk about, you know, we are getting the data there, someone else is defining the rules.

But we can define the rules by building the tools that actually work for the people in our context and being confident that, you know, once it works for our context, that people are going to use.

Speaker 1

That’s right. I think that’s a really powerful statement, because at the end of the day, it’s only local people who have skin in the game who will build for local problems. And I think that’s where actually the opportunity also lies. So I think that’s a very critical intervention. And I’ll take that to you, Seema. So we’ve discussed the infrastructure layer, the design layer. And I think it would be good to get a holistic perspective as far as critical systems and sovereignty in critical systems is concerned, especially because, as Sunil was saying, that while we can certainly try to build compute and store it locally, it’s, again, a pipe dream to think that any country can do everything itself.

So there are, of course, questions of supply chains, trusted supply chains, who’s supplying what, and how that control is going to be exercised. So maybe a little bit from your experiences as to what sovereignty means for you, building a large data center, many large data centers now in India, but the rest of the world as well.

Seema

So first of all, thank you. I’ll just keep it short. I’ll answer this in two parts, and real quickly. So, a critical question at the critical moment. I think it’s very important; it’s like an important question for this decade. The first question is: can you be connected and sovereign? Yes, I don’t see a problem at all with being sovereign and connected. I think over there what is important to understand is basically that the strategic control that you need needs to be sovereign, and it remains sovereign. I think that’s the definition, more on sovereign. So you don’t need to really build everything yourself. So if you want me to just elaborate around the three: what does government really look like from its services? So it’s like public services, critical citizen data, financial networks, AI systems, an unlimited amount. So we’re not talking of outsourcing right over here; what we’re talking of is basically critical national infrastructure. I think it’s very important to define it not in general but in specific, right, what it means. So let’s look at three things. One is ownership.

Is ownership very important across all the components in the supply chain and in the critical infrastructure for government? Not really. Not really. I think we need to define the extent to which you want to have ownership. Second is visibility into ownership structures. And third, I think most important for all countries, whatever, developed, underdeveloped, developing, whatever it might be. I think it’s important for all of us to treat our digital assets like any other precious asset. And therefore, you have to have policies, guardrails that ensure whatever you have in a sovereign or semi-sovereign infrastructure is not compromised externally and you have a degree of assurance. Where you don’t have geopolitical leverage. I think that is important.

That defines sovereignty to a great extent. So what does it mean for industry? Industry, we have seen some really good models come up, right? So there is like the sovereign infrastructure model. I’ve seen some real good air-gapped, some kind of ring-fenced environments within the commercial infrastructure, which has been very interesting. And of course, the public-private, which still remains. What does it all mean? It means no national… We are not trying… The goal is not to nationalize. I think the goal is assurance, which is most important. That’s number one. That’s your strategic ownership question. The second is operational efficiency. I think over here, yes, degree of sovereignty does matter. It goes well beyond a few definitions of infrastructure that we have.

I think what is important here is to ensure the extent of operational control, look at efficiencies of operational control, the components within operational control that can be sovereign. And I think that’s what we’re trying to do. So what does it mean for industry? We need to build things that are transparent, traceable, and also observable. I think that is the core of your design. That is sovereign design. Then you decide how you want to implement it. So the second thing, what does it mean? It means trust. So trust is not paper-based. Trust can only be engineered, and it needs to be verified, in my opinion.

Okay, I’m quickly coming to the third question. I think you had so many things. Supply chain trust, absolutely. Today, if you look at data sovereignty, it goes well beyond data, digital data. It goes into hardware, chipsets, network components, AI provenance, a whole lot of stuff. So in this case, I think industry needs to basically, you can’t isolate yourself. I do not believe in that. You need to forge very good technology global partnerships. It is important. Again, another degree of trust. The second thing is, of course, you can have some guardrails around it by the government, and you can govern that. I think what is most important in this case is to build some sovereign capacity.

By domestic capacity, because in the age of AI, I strongly believe that the sovereign AI compute infrastructure has become a global leverage. So it is important, right? So these are my takes. And basically, what I also believe in this is national digital infrastructure for any country is like a national infrastructure, which could be like a power grid, a port, or a telecom. So you treat it with that level of whatever you need to do for it. Secondly, very good guardrails from the government to safeguard sovereignty and govern it. Industry should focus on innovation and not worry too much, whatever you can, not try to own everything, because it slows down your transition and your aspiration of growth.

And this

Speaker 1

That’s great. And I think one underlying point that you made across these three is of trust, because at the end of the day, you can’t build everything yourself. Sovereign nations don’t do things themselves, even in a non-AI analog world, so it’s not that you’re going to do everything yourself. But sovereignty is only partly what we say; more importantly, I think, it is what we do. And so I want to take that to each of you in terms of what you are doing in your own domains, in your own companies, and where that line lies: what am I going to do myself, what am I going to do with somebody else, and if so, how will I ensure that this person is trusted and I have control.

So Sunil you were saying about Yotta and what you do briefly so that we can get the others in.

Sunil

Yeah, sure. So I’ll just go by the actual example which we have sort of done in the last two years or so. Last week we inaugurated and made open to the world India’s AI language platform, which I think every government entity is using, Bhashini. We actually migrated that from a hyperscale cloud operator to our cloud. It’s a combination of a whole lot of general compute services and AI and GPUs and all, on which all those language models are working, which are giving real-time translation services. Now their purpose, considering that it is a digital public infrastructure: they were very, very clear that at no point of time do we want to be dependent on the platform service of a hyperscale operator, because that creates stickiness, so that you cannot come out of that platform.

Whether it is a hyperscale platform or, for that matter, Yotta’s platform, they don’t want to remain dependent on only one entity. They want it to be independent. They wanted a choice. We ended up not only giving them the physical infrastructure, which was obviously local in my data center, but we ended up creating almost, I can say, 30 or 40 different technologies we developed, put them on their virtual machines in their environment in my data center, and brought them into their control. They were not using PaaS anymore. At the last, when everything was going live, we suddenly realized there is one component, NVCF, which is an NVIDIA software tool, which was still running on NVIDIA’s platform somewhere in the US, and it was not running in India. And then they said, even though it is all fine, NVIDIA is my biggest technology supplier, giving me GPU, software, everything, but they said no, this cannot go; this software component is very critical for this whole structure, but it has to come within your control, into my environment. So what we have done after that, after everything was done, was also, and NVIDIA agreed, to open source that part of the software, brought the software into our environment, and now it is available and it is running within my control. This example is telling.

I’m just, in the same breath, telling you the same thing. I’m using the best of the foreign technologies. I’m using NVIDIA’s technologies. I’m using, of course, open source technologies. I’m using Microsoft technology. We have a great partnership with Azure. I’m using Amazon’s technologies. But I’m not using these technologies in the public cloud. I’m using their technology stack within my ring-fenced walls, within my GPU and CPU compute infrastructure. The access control of these technologies firmly lies with me. No third party is able to log into my system and control or dictate what will be running and what will not be running. And that, I think, is the real balance, that you use the best technologies. These guys have spent hundreds of years, put billions of dollars in creating great technologies.

We must benefit from that. But you use these technologies within your environment, within your control.

Speaker 1

That’s right. So I think partnership and not dependence. What I’m interested in, Nasubo, and I’ll come to you on this, is what you’re doing at Kala. Because what Sunil is saying may work, say. In a setting like India, where… tell NVIDIA perhaps that some part can be stored on something locally. But I’m thinking of Malawi, I’m thinking of Lesotho or Eswatini, I’m thinking of smaller Southern African countries, which also will want to use AI for solving local problems. And so what does it look like from your perspective, having done so much work in Southern Africa?

Nasubo

When you look at, let’s just first ground what Africa has, right? How are we going to use compute that also allows for offline? That is one of the use cases we are looking at, because in as much as digital connectivity is everywhere in Africa, it’s up to like 50%. So how do we also ensure that people are able to use? So one of the ways that we do this is, one, we are working with global partners to give us compute, but at the same time we also want to buy compute, for ourselves, because the… conversations that he’s talking about in the rules, creating the rules and the structure can only be done once you also understand what is happening.

So at Kala, we are also offering compute to different innovators. And if you go to our stand in Hall 14, you are able to interact with different African innovators from the AI village who are building AI innovations. And part of those innovations are innovations that allow for offline access. That is the one thing that we need to be cognizant of. We need to understand how we need to work practically. That’s something that Kala is actively building, actively championing for. So that when we’re having even conversations with government, we are going to them and saying, yes, compute is something that we may not have. But if you approach, let’s say, big tech and you’re talking about offering compute, offering us…

being sovereign, this is what it means. So we are also having conversations with different African governments to talk about what we are learning, what people are building, and now having once they have their understanding now we can continue ensuring that our use cases are well represented. Because if we just take things that are dictated to us without having like a perspective it means that we are building for exclusion. And for us we want to ensure that all voices are well represented, including the people who are offline, who want to use AI for solving use cases in our sectors.

Speaker 1

That’s right, and I think this is resonating greatly with the fact that you’re building for offline, because when we were doing Aadhaar in India and the legal framework for Aadhaar, which underlies all our DPIs, one of the key game changers was moving from online authentication to offline verification, because we realized that that was a big need. So this is where the global south, I think, needs to learn from each other, because these problems are somewhere similar. Last word perhaps to you, Seema, on this, in terms of your actual experiences in ensuring that you have control over whatever is within your ring fence, but what’s outside is something that you trust and you think will further the goal of sovereignty and sovereign AI, as you mentioned.

Something from your practical experiences.

Seema

Let me just give you what most of us are doing and why it is pertinent and important. We are building, currently we are building for demand, so it’s like gigawatt AI factories, a huge amount of compute, a huge amount of data centers; it has to be done responsibly in all ways, and a lot of money. I think what’s important is how this works between the government and the enterprise. I think that is the recipe for success. So there are three, four things on which I have a take. One, of course, is basically that the policies need to evolve along with the infrastructure. They are not based at the same. So that, I think, is important. The second thing is government must lay the sovereign guardrails.

It’s all spoken about, but you don’t have them. So it’s very difficult. Third, I think what is also important in every country, to help the industry build that capacity, is to not only have long-term stability of your policy, but also look at commitment, so that private enterprises, private industry, are confident of building that huge capacity for you. I think that’s very, very key. And last but not the least is definitely look at security and regulatory, not at point-in-time checks, but move it to a continuous verification process. This will ensure your sovereignty is implementable. You can also, you know, kind of enforce it and get the best results out of it in terms of outcome.

My closing remarks. One, of course, I did speak about before in terms of how you treat this asset. You’ve got to treat it like any other national asset. Second is government needs to extend that hand in becoming an absolute sovereign partner or a public-private partner to the industry. And third is industry needs to really focus on innovation, scale, time to value, time to market. I think that’s where your sole energy should go. And last but not the least, this is a co-accountability for every country. It can’t be one over the other, right? And that will ensure you safeguard your national interest and also do scale and progress without compromising your transformation times and things like that.

So you’re not left behind. See, AI is a journey where we don’t want any country to be left behind. One, lack of… resources, lack of definitions, security, sovereignty, access. I think we need to have that. I really like the theme. It says welfare for all and happiness of all and that should really be the case if it is so very transforming in nature.

Speaker 1

That’s right. And I think if we were to quickly wrap up with some takeaways: the purpose of this session was that data sovereignty shouldn’t be just theoretical, a slogan. It has to work in practice. And what I took away from the three of you, who are actually walking the talk on data sovereignty, is: A, the role of the market is essentially to build sovereign AI in whichever country you may be in, and build it yourself, store locally, and ensure that you have trusted partners when you are partnering with someone, because it’s obviously futile to even think about doing everything yourself. As far as governments are concerned, again, this is, I like the word you used, co-accountability, this is a partnership.

And I think government has to build guardrails, but hand in hand with both the bazaar and the samaj, that’s the market and the society. And as far as the samaj is concerned, the society, and Kala mentioned that, at the end of the day, we mustn’t forget that what we are trying to do is solve real problems for real people. So, like she mentioned, the breast tissue in an African woman is different from somewhere else; that’s the person whom we are trying to serve. And I think that that is what is imperative for all of us to do. And I think it’s appropriate to end with what Gandhiji said, that we must think about the last person in the line.

And I think when we are talking about AI, just because we are in a kind of modern technocratic age, we shouldn’t forget that it’s that last person, the man or woman in the queue, the most unfortunate, who we must think about. Because at the end of the day, that is for whom AI is built and that is for whom we are talking about sovereignty. So we leave it there. Thank you very much, ladies and gentlemen, and thank you to my panelists for a wonderful session. Thank you.

Speaker 5

Please, could you all please wait for a second? I’ll just hand over your mementos. I request everyone to please settle down. We will be bringing the next session very soon. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (19)
Factual Notes
Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The moderator opened the session by defining digital sovereignty as “who gets to make the rules” and noting that the debate usually centres on the location of data and compute.”

The knowledge base records the moderator Arghya Sengupta framing the central question as “who gets to make the rules” rather than where data is stored, confirming the report’s description.

Confirmed (high)

“Sunil described migrating the national AI language platform Bhashini from a hyperscale public cloud to Yotta’s locally controlled data centre.”

Sunil Gupta is reported to have migrated the Bhashini language platform from a public cloud to a sovereign data centre, which aligns with the report’s description of moving a national AI language platform to a locally-controlled facility [S84].

Confirmed (medium)

“Sunil argued that sovereignty does not require total isolation but control over strategically essential compute infrastructure.”

The discussion notes that sovereignty in compute is desirable and possible, reflecting Sunil’s view that control, not isolation, is the goal [S21].

Additional Context (medium)

“India’s linguistic diversity creates a need for native‑language voice AI capable of handling regional slang and real‑time responses.”

Other sources highlight India’s multilingual landscape and the importance of voice-first, multilingual AI for health and broader applications, supporting the claim about linguistic diversity driving AI needs [S81] and [S83].

Additional Context (low)

“Using foreign technologies like NVIDIA, Microsoft, Amazon is acceptable if they run inside a sovereign, ring‑fenced compute stack.”

The edge-cloud discussion emphasizes distributing compute and keeping critical workloads within sovereign infrastructure, providing context for using foreign tech inside a ring-fenced stack [S86].

External Sources (92)
S1
Panel Discussion Data Sovereignty India AI Impact Summit — Nasubo Ongoma, Arghya Sengupta, Sunil Gupta
S2
Keynote-Vishal Sikka — -Sunil: Role/Title: Not specified; Area of expertise: Not mentioned (referenced in relation to Airtel)
S3
https://dig.watch/event/india-ai-impact-summit-2026/global-enterprises-show-how-to-scale-responsible-ai — One last round, okay? Again, I’ll start with Sunil. Should we have mandatory watermarking in all the media text and all …
S4
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Great. Thank you. I think we have had a lot of good nuggets from everyone. I think we’ll continue this conversation afte…
S5
Knowledge Café: Youth building the digital future – WSIS+20 Review and Beyond 2025 — – **Speaker 5** – Role/expertise not specified Speaker 5: Sure. So what we talked about as a group is we discussed this…
S7
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued) – session 5 — The Chair’s instrumental role in facilitating consensus-centric discussions has been recognised with gratitude by South …
S8
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — -Moderator- Role: Event moderator (specific title/expertise not mentioned) -Seema Ambasta- Chief Executive Officer, L&T…
S9
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-mathias-cormann-oecd-secretary-general-india-ai-impact — Thank you so much, Secretary General of OECD. These remarks, we’re very grateful for your remarks. For the next panel on…
S10
Panel Discussion Data Sovereignty India AI Impact Summit — Seema Ambastha provided a framework for understanding sovereignty through strategic control, operational efficiency, and…
S11
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S12
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S13
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S14
WS #43 States and Digital Sovereignty: Infrastructural Challenges — Balancing Sovereignty and Cooperation Ekaterine Imedadze: Yes, sir. Very challenging questions, let me put it this way….
S15
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — This is a reality we cannot ignore. But the key question is this. Will this concentration of power become a permanent st…
S16
Digital sovereignty in Brazil: for what and for whom? | IGF 2023 Launch / Award Event #187 — Audience: Thank you very much, Ana. I think in regards to the second questions from Raul, if that’s going to be a patchwo…
S17
Host Country Open Stage — This paradoxical statement challenges the typical understanding of digital sovereignty as protectionist or isolationist….
S18
AI: Lifting All Boats / DAVOS 2025 — Ring-fenced data solutions can help address data sovereignty concerns
S19
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Hannah Taieb: Real diversity is very important indeed, and it all depends on the models and business models. Algorithms a…
S20
Day 0 Event #270 Everything in the Cloud How to Remain Digital Autonomous — Argentina adopted multi-cloud architecture approach, prioritizing local providers alongside big ones, guaranteeing porta…
S21
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-data-sovereignty-india-ai-impact-summit — I’m just in same breath, I’m telling you same thing. I’m using the best of the foreign technologies. I’m using Nvidia’s …
S22
From KW to GW Scaling the Infrastructure of the Global AI Economy — NVIDIA’s contribution to India’s AI ecosystem includes sharing reference designs for AI factories, open-sourcing control…
S23
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — Sovereignty doesn’t mean isolation – need cooperation, open science and shared global ethics
S24
WS #462 Bridging the Compute Divide a Global Alliance for AI — Beyond physical infrastructure, Jason Slater emphasized that compute deserts are characterized not only by lack of conne…
S25
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2,000 or so lan…
S26
Empowering Workers in the Age of AI — Current AI models suffer from significant bias because they are trained primarily on data from developed countries and h…
S27
What is it about AI that we need to regulate? — Addressing the Tension Between Digital Sovereignty and Global Internet InteroperabilityThe tension between digital sover…
S28
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — His discussion of sovereignty defined it not as technological isolation but as maintaining national capability and contr…
S29
WS #180 Protecting Internet data flows in trade policy initiatives — Audience: Hello, I hope you can hear me. My name is Mark Taylor. I’m a senior project manager at the Council of Europe…
S30
Global AI Policy Framework: International Cooperation and Historical Perspectives — The concept includes practical elements such as cloud and data standards that guarantee interoperability and reversibili…
S31
AI Meets Cybersecurity Trust Governance &amp; Global Security — “AI governance now faces very similar tensions.”[27] “AI may shape the balance of power, but it is the governance or AI t…
S32
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S33
AI as critical infrastructure for continuity in public services — “If you think about linguistic diversity that is there in many of the communities, in many of the countries of this worl…
S34
Harnessing Collective AI for India’s Social and Economic Development — Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance Professo…
S35
Workshop 2: The Interplay Between Digital Sovereignty and Development — The workshop highlighted that digital sovereignty cannot be achieved through technical or regulatory measures alone but …
S36
Building a Digital Society, from Vision to Implementation — Gary Patterson: Yes. Thanks. Thanks, Chris. So, as we said before, the small nations like Jamaica face these severe cons…
S37
WS #241 Balancing Acts 2.0: Can Encryption and Safety Co-Exist? — These key comments fundamentally shaped the discussion by establishing it as a collaborative problem-solving exercise ra…
S38
Day 0 Event #236 EU Rules on Disinformation Who Are Friends or Foes — However, Shultz offered a pragmatic path forward through recent bipartisan success in banning non-consensual deepfake po…
S39
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S40
Panel Discussion Data Sovereignty India AI Impact Summit — Both speakers agree that sovereignty should involve strategic partnerships and collaboration rather than complete self-r…
S41
Host Country Open Stage — This paradoxical statement challenges the typical understanding of digital sovereignty as protectionist or isolationist….
S42
Main Topic 3: Europe at the Crossroads: Digital and Cyber Strategy 2030 — Decision-making and governance of digital sovereignty initiatives must be socially driven and transparent to ensure that…
S43
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — The framework advocated for worker-centric AI development that complements rather than replaces human labour, addressing…
S44
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — -Operational sovereignty: Ensuring continuity under external pressure Virkkunen articulated sovereignty as “having choi…
S45
From KW to GW Scaling the Infrastructure of the Global AI Economy — NVIDIA’s contribution to India’s AI ecosystem includes sharing reference designs for AI factories, open-sourcing control…
S46
AI and Global Challenges: Ethical Development and Responsible Deployment — Alfredo Ronchi: Most interesting presentation from the standpoint of China. Thanks a lot for this date. And now we will t…
S47
AI in 2026: Learning to live with powerful systems — Purpose-built models designed for specific domainsbegin to play a more prominent role. In healthcare, education, public …
S48
S49
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Audience: So, if I understood correctly, you plan to feed the chat GPT? or the AI engine with local content, local cultur…
S50
Waves of infrastructure Open Systems Open Source Open Cloud — “what we’re doing in Proximal Cloud”[76]. “The word Proximal brings compute closer to your data”[77]. “The word Proximal…
S51
Advancing Scientific AI with Safety Ethics and Responsibility — And so the fragmentation risk is actually not a technical risk, I would argue, because it’s not just a technical risk, b…
S52
Panel #3: « Gouverner les données : entre souveraineté, éthique et sécurité à l’ère de l’interconnexion » — Emmanuelle Ganne: That’s a lot to cover. When we talk about the data economy, perhaps I will put the emphas…
S53
Multi-stakeholder Discussion on issues about Generative AI — Luciano Mazza de Andrade: Sorry I was off. Thank you very much, Yoshi. Well, I think our colleagues and previous speakers…
S54
India outlines plan to widen AI access — India’s government has set out plans to democratise AI infrastructure nationwide. The strategy focuses on expanding access…
S55
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Data sovereignty policies requiring local data storage are essential to drive domestic data center investment and capita…
S56
AI as critical infrastructure for continuity in public services — Data sovereignty requires control over jurisdiction, keys, and infrastructure beyond just local data storage
S57
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ebba-busch-deputy-prime-minister-sweden — If AI is to become electable in our democracies, policymakers must find a way to translate complexity into tangible bene…
S58
Contents — Beyond the direct and indirect support that government can provide for the development of UK quantum tech businesses, is…
S59
What is it about AI that we need to regulate? — TheOpen Forum on Local AI Policy Pathwaysemphasized the importance of building indigenous technological capabilities. An…
S60
Host Country Open Stage — – Christian Sorby Larsen Silva contends that digital sovereignty means ensuring platforms and tools reflect national va…
S61
Panel Discussion Data Sovereignty India AI Impact Summit — Data sovereignty means controlling who makes the rules and maintaining strategic control, not isolating from global part…
S62
What is it about AI that we need to regulate? — Addressing the Tension Between Digital Sovereignty and Global Internet InteroperabilityThe tension between digital sover…
S63
Keynote-Jeet Adani — Industrial corridors will integrate energy and compute planning. Storage and grid stability will become national priorit…
S64
WS #180 Protecting Internet data flows in trade policy initiatives — Sabhanaz Rashid Diya: Thank you, Ramin, and it’s very good to be here with a number of experts, both on-site and online…
S65
Cross-Border Data Flows: Harmonizing trust through interoperability mechanisms (DCO) — Some nations do not permit healthcare data to be expatriated. There is a need for local storage and analysis mechanism.
S66
Harnessing Collective AI for India’s Social and Economic Development — Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance Professo…
S67
AI as critical infrastructure for continuity in public services — Trust building must occur at multiple levels simultaneously. While global frameworks provide necessary foundations, trus…
S68
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Trust building requires transparency, explainability, and stakeholder involvement
S69
Toward Collective Action_ Roundtable on Safe &amp; Trusted AI — -What Africans want from AI systems: Panelists emphasized the need for empowerment and agency rather than dependency, eq…
S70
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Building trust with regulators requires sustained periods of respectful, honest, transparent relationships and knowledge…
S71
Workshop 2: The Interplay Between Digital Sovereignty and Development — The workshop highlighted that digital sovereignty cannot be achieved through technical or regulatory measures alone but …
S72
Building a Digital Society, from Vision to Implementation — ## Strategic Partnerships as Critical Success Factors Gary Patterson: Yes. Thanks. Thanks, Chris. So, as we said before…
S73
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — The conversation’s evolution from technical infrastructure concerns to questions of sovereignty, value creation, and equ…
S74
Closing Session  — Minister Tijani’s comment solidified the proactive framework as the summit’s core achievement and elevated the discussio…
S75
WS #241 Balancing Acts 2.0: Can Encryption and Safety Co-Exist? — These key comments fundamentally shaped the discussion by establishing it as a collaborative problem-solving exercise ra…
S76
From KW to GW Scaling the Infrastructure of the Global AI Economy — NVIDIA’s contribution to India’s AI ecosystem includes sharing reference designs for AI factories, open-sourcing control…
S77
Local, Everywhere: The blueprint for a Humanitarian AI transformation — In addition, most humanitarian use cases, such as decision support, scenario planning, knowledge search, language assist…
S78
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Collaboration. A collaboration, honestly, is not just a transactional process. It begins here, right? The will to unders…
S79
Network Session: Digital Sovereignty and Global Cooperation | IGF 2023 Networking Session #170 — Audience:Yeah, Alexandre Savnin, Zafri University. So, I was the first person who was saying that there is a tension bec…
S80
Artificial General Intelligence and the Future of Responsible Governance — Mr. Simonas Satunas offered a compelling metaphor, comparing the situation to a 19th-century prophet predicting travel f…
S81
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Amish points out that most global AI models operate in English, making Indian‑language capability crucial for the countr…
S82
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S83
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Voice technology and multilingual capabilities were highlighted as crucial horizontal solutions for healthcare AI in Ind…
S84
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — Hello, good afternoon. Good afternoon. Good afternoon. My name is Sunil Gupta. I am co -founder and CEO of IOTA. So we r…
S85
Building Scalable AI Through Global South Partnerships — And we make government accountable for a lot of it. Just as we’re accountable for the technical side. The other really k…
S86
https://dig.watch/event/india-ai-impact-summit-2026/heterogeneous-compute-for-democratizing-access-to-ai — That’s the edge cloud. And as you go deeper from there onwards, then you have the data centers. It then mitigates the ov…
S87
Microsoft at 50 – A journey through code, cloud, and AI — Microsoft, the American tech giant, was founded 50 years ago, on 4 April 1975, by Harvard dropout Bill Gates and his child…
S88
ITU-T X-SERIES RECOMMENDATIONS — Migrating to the cloud often implies moving large amounts of data and major configuration changes (e.g., network address…
S89
Accelerating an Inclusive Energy Transition | IGF 2023 Open Forum #133 — Neil Yorke-Smith: Well, hello. Good afternoon, everybody. Or good morning from the Netherlands. It’s nice to be here and …
S90
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Bilel Jamoussi: Since you mentioned Meta, I’ll go to Melinda and ask you about Meta has made significant contributions to…
S91
Next Steps for Digital Worlds — In conclusion, the Metaverse and virtual reality offer exciting possibilities for connectivity and advancements in vario…
S92
https://dig.watch/event/india-ai-impact-summit-2026/from-kw-to-gw-scaling-the-infrastructure-of-the-global-ai-economy — And the last piece, which is the application, right? I’m sure you would have visited the booth downstairs on the Hall 5….
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
2 arguments, 190 words per minute, 1213 words, 381 seconds
Argument 1
Sovereignty is about who makes the rules and controls digital destiny, not total self‑sufficiency
EXPLANATION
Speaker 1 frames sovereignty as the authority to set the rules governing digital systems rather than a goal of complete self‑reliance. Control over the digital destiny of a nation is emphasized over merely owning every component.
EVIDENCE
Speaker 1 states that “the key question of sovereignty is a question of who gets to make the rules” and later adds that “as far as control is concerned, we should try to have control” indicating that rule-making and control, not total self-sufficiency, define sovereignty [5][48-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The India AI Impact Summit panel frames sovereignty as rule-making and control rather than full self-reliance [S1], and other analyses stress the need to balance sovereignty with cooperation rather than isolation [S14][S23].
MAJOR DISCUSSION POINT
Definition and Scope of Sovereignty
AGREED WITH
Sunil, Seema
Argument 2
Co‑accountability among market, government, and society is needed to implement sovereignty in practice
EXPLANATION
Speaker 1 argues that effective sovereignty requires shared responsibility among the private sector, the state, and civil society. The focus is on collaborative action rather than unilateral control.
EVIDENCE
Speaker 1 says “the underlying point … sovereignty is only partly what we say, but more importantly, I think is what we do… co-accountability” and later summarises the session by stressing the need for market, government and society to work together [130-133][214-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Co-accountability is highlighted in the summit discussion as essential for practical sovereignty [S1], and multiple sources underline the importance of collaborative governance and guardrails [S14][S17][S23].
MAJOR DISCUSSION POINT
Governance
AGREED WITH
Seema, Sunil
Sunil
5 arguments, 183 words per minute, 1158 words, 379 seconds
Argument 1
Sovereignty is often confused with isolation; true sovereignty balances control with collaboration
EXPLANATION
Sunil points out that many equate sovereignty with complete self‑isolation, but real sovereignty means deciding what to control and what to collaborate on. Isolation is therefore a misconception.
EVIDENCE
Sunil notes “sovereignty is also confused with we will do everything ourselves… sovereignty for sure does not mean we become isolated” and stresses the need for collaboration across technology stacks [15-22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Commentary on digital sovereignty repeatedly notes that true sovereignty requires strategic collaboration and not isolation [S14][S15][S17][S23].
MAJOR DISCUSSION POINT
Definition and Scope of Sovereignty
AGREED WITH
Speaker 1, Seema
Argument 2
Compute infrastructure must reside within national borders to ensure control and support native‑language services
EXPLANATION
Sunil argues that core compute resources—where data is processed, stored and models are trained—must be physically located inside the country to guarantee sovereign control and to enable services in local languages and slang.
EVIDENCE
He explains that “compute infrastructure … has to be within your country… that is where your data is getting processed… that is where your models are being made” and links this to delivering voice-based AI in native languages [25-31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sunil’s own remarks stress domestic compute for control and local language AI [S1], and ring-fenced, locally-hosted solutions are advocated as a way to guarantee sovereignty [S18][S22][S20].
MAJOR DISCUSSION POINT
Infrastructure Layer – Sovereign Compute and Data Storage
AGREED WITH
Speaker 1, Seema
DISAGREED WITH
Seema
Argument 3
Foreign technologies can be run inside a locally controlled, ring‑fenced data centre to maintain sovereignty
EXPLANATION
Sunil demonstrates that while foreign tools (e.g., NVIDIA, Microsoft, Amazon) are used, they are deployed inside a domestically owned, air‑gapped environment, preserving sovereign control over the stack.
EVIDENCE
He describes how NVIDIA’s software was open-sourced and moved into the Indian data centre, and how “we are using … technologies within my ring-fenced walls, within my GPU and CPU compute infrastructure” with full access control [142-152].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel notes that foreign tools can be deployed inside air-gapped, ring-fenced environments to preserve sovereign control [S1][S18][S21][S22].
MAJOR DISCUSSION POINT
Infrastructure Layer – Sovereign Compute and Data Storage
Argument 4
Language diversity requires sovereign models tailored to native languages and slang
EXPLANATION
Sunil stresses that India’s multilingual population necessitates AI models that understand and generate responses in local languages and colloquial expressions, which can only be achieved with sovereign, locally trained models.
EVIDENCE
He cites the need for “voice-based AI … in their own native language with my own slang” and notes India’s mix of Hindi, English, Malayalam, Kannada etc. as the driver for such models [28-31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ensuring linguistic diversity through sovereign data sets and models is highlighted as a priority for culturally relevant AI [S19][S26][S25].
MAJOR DISCUSSION POINT
Design Layer – Local Relevance, Use Cases, and Offline Capability
AGREED WITH
Nasubo, Seema
Argument 5
Use best foreign technologies while keeping access control locally to avoid dependence on a single provider
EXPLANATION
Sunil argues that leveraging world‑class hardware and software is acceptable provided the deployment remains under national control, preventing lock‑in to any single vendor.
EVIDENCE
He lists using NVIDIA, Microsoft, Amazon, and Azure technologies “within my ring-fenced walls” and stresses that “no third party is able to log into my system and control” the environment [144-154].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sunil emphasizes leveraging world-class hardware and software within a controlled domestic environment, echoing multi-cloud strategies that mix local and global providers while preserving autonomy [S1][S21][S20][S22].
MAJOR DISCUSSION POINT
Trust, Supply Chain, and Partnership vs. Dependence
AGREED WITH
Nasubo, Seema
Nasubo
5 arguments, 168 words per minute, 679 words, 241 seconds
Argument 1
Sovereignty cannot be achieved by complete isolation; trusted partnerships are essential
EXPLANATION
Nasubo contends that while Africa lacks compute capacity, it can still achieve sovereignty by forming trusted partnerships and leveraging its rich data and use‑case knowledge. Isolation is not a viable path.
EVIDENCE
He notes “we are disadvantaged … we don’t have compute … but we have data and use cases” and stresses the need to “define the rules … with trusted partnerships” [57-65][168-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources argue that sovereignty must coexist with trusted international partnerships rather than isolation [S14][S17][S23][S24][S25].
MAJOR DISCUSSION POINT
Definition and Scope of Sovereignty
AGREED WITH
Sunil, Seema
Argument 2
Limited local compute is a disadvantage, but data and use cases can drive design; some compute capacity is still needed
EXPLANATION
Nasubo points out that Africa’s compute share is roughly 1 % of global capacity, yet abundant data and sector‑specific use cases can guide AI design, while advocating for incremental local compute acquisition.
EVIDENCE
He states “compute capacity is like at 1 % … we are at disadvantage … but we have data, we have use cases” and later mentions building models for health, breast cancer, etc. [57-58][59-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of compute deserts note that data and domain expertise can guide AI design while advocating incremental local compute acquisition and partnerships for capacity [S24][S25][S22].
MAJOR DISCUSSION POINT
Infrastructure Layer – Sovereign Compute and Data Storage
Argument 3
AI models must reflect local data and context (e.g., breast tissue differences in African women) to be effective
EXPLANATION
Nasubo illustrates that AI for health must be trained on region‑specific data, such as the distinct breast‑tissue composition of African women, to ensure accuracy and relevance.
EVIDENCE
He describes work on “breast cancer … data sets from our lived context … composition of the breast tissue for African women, it’s different” [58-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The bias of models trained on non-local data and the need for region-specific datasets are documented as essential for accurate AI in health and other domains [S26][S19][S25].
MAJOR DISCUSSION POINT
Design Layer – Local Relevance, Use Cases, and Offline Capability
AGREED WITH
Sunil, Seema
Argument 4
Offline access is crucial where connectivity is low; solutions must function without constant internet
EXPLANATION
Nasubo emphasizes that many African regions have only about 50 % connectivity, so AI solutions must be capable of offline operation to reach the underserved population.
EVIDENCE
He notes “digital connectivity is everywhere in Africa, it’s up to like 50 % … we need offline access” and cites building innovations that allow for offline use at the AI village stand [165-168][170-173].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Compute-desert studies highlight limited connectivity as a barrier, underscoring the importance of offline-capable AI solutions for underserved regions [S24].
MAJOR DISCUSSION POINT
Design Layer – Local Relevance, Use Cases, and Offline Capability
Argument 5
Collaboration with global partners can provide compute while maintaining sovereign control
EXPLANATION
Nasubo explains that African innovators can obtain compute from global providers while simultaneously investing in locally owned hardware, ensuring that sovereignty is preserved through shared understanding of rules and structures.
EVIDENCE
He says “we are working with global partners to give us compute … we also want to buy compute for ourselves … this is what sovereignty means” and describes offering compute to innovators at Kala’s stand [168-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Partnership models that combine external compute resources with domestic governance are presented as a path to sovereignty [S22][S23][S14].
MAJOR DISCUSSION POINT
Trust, Supply Chain, and Partnership vs. Dependence
Seema
8 arguments, 148 words per minute, 1247 words, 503 seconds
Argument 1
Sovereignty means strategic ownership, visibility, and assurance rather than full ownership of every component
EXPLANATION
Seema argues that sovereignty is achieved through clear ownership structures, visibility into those structures, and robust assurance mechanisms, not by owning every piece of hardware or software outright.
EVIDENCE
She outlines three pillars – “ownership”, “visibility into ownership structures”, and “treat digital assets like any other precious asset with policies and guardrails” [78-85].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion on strategic ownership and visibility aligns with broader views that sovereignty is about assurance and control, not total nationalization [S14][S23][S1].
MAJOR DISCUSSION POINT
Definition and Scope of Sovereignty
AGREED WITH
Speaker 1, Sunil
Argument 2
Critical national infrastructure should be treated as precious assets with policies ensuring security and guardrails
EXPLANATION
Seema stresses that critical digital infrastructure must be protected like traditional national assets, requiring policies, guardrails, and air‑gapped environments to prevent external compromise.
EVIDENCE
She references “sovereign infrastructure model”, “air-gapped, ring-fence environments”, and the need for “policies, guardrails … not compromised externally” [84-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Treating digital infrastructure as a precious asset with guardrails is advocated in several sources addressing infrastructure challenges and security policies [S14][S15][S18][S20].
MAJOR DISCUSSION POINT
Infrastructure Layer – Sovereign Compute and Data Storage
AGREED WITH
Sunil, Speaker 1
Argument 3
Policies must evolve with infrastructure to enable designs that serve local needs
EXPLANATION
Seema notes that policy frameworks need to keep pace with rapid infrastructure development so that designs can be responsive to local requirements and emerging technologies.
EVIDENCE
She states “policies need to evolve along with the infrastructure” and highlights the importance of continuous policy-technology alignment [188-191].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for adaptive policy frameworks that keep pace with rapid infrastructure development is emphasized in sovereignty analyses [S14][S20][S23].
MAJOR DISCUSSION POINT
Design Layer – Local Relevance, Use Cases, and Offline Capability
AGREED WITH
Sunil, Nasubo
Argument 4
Trust must be engineered and verified; global partnerships are needed but must operate within sovereign boundaries
EXPLANATION
Seema argues that trust cannot rely on paperwork; it must be built into systems and verified, while global collaborations should respect sovereign limits and be governed by clear guardrails.
EVIDENCE
She says “trust is not paper-based … trust can only be engineered, and it needs to be verified” and later adds that “you need to forge very good technology global partnerships … with guardrails” [109-112][118-120].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Engineering trust into systems and governing partnerships with clear guardrails is highlighted as essential for sovereign AI ecosystems [S23][S18][S22].
MAJOR DISCUSSION POINT
Trust, Supply Chain, and Partnership vs. Dependence
Argument 5
Supply‑chain transparency and trusted hardware/components are vital for sovereignty
EXPLANATION
Seema highlights that sovereignty extends beyond data to include hardware, chipsets, network gear, and AI provenance, all of which must be sourced through transparent, trusted supply chains.
EVIDENCE
She notes “today, data sovereignty goes well beyond data … it goes into hardware, chipsets, network components, AI provenance … supply-chain trust” [113-116].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Supply-chain transparency, including trusted hardware and chipsets, is identified as a core component of digital sovereignty challenges [S14].
MAJOR DISCUSSION POINT
Trust, Supply Chain, and Partnership vs. Dependence
AGREED WITH
Sunil, Nasubo
Argument 6
Government must set sovereign guardrails, ensure policy stability, and share co‑accountability with industry and society
EXPLANATION
Seema calls for stable, long‑term policies and clear guardrails from governments, coupled with a partnership model in which industry and civil society share responsibility for sovereign outcomes.
EVIDENCE
She mentions “policies need to evolve”, “government must lay the sovereign guardrails”, “long-term stability of policy”, and “co-accountability” in her closing remarks [188-196][200-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Co-accountability and stable, long-term policy guardrails are repeatedly called for in discussions of sovereign governance [S1][S14][S23].
MAJOR DISCUSSION POINT
Governance
AGREED WITH
Speaker 1, Sunil
Argument 7
Governments should not nationalize everything but must ensure strategic control and assurance
EXPLANATION
Seema clarifies that the aim is not to own every component but to achieve assurance through strategic control, avoiding full nationalization while protecting national interests.
EVIDENCE
She states “the goal is not to nationalize … the goal is assurance … that’s your strategic ownership question” [94-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses stress that sovereignty requires strategic control rather than full nationalization, supporting a balanced approach to ownership [S23][S14].
MAJOR DISCUSSION POINT
Governance
Argument 8
Continuous verification and security oversight are required rather than point‑in‑time checks
EXPLANATION
Seema advocates for ongoing security and regulatory monitoring instead of occasional audits, ensuring that sovereign digital assets remain protected over time.
EVIDENCE
She calls for a “continuous verification process” rather than “point-in-time checks” to keep sovereignty implementable [197-199].
MAJOR DISCUSSION POINT
Governance
Agreements
Agreement Points
Sovereignty is defined by the ability to set rules and maintain control rather than achieving total self‑sufficiency.
Speakers: Speaker 1, Sunil, Seema
Sovereignty is about who makes the rules and controls digital destiny, not total self‑sufficiency
Sovereignty is often confused with isolation; true sovereignty balances control with collaboration
Sovereignty means strategic ownership, visibility, and assurance rather than full ownership of every component
All three speakers stress that sovereignty means having the authority to decide how digital systems are governed and to keep critical parts under national control, while rejecting the notion that a country must build everything itself [5][48-49][15-22][78-85].
POLICY CONTEXT (KNOWLEDGE BASE)
This view echoes the consensus at the India AI Impact Summit that sovereignty means strategic control while leveraging global expertise rather than full self-reliance [S40] and aligns with the nuanced definition emphasizing collaboration and interoperability [S41] as well as European tech-sovereignty framing of operational autonomy [S44].
Effective sovereignty requires co‑accountability and partnership among market, government and civil society.
Speakers: Speaker 1, Seema, Sunil
Co‑accountability among market, government, and society is needed to implement sovereignty in practice. Government must set sovereign guardrails, ensure policy stability, and share co‑accountability with industry and society. Use best foreign technologies while keeping access control locally to avoid dependence on a single provider.
Speaker 1 frames sovereignty as a shared responsibility, Seema calls for guard-rails and joint accountability, and Sunil highlights the need for partnership rather than dependence, showing a common view that multi-stakeholder collaboration is essential [130-133][214-218][144-154].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder governance is highlighted in the EU Digital Strategy calling for socially driven, transparent accountability mechanisms [S42] and reinforced by calls for democratic cooperation in AI infrastructure [S57]; the India summit also stressed partnership across sectors [S40].
AI systems must be tailored to local languages, cultures and data contexts to be effective.
Speakers: Sunil, Nasubo, Seema
Language diversity requires sovereign models tailored to native languages and slang. AI models must reflect local data and context (e.g., breast tissue differences in African women) to be effective. Policies must evolve with infrastructure to enable designs that serve local needs.
Sunil stresses multilingual AI, Nasubo gives a health-care example of region-specific data, and Seema notes that policy must keep pace with locally relevant designs, indicating consensus on the need for culturally and linguistically appropriate AI [28-31][58-60][188-191].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on cultural and linguistic diversity stress feeding AI with local content and models [S49]; policy-harmonisation work stresses adapting AI policies to local needs [S48]; domain-specific purpose-built models are promoted as more practical and easier to govern [S47]; offline, simple-interface solutions further underline localisation [S43].
Sovereignty can be achieved through trusted global partnerships and transparent supply‑chains rather than full isolation.
Speakers: Sunil, Nasubo, Seema
Use best foreign technologies while keeping access control locally to avoid dependence on a single provider. Sovereignty cannot be achieved by complete isolation; trusted partnerships are essential. Supply‑chain transparency and trusted hardware/components are vital for sovereignty.
All three speakers agree that leveraging foreign technology is acceptable if it is deployed within a sovereign-controlled environment and that transparent, trusted supply chains and partnerships are necessary to maintain sovereignty [144-154][57-65][113-116].
POLICY CONTEXT (KNOWLEDGE BASE)
Both speakers at the India AI Impact Summit argued for strategic partnerships over isolationist self-reliance [S40]; European tech-sovereignty literature frames sovereignty as choice in partnerships, not forced dependencies [S44]; broader democratic consensus notes no nation can build resilient AI alone [S57].
Domestic compute and data storage are essential for sovereign control of AI services.
Speakers: Sunil, Speaker 1, Seema
Compute infrastructure must reside within national borders to ensure control and support native‑language services. Infrastructure layer: sovereignty in compute is not only desirable but possible. Critical national infrastructure should be treated as precious assets with policies ensuring security and guardrails.
Sunil argues that core compute must stay inside the country, Speaker 1 affirms that sovereign compute is feasible and desirable, and Seema stresses that critical digital infrastructure needs protective policies, showing a shared view on the importance of domestic compute and storage [25-27][34-41][48-49][84-92].
POLICY CONTEXT (KNOWLEDGE BASE)
Indian policy mandates local data storage to drive domestic data-center investment [S55] and broader AI-as-critical-infrastructure guidance stresses jurisdictional and key control beyond mere storage [S56]; the Proximal Cloud concept emphasizes bringing compute close to data for national sovereignty [S50]; NVIDIA’s open-sourced control-plane for local inferencing illustrates partnership-enabled domestic compute [S45].
Similar Viewpoints
Both highlight that isolation is a misconception and that sovereignty must be pursued through selective control combined with trusted external partnerships [15-22][57-65].
Speakers: Sunil, Nasubo
Sovereignty is often confused with isolation; true sovereignty balances control with collaboration. Sovereignty cannot be achieved by complete isolation; trusted partnerships are essential.
Both stress that foreign technology can be used safely only when it is placed inside a sovereign‑controlled, trust‑engineered environment [144-154][109-112][118-120].
Speakers: Sunil, Seema
Use best foreign technologies while keeping access control locally to avoid dependence on a single provider. Trust must be engineered and verified; global partnerships are needed but must operate within sovereign boundaries.
Both recognize that practical sovereignty must address real‑world constraints—shared responsibility and the need for offline capability—to reach underserved populations [130-133][180-181][165-168][170-173].
Speakers: Speaker 1, Nasubo
Co‑accountability among market, government, and society is needed to implement sovereignty in practice. Offline access is crucial where connectivity is low; solutions must function without constant internet.
Unexpected Consensus
Offline capability as a core requirement for sovereign AI solutions
Speakers: Speaker 1, Nasubo
Offline verification was a key lesson from Aadhaar implementation. Offline access is crucial where connectivity is low; solutions must function without constant internet.
While most speakers focused on control, partnership or language, only Speaker 1 and Nasubo explicitly linked sovereignty to the need for offline functionality, revealing an unexpected convergence on this technical requirement [180-181][165-168][170-173].
POLICY CONTEXT (KNOWLEDGE BASE)
The Open Forum on Local AI Policy Pathways highlighted offline solutions for communities lacking reliable internet [S43]; smaller, purpose-built models are noted for easier governance and potential offline deployment [S47].
Overall Assessment

The panel shows strong consensus that digital sovereignty is about rule‑making and control, not full self‑sufficiency; it must be realized through collaborative governance, locally hosted compute, culturally relevant AI, and trusted partnerships with transparent supply chains. All speakers align on these pillars, indicating a shared vision for policy and implementation.

High consensus across all speakers, suggesting that future initiatives should prioritize domestic compute infrastructure, co‑accountability frameworks, local data/model development, and engineered trust mechanisms to achieve practical sovereignty.

Differences
Different Viewpoints
Degree of ownership and physical control required for sovereign compute
Speakers: Sunil, Seema
Compute infrastructure must reside within national borders to ensure control and support native‑language services. Governance should not aim to nationalise every component; strategic ownership and assurance are sufficient.
Sunil argues that the compute stack – where data is processed, stored and models are trained – has to be physically located inside the country to guarantee sovereign control and to enable voice-based AI in native languages [25-27]. Seema counters that sovereignty does not require owning every hardware or software element, but rather achieving strategic control and assurance through policies and guard-rails, without full nationalisation [94-98].
POLICY CONTEXT (KNOWLEDGE BASE)
The Proximal Cloud model defines sovereignty as physical proximity and control of compute resources to the nation or region [S50]; European tech-sovereignty discussions stress operational sovereignty and the right to choose partnership structures, implying varying ownership levels [S44]; NVIDIA’s provision of local inferencing control planes further illustrates technical ownership options [S45].
Feasibility of achieving sovereign compute in low‑resource regions
Speakers: Nasubo, Sunil
Limited local compute is a disadvantage; sovereignty must rely on trusted partnerships while gradually acquiring compute capacity. Sovereign compute can be built domestically, as demonstrated by large Indian data‑centre deployments.
Nasubo points out that Africa’s share of global compute is about 1 % and that the continent is disadvantaged, so sovereignty must be pursued through partnerships and incremental local acquisition rather than full self-sufficiency [57-58][165-168]. Sunil presents the Indian experience where compute, storage and model training are established locally, enabling sovereign AI services [38-42][135-142].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s AI access plan aims to expand compute capacity beyond major hubs, addressing low-resource constraints [S54]; offline, low-bandwidth solutions for underserved populations demonstrate practical feasibility [S43]; reference designs for AI factories are shared to enable local build-out in resource-constrained settings [S45].
Extent to which nationalisation versus partnership should be pursued for critical digital infrastructure
Speakers: Seema, Sunil
The goal is not to nationalise everything but to ensure strategic control and assurance through policies and guard‑rails. Sovereign AI can be achieved by using the best foreign technologies inside a domestically controlled, ring‑fenced environment.
Seema stresses that the aim is not to own every component but to secure strategic ownership and assurance, avoiding full nationalisation [94-98]. Sunil, while acknowledging the use of foreign technologies, emphasizes that they must be deployed inside a ring-fenced, locally controlled data centre to maintain sovereignty, suggesting a more ownership-centric approach to infrastructure [142-152].
POLICY CONTEXT (KNOWLEDGE BASE)
The India summit’s emphasis on strategic collaboration over full nationalisation [S40] and the Host Country Open Stage’s call for interoperability alongside control [S41] provide a policy backdrop; European perspectives stress partnership choice rather than forced dependence [S44]; global cooperation arguments reinforce the partnership route [S57].
Unexpected Differences
Necessity of substantial local compute for AI model development versus reliance on data and use‑case expertise
Speakers: Nasubo, Sunil
Sovereignty can be pursued with minimal local compute if rich data and use‑cases are available. Sovereign AI requires domestic compute infrastructure to process data and train models.
Nasubo suggests that even with only 1 % of global compute, Africa can advance AI sovereignty by leveraging its data and use-cases, implying that large local compute is not a prerequisite [57-58][165-168]. Sunil, however, asserts that core compute infrastructure must be located within the country to ensure control and to build models for national needs, indicating that substantial domestic compute is essential [25-27][38-42]. This contrast between data-centric versus compute-centric pathways was not anticipated given the overall consensus on partnership.
POLICY CONTEXT (KNOWLEDGE BASE)
AI 2026 forecasts suggest purpose-built, smaller models can achieve performance without massive local compute, relying on domain data and expertise [S47]; policy-harmonisation literature notes that adapting policies to local data needs can offset compute gaps [S48]; calls for indigenous capability building acknowledge both compute and data dimensions [S59]; NVIDIA’s local inferencing tech reduces the need for large-scale compute clusters [S45].
Overall Assessment

The panel largely agrees that digital sovereignty hinges on rule‑making, trusted partnerships and locally relevant AI. The main points of contention revolve around how much physical ownership and domestic compute are required versus how much strategic control and external collaboration suffice, and whether low‑compute regions can achieve sovereignty primarily through data and partnerships. These disagreements highlight the need for nuanced policy frameworks that balance domestic infrastructure investment with open, secure global partnerships.

Moderate – while there is broad consensus on the principles of sovereignty, the speakers diverge on implementation specifics (ownership vs strategic control, extent of local compute, and the role of nationalisation). The implications are that policy makers must craft flexible strategies that accommodate differing national capacities and avoid a one‑size‑fits‑all approach.

Partial Agreements
All speakers concur that sovereignty cannot be achieved in isolation; they agree on the need for partnerships, trust and shared responsibility, but differ on the balance between domestic control and external reliance. Speaker 1 frames sovereignty as rule‑making and co‑accountability [130-133][214-218]; Sunil stresses using foreign tech inside a domestic ring‑fence [144-152]; Nasubo highlights partnerships to obtain compute while building local capacity [168-176]; Seema calls for engineered trust and global partnerships with guard‑rails [109-112][118-120].
Speakers: Speaker 1, Sunil, Nasubo, Seema
Sovereignty requires co‑accountability among market, government and society. Trusted partnerships and global collaborations are essential while maintaining sovereign control.
Both agree that AI solutions must be tailored to local linguistic and contextual realities. Sunil emphasizes native‑language voice AI for India’s multilingual population [28-31]. Nasubo illustrates the need for region‑specific health data, such as breast‑tissue differences in African women, to build accurate models [58-60]. They differ on the primary driver (language diversity vs health data) but share the goal of locally relevant AI.
Speakers: Sunil, Nasubo
Local language and context‑specific AI models are essential for impact. Design must reflect local data and contexts (e.g., health use cases) to be effective.
Takeaways
Key takeaways
Sovereignty is about who sets the rules and controls the digital destiny, not about total self‑sufficiency; it requires a balance between control and collaboration.
Compute and data storage should reside within national borders to ensure strategic control and to support native‑language, real‑time AI services.
Foreign technologies (e.g., NVIDIA, Azure, AWS) can be used, but must be deployed inside a locally controlled, ring‑fenced environment to avoid dependence on a single external provider.
AI models must be designed around local data, contexts, and use‑cases (e.g., language diversity, specific medical data) and should include offline capability where connectivity is limited.
Trust is a core pillar: it must be engineered, verified, and supported by transparent, traceable supply‑chains and hardware components.
Governance requires clear, stable policies, sovereign guardrails, and a co‑accountability model among government, industry, and society.
Treat digital infrastructure as a national asset, applying the same security, oversight, and continuous verification standards as other critical assets.
Resolutions and action items
Governments should develop and publish sovereign guardrails and policies that evolve alongside AI infrastructure.
Industry should adopt a public‑private partnership approach, building sovereign compute capacity while leveraging best‑of‑breed foreign technologies within locally controlled data centres.
Implement continuous security and regulatory verification processes rather than one‑off checks.
Encourage the creation of regional compute hubs (e.g., in Africa) that can provide both online and offline AI services for local innovators.
Promote open‑source or open‑sourced components (as demonstrated by NVIDIA) to bring critical software under local control.
Unresolved issues
Specific mechanisms for verifying and certifying trust in global supply‑chain components remain undefined.
How low‑resource countries will acquire sufficient compute capacity without excessive reliance on external providers was not concretely addressed.
Details of policy frameworks, ownership thresholds, and enforcement mechanisms were discussed conceptually but not finalized.
Funding models and long‑term financial commitments required for building sovereign AI infrastructure were not resolved.
Suggested compromises
Use foreign hardware and software within domestically owned, ring‑fenced data centres to retain control while benefiting from advanced technology.
Adopt a hybrid model of sovereignty: retain strategic control over compute and data, but collaborate with global partners for capacity and expertise.
Treat digital assets as national assets with guardrails, without fully nationalising every component, allowing private sector innovation to thrive.
Thought Provoking Comments
Sovereignty does not mean we become isolated and do everything ourselves; it’s about controlling what we need and collaborating where appropriate. Compute infrastructure must be within the country to process and store data, but we can still use foreign technology within our own ring‑fenced environment.
Redefines sovereignty from a protectionist stance to a nuanced balance of control and collaboration, introducing the concrete criterion of ‘sovereign compute’ as essential for national digital destiny.
Set the foundational framework for the discussion, prompting other panelists to address both infrastructure and design layers. It shifted the conversation from abstract notions of sovereignty to concrete technical requirements.
Speaker: Sunil
Even though Africa has only about 1 % of global compute capacity, we have abundant data and specific use‑cases. By building tools that reflect our lived context—like breast‑cancer models for African women—we can define the rules ourselves despite limited compute.
Challenges the assumption that lack of compute precludes sovereignty, emphasizing data, local expertise, and contextual relevance as the true drivers of autonomous AI development.
Introduced a new perspective that sovereignty can be achieved through localized data and problem‑focused models, prompting the group to consider design and application layers over sheer hardware ownership.
Speaker: Nasubo
Sovereignty is not about owning every component of the supply chain; it’s about strategic control, visibility, and trust. We need guardrails and policies that treat digital assets like precious national assets, ensuring assurance without full nationalization.
Clarifies sovereignty as strategic control and trust rather than outright ownership, adding nuance about governance, visibility, and the role of public‑private partnerships.
Shifted the dialogue toward governance mechanisms and trust frameworks, leading participants to discuss policy evolution, continuous verification, and the balance between industry innovation and state oversight.
Speaker: Seema
We migrated the national AI language platform Bhashini from a hyperscale cloud to our own data centre, and even open‑sourced a critical NVIDIA component so it runs entirely within our control. We use the best foreign tech, but inside a ring‑fenced environment we control.
Provides a concrete, real‑world example of achieving sovereign AI through strategic partnerships, open‑source adaptation, and infrastructure control, illustrating the earlier abstract concepts.
Validated the earlier theoretical points with a practical case study, reinforcing the feasibility of the ‘use best tech, keep control’ model and prompting agreement from other speakers on partnership over dependence.
Speaker: Sunil
In Africa, connectivity is only about 50 %, so we must design compute solutions that work offline. We’re building platforms that allow innovators to run AI locally without constant internet, ensuring inclusion of offline users.
Introduces the critical operational challenge of limited connectivity, expanding the sovereignty discussion to include offline capability and accessibility for underserved populations.
Redirected the conversation toward implementation challenges in low‑connectivity environments, leading to references to offline verification in Aadhaar and highlighting the need for adaptable infrastructure.
Speaker: Nasubo
Policies must evolve alongside infrastructure; we need continuous verification rather than point‑in‑time checks, and the government must provide stable, long‑term guardrails while industry focuses on innovation and scale.
Highlights the dynamic nature of governance for sovereign AI, stressing the importance of adaptive policy, ongoing security validation, and the symbiotic role of government and industry.
Cemented the earlier governance discussion, prompting the moderator’s summary about co‑accountability and reinforcing the call for a partnership model between state and market.
Speaker: Seema
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a vague slogan of ‘data sovereignty’ to a concrete, multi‑layered framework. Sunil’s opening definition anchored the debate in the necessity of sovereign compute, while Nasubo’s emphasis on local data and use‑cases reframed sovereignty as a function of contextual relevance rather than raw hardware. Seema added depth by distinguishing strategic control and trust from outright ownership, introducing governance mechanisms. Sunil’s real‑world migration example demonstrated how these principles can be operationalized, and Nasubo’s focus on offline capability highlighted practical challenges in low‑connectivity regions. Finally, Seema’s call for evolving policies and continuous verification tied the technical and design considerations back to sustainable governance. Collectively, these comments redirected the conversation toward actionable strategies: balancing partnership with control, leveraging global technology within national boundaries, and ensuring inclusive, trustworthy AI deployment.

Follow-up Questions
Is it realistic for a nation to be both sovereign in its AI infrastructure and remain connected to global ecosystems?
Understanding the balance between maintaining control over critical AI resources while leveraging global collaboration is essential for policy and technical design.
Speaker: Speaker 1 (to Sunil)
How can countries with limited compute capacity, such as many African nations, design AI solutions that function offline or with intermittent connectivity?
Addressing low‑connectivity environments is crucial to ensure AI benefits reach populations without reliable internet access.
Speaker: Speaker 1 (to Nasubo)
What policies and guardrails need to evolve in tandem with AI infrastructure to safeguard sovereignty?
Dynamic regulatory frameworks are required to keep pace with rapid AI development and protect national digital assets.
Speaker: Seema
How should ownership, visibility, and trust be defined and enforced across the supply chain of critical digital infrastructure?
Clear definitions of ownership and transparent supply‑chain visibility are needed to prevent external geopolitical leverage and ensure security.
Speaker: Seema
What mechanisms can enable continuous verification of security and regulatory compliance rather than point‑in‑time checks?
Ongoing monitoring is vital to maintain the integrity of sovereign AI systems over time.
Speaker: Seema
What is the optimal size and architecture of AI models (e.g., 20‑100 billion parameters) to satisfy the majority (≈95 %) of national use cases without resorting to trillion‑parameter frontier models?
Identifying cost‑effective model scales can guide investment decisions and reduce dependence on massive external models.
Speaker: Sunil
How can critical software components (e.g., NVIDIA’s NVCF) be open‑sourced or otherwise brought under local control to eliminate foreign lock‑in?
Localizing essential software ensures operational sovereignty and reduces reliance on external vendors.
Speaker: Sunil
How can nations build trusted global partnerships while preserving sovereign control over AI infrastructure?
Balancing collaboration with external tech providers against the need for autonomous control is a key strategic challenge.
Speaker: Seema
What approaches are needed to develop AI systems that accurately handle native languages, regional slang, and multilingual contexts at population scale?
Effective local‑language AI is essential for inclusive adoption in linguistically diverse societies.
Speaker: Sunil
How can third‑party technologies be integrated into a ring‑fenced environment while guaranteeing they remain trustworthy and cannot be accessed or controlled externally?
Ensuring that imported technologies operate securely within sovereign boundaries is critical for data protection.
Speaker: Sunil
What should a national digital infrastructure (analogous to power grids or telecom networks) look like for AI, and what guardrails are needed to protect it?
Conceptualizing AI infrastructure as a core national asset helps shape investment, governance, and security strategies.
Speaker: Seema
How can AI development processes ensure inclusion of offline or marginalized populations, preventing exclusion in model design and deployment?
Inclusive design guarantees that AI solutions serve all citizens, not just those with reliable connectivity.
Speaker: Nasubo
What public‑private partnership models can provide long‑term policy stability and financial commitment to encourage private industry to build sovereign AI capacity?
Stable, incentivizing frameworks are needed to attract private investment in large‑scale AI infrastructure.
Speaker: Seema
How can operational efficiency be achieved while maintaining the desired degree of sovereignty over AI systems?
Finding the right balance between efficient operation and sovereign control is essential for sustainable AI deployment.
Speaker: Seema
How can AI governance rules be co‑created with society (samaj) and the market (bazaar) to ensure broad accountability and relevance?
Inclusive rule‑making promotes legitimacy, aligns AI development with public interest, and strengthens sovereign oversight.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Internet Inclusive AI Unlocking Innovation for All


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel, moderated by Rahul Matthan, opened by stressing that AI should not be confined to a handful of companies in a single postal code and that democratizing the technology requires new infrastructure [15-21]. Matthew Prince explained that today’s AI is expensive because it relies on massive numbers of NVIDIA GPUs originally designed for gaming and cryptocurrency, which consume huge power and are costly [22-30]. He added that only a tiny global talent pool knows how to build and run large models, further driving high salaries and limiting broader participation [31-33]. Prince argued that rising enrollment in computer-science and AI courses, expanding chip production, and competition among startups will drive down hardware costs, making AI models more of a commodity [44-53][54-57]. He predicted that within five years a frontier-level specialized model could be built for under $10 million, a dramatic drop from today’s multi-billion-dollar investments [60-62].


Rajan Anandan countered that India does not need to chase AGI; instead it is deploying billion-parameter models optimized for Indic languages, such as Sarvam, which already outperform global voice-AI at a fraction of the cost [75-83][84-92]. He emphasized building a sovereign AI stack, citing recent investments in Indian GPU and memory startups and alliances with firms like Paxilica to reduce dependence on foreign chip suppliers [108-119][120-128]. When asked about open-weight models, Rahul noted security concerns, while Prince defended openness, saying AI-doom narratives may be a strategy for incumbent firms to capture regulation and that regulation should target behavior rather than model access [165-172][176-204]. Rajan agreed that open source is essential but argued that the economics of training large models make fully open releases difficult, and that lowering inference costs is the key to affordable AI for India’s billion users [217-227][240-248].


Prince also warned that AI could amplify cyber-attacks, but highlighted that his company uses machine learning to detect threats faster than humans and expects overall online security to improve over the next decade [264-282]. He further cautioned that dominant search engines’ control over web indexing gives them a data advantage, urging regulators to ensure equal data access and anticipating a new internet business model that rewards content creators rather than traffic [345-368][395-404]. The discussion concluded that achieving AI democratization will depend on cheaper hardware, open-source ecosystems, sovereign data and chip strategies, and nuanced regulation that balances innovation with security [58-62][217-227][176-204][345-368].


Keypoints


Why AI is currently hard and expensive, and how those barriers might fall.


Matthew explains that AI’s cost is driven by a reliance on a single chip supplier (NVIDIA) and massive power needs, plus a tiny pool of specialized talent [22-31]. He notes that enrollment in CS and AI courses is soaring, which should broaden expertise [44-46]. He also predicts that chip competition and economies of scale will drive down per-unit costs, making frontier-level models affordable (≈ $10 M) within five years [58-62].


India’s distinct AI roadmap: low-cost, language-focused models and a sovereign tech stack.


Rajan stresses that India does not need trillion-parameter AGI; instead it needs “highly performant, extremely low-cost” models of 1-200 B parameters for 1.4 billion users [75-81]. He cites home-grown models (Sarvam) that outperform global voice-AI at a fraction of the cost [82-84] and outlines rapid growth in Indian chip design and GPU startups, as well as the push for a sovereign hardware and compute stack [108-119]. He argues that India will win on the application layer, leveraging massive smartphone penetration and local-language demand [121-132].


The open-source / open-weights dilemma.


Rahul raises the tension between the need for open models to enable rapid innovation and the security risks of releasing highly capable weights [165-174]. Matthew argues that restricting access is a business strategy masquerading as safety concerns and that a more open ecosystem will ultimately prevail [176-204]. Rajan adds that the economics of trillion-dollar model training make pure openness infeasible, yet open models remain “absolutely critical” for the ecosystem [217-252].


Regulation, safety, and the narrative of AI risk.


Matthew suggests that “AI doomers” may be motivated by a desire to capture regulatory advantage, warning that over-regulation could stifle competition [180-197]. Rahul points out the parallel with nuclear-style regulation proposals and the broader societal fear of AI-driven cyber threats [208-212]. Both agree that legal frameworks of the kind applied to humans (e.g., a criminal code governing behavior) may be more appropriate than engineering-centric rules [199-202].


Data access, web crawling, and the future internet business model.


Matthew highlights the asymmetry in web indexing (Google sees far more pages than competitors) and warns that AI could upend the traditional traffic-based monetisation model of the internet [345-403]. He calls for new compensation mechanisms for content creators and suggests that the next five years will see a “new business model” that rewards knowledge creation rather than mere traffic [395-404].


Overall purpose / goal


The panel’s aim was to explore how to democratise artificial intelligence, making the technology, infrastructure, models, and data accessible beyond the current concentration in a few “postal-code” companies, while addressing the technical, economic, regulatory, and societal challenges that this transition entails, with a particular focus on India’s role and opportunities.


Overall tone


The discussion begins with a technical, analytical tone, outlining AI’s cost drivers. It then shifts to a national-strategic, optimistic tone as Rajan outlines India’s emerging capabilities. Mid-conversation the tone becomes critical and cautionary, debating open-source risks and regulatory capture. Toward the end it moves to a forward-looking, hopeful tone, envisioning new internet business models and collaborative solutions. Throughout, the speakers remain collegial but the emphasis oscillates between optimism about rapid progress and concern over concentration of power and safety.


Speakers

Announcer


– Role/Title: Event announcer / moderator


– Areas of expertise: (not specified)


– Sources: [S3][S4][S5]


Rahul Matthan


– Role/Title: Moderator; Partner at TriLegal (board member, Bangalore office), leads technology, media & telecom practice


– Areas of expertise: Legal insight, policy, technology, media, telecom, high-value TMT transactions


– Sources: [S9][S10][S11]


Matthew Prince


– Role/Title: Co-founder and CEO of Cloudflare


– Areas of expertise: Internet infrastructure, cloud security, AI, web performance, networking


– Sources: [S6][S7][S8]


Rajan Anandan


– Role/Title: Managing Director of Peak XV Partners (formerly led Sequoia Capital India and Southeast Asia)


– Areas of expertise: Technology investment, AI, semiconductor ecosystem, startup ecosystem, digital sovereignty, venture capital


– Sources: (information derived from transcript)


Audience


– Role/Title: Audience members (questioners)


– Areas of expertise: Varied; examples include


– Yuv – individual from Senegal [S12]


– Professor Charu – public administration scholar [S13]


– Dr. Nazar – (role not clearly specified) [S14]


– Sources: [S12][S13][S14]


Additional speakers:


(None identified beyond those listed above)


Full session report
Comprehensive analysis and detailed insights

Moderator Rahul Matthan opened the session by recalling Matthew Prince’s closing remark from his keynote, that the transformative power of artificial intelligence should not be confined to “a handful of companies in the same postal code”, and asked the panel to discuss what infrastructure would be needed to democratise AI given today’s technical and economic barriers [15-21][14].


Matthew Prince began by explaining why AI is currently hard and expensive. He noted that modern AI workloads require massive numbers of GPUs, a market dominated by NVIDIA, whose chips were originally designed for gaming consoles and later repurposed for Bitcoin mining rather than for AI, making them power-hungry and costly [23-30]. Prince also highlighted the scarcity of specialised talent – only a tiny global pool of engineers can design, train and operate large models, driving up salaries and limiting broader participation [31-33].


He then pointed to several forces that could erode these barriers. Enrolment in computer-science and AI-theory programmes has surged worldwide, expanding the talent pipeline [44-46]. The silicon market, after successive shortages, is now experiencing a “glut”, and a growing number of startups, incumbents and hyperscalers are entering GPU production, which should drive down per-unit compute costs [50-54]. As models become more of a commodity, Prince argued that the cost of building frontier-level specialised models could fall to “$10 million or less” within five years [58-62].


Rajan Anandan shifted the focus to India’s distinct AI strategy. He stressed that India does not need to chase artificial general intelligence; instead the priority is to develop “highly performant, extremely low-cost models” of one billion to 200 billion parameters that can serve 1.4 billion people [75-81]. He cited home-grown Sarvam, which already delivers state-of-the-art speech-to-text and text-to-speech in Indic languages at a fraction of the cost of global competitors [82-84].


To sustain this trajectory, Rajan outlined the need for a sovereign AI stack. India accounts for roughly 20% of the world’s semiconductor designers [108-109] and now hosts 35-40 semiconductor startups ranging from low-power 20 nm system-on-chips to GPU designers such as Agrani and memory firms like C2I, both of which have received fresh investment [110-113]. He argued that a “sovereign AI stack” covering chips, compute and data is essential because “our friends are no longer friends, or sometimes they are, sometimes they aren’t” [113-116]; strategic alliances such as the recent partnership with Paxilica are part of this approach [117-119]. He also noted that the Indian conglomerates Adani and Reliance announced a combined $100 billion commitment to AI infrastructure this week [120-124].


Rajan further emphasized voice-AI economics, noting that current Indian voice-AI costs about 3 rupees per minute, while Sarvam can already deliver sub-rupee rates; to achieve mass adoption the cost must fall to roughly 5-10 paisa per minute [239-245].
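The cost gap described here can be made concrete with back-of-the-envelope arithmetic. The sketch below uses only the figures quoted in the session; the variable names are illustrative, not real pricing data:

```python
# Back-of-the-envelope check of the voice-AI economics quoted in the session.
RUPEE_TO_PAISA = 100

current_paisa_per_min = 3 * RUPEE_TO_PAISA   # ~3 rupees/minute quoted for today
target_low_paisa, target_high_paisa = 5, 10  # 5-10 paisa/minute mass-adoption target

# Cost-reduction factor needed to land inside the target band
min_reduction = current_paisa_per_min / target_high_paisa
max_reduction = current_paisa_per_min / target_low_paisa

print(f"Required reduction: {min_reduction:.0f}x to {max_reduction:.0f}x")
# Prints: Required reduction: 30x to 60x
```

In other words, mass-adoption pricing implies roughly a 30-60x cost reduction from today’s quoted rate, and still a further drop from Sarvam’s sub-rupee pricing.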


He highlighted a startup, Cloud Physician, which has amassed proprietary ICU data from tier-2/3 towns and used it to build a dozen specialised healthcare models now being commercialised in the U.S. [250-255]. Rajan also pointed out that India has collected less than 1 % of the data needed for AGI, underscoring the opportunity for data-collection startups and the importance of smart data regulation [260-267].


The panel then debated the open-source/open-weights dilemma. Rahul warned that releasing highly capable models as open weights could enable “malicious fine-tuning” and other security threats [165-174]. Prince responded that attempts to restrict access are often a “business strategy” masquerading as safety concerns, noting that incumbents may deliberately amplify AI-doom narratives to capture regulation and preserve market dominance [176-197], and he maintained that a more open ecosystem will ultimately prevail [198-204]. Rajan agreed that openness is “absolutely critical” for the ecosystem [217-221] but cautioned that the economics of training trillion-dollar models make fully open releases untenable, suggesting new economic pathways are needed to reconcile openness with investment recovery [217-232].


Regulation and safety were further explored. Prince suggested that regulators should focus on the behaviour of systems, applying criminal-code principles to AI rather than trying to control non-deterministic outputs [199-202]. He also warned that the “AI-doom” narrative may be a tool for regulatory capture, urging caution against over-regulation that could stifle competition [180-197]. Rahul drew the parallel with nuclear regulation, recalling a call at the summit for an “IAEA for AI” [208-212]. The discussion also covered AI’s dual security role: AI can amplify phishing, social-engineering and sophisticated breaches (e.g., the SalesLoft incident) [266-276], yet Cloudflare’s machine-learning-driven threat detection shows how AI can make the internet more secure, with Prince predicting that “in ten years we are more secure online than we are today” [277-286].


Data-access inequality was identified as another source of asymmetry. Prince warned that Google indexes far more of the web than competitors – roughly six pages to Microsoft’s one, three-and-a-half pages to OpenAI’s one, and ten pages to Anthropic’s one – giving it a decisive training advantage [345-353]. He argued that either regulators must force Google to share its index on equal terms [358-366] or the industry must devise mechanisms such as “pay-to-crawl” to level the playing field. He also cited DeepSeek’s breakthrough pruning algorithm, which efficiently discards large portions of the computation tree, allowing models to run on far fewer chips and illustrating how constraints can drive efficiency [380-387].
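The pruning intuition credited to DeepSeek above can be illustrated with a toy sketch. To be clear, this is not DeepSeek’s actual algorithm; the scoring rule and threshold below are invented for illustration. It only shows why discarding low-scoring branches shrinks the work dramatically:

```python
# Toy illustration of score-based tree pruning (NOT DeepSeek's real method):
# expanding a search tree while skipping branches whose estimated score falls
# below a threshold visits far fewer nodes than exhaustive expansion.

def count_visited(depth, branching, score, threshold, visited):
    """Expand a tree recursively, pruning children that score below threshold."""
    visited[0] += 1
    if depth == 0:
        return
    for child in range(branching):
        child_score = score * (1.0 - child / branching)  # decaying child scores
        if child_score >= threshold:                     # prune weak branches
            count_visited(depth - 1, branching, child_score, threshold, visited)

full, pruned = [0], [0]
count_visited(4, 4, 1.0, 0.0, full)    # threshold 0: nothing pruned -> 341 nodes
count_visited(4, 4, 1.0, 0.5, pruned)  # threshold 0.5: most branches cut -> 35 nodes
print(full[0], pruned[0])              # prints: 341 35
```

The roughly tenfold drop in visited nodes mirrors the panel’s point: hardware constraints push teams toward algorithms that do far less work per query rather than buying more chips.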


When the audience posed questions, several themes resurfaced. On trustworthiness, Prince replied that AI is already “more trustworthy than most humans”, citing self-driving cars that are statistically safer than 99.99 % of human drivers, and suggested that trust should be measured against human performance rather than an idealised perfection [450-466][467-470]. On creator compensation, he explained that scarcity (e.g., blocking AI crawlers) forces publishers to negotiate higher licensing fees, as demonstrated by Reddit’s 7× higher payout compared with the New York Times [474-477]. Rajan highlighted the rapid growth of consumer-AI startups in India, noting that “India today has more consumer AI startups than the US” and that recent seed investments are targeting education, healthcare and entertainment for the country’s 900 million internet users [479-486].


In sum, the panel converged on three pillars for AI democratisation: (1) falling hardware costs and model commoditisation – exemplified by Prince’s $10 million frontier-model prediction and India’s emerging chip ecosystem; (2) open-source/open-weights as essential for a healthy ecosystem, tempered by the economics of trillion-dollar training runs; and (3) proportionate regulation that addresses data monopolies, security risks and the shift away from traffic-based monetisation. Unresolved issues include how to fund fully open models while preventing malicious fine-tuning, designing robust creator-attribution and compensation frameworks, finalising India’s sovereign AI-stack roadmap, and defining the next internet business model that rewards knowledge creation rather than mere traffic.


Session transcript
Complete transcript of the session
Announcer

Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than Matthew and Rajan. Matthew Prince is the co-founder and CEO of Cloudflare, a World Economic Forum Technology Pioneer, and a Council on Foreign Relations member. He has degrees from Harvard, Chicago, and Trinity College, and co-created Project Honeypot, the largest community tracking online fraud and abuse. Matthew’s founding mission for Cloudflare was to help build a better Internet, a goal that has become increasingly critical in the age of artificial intelligence. Rajan Anandan is one of India’s most influential technology leaders and investors, currently serving as Managing Director of Peak XV Partners, formerly Sequoia. He previously led Sequoia Capital India and Southeast Asia, where he focused on backing founders building transformative, technology-led businesses.

With decades of experience across entrepreneurship, investing, and global technology leadership, Rajan has played a pivotal role in shaping India’s startup and digital ecosystem. Orchestrating this conversation is Rahul Matthan, who brings the perfect blend of legal insight, policy depth, and the ability to ask the questions everyone else is thinking. Rahul is a board member and partner in TriLegal’s Bangalore office and heads their technology, media, and telecom practice. He has extensive experience advising on high-value TMT transactions in the country. He has worked with companies across sectors, from telecom majors to Internet and data service providers, offering advice on regulatory matters and operational issues. So please join me in welcoming three awesome leaders on stage, and with that, the stage is yours.

Thanks, Rahul.

Rahul Matthan

And since I haven’t worked with you, I’m going to square that circle. Matthew, I just heard your keynote up in the big 3,000-seater hall, and you ended with a very powerful statement, which is that this wonderful AI technology should not be built by a handful of companies in the same postal code. And that, in many ways, seems to be the driving motivation for having this discussion. But it’s easier said than done in that AI is a very big and complicated stack. And a lot of that stack actually involves complex hardware. And it’s hard, really, to move that hardware around the Internet. So if we are to democratize AI and if we are to come up with the infrastructure construct that would democratize AI, what would that look like?

And what is your idea, your vision for how this would be, if not now, but sometime soon?

Matthew Prince

Yeah, so let’s talk first about why AI is hard and expensive today. So the first thing is AI requires lots and lots and lots of chips, largely produced today by one manufacturer, NVIDIA, that use a ton of power and are very, very expensive. They were never built to do this. If we’re totally honest, the NVIDIA chips were built to power gaming consoles, right? And then for a while to mine Bitcoin and then magically to create a superintelligence. But if you had started with, let’s create the superintelligence, you would have designed those chips somewhat differently today.

That’s challenge one that keeps AI very, very hard. Challenge two is that it requires a real specialized set of knowledge. There’s a very small set of people in the world that know how to build these models and how to run these systems. And so you have to ask, why is that not something where everyone knows it? If you had known that you could specialize in this in school and literally make $100 million a year, we would all have studied AI, right? And yet if you go back just five years ago, the people who were studying AI were kind of the weirdos. Why was that the case? Well, because AI was one of these fields that kind of had promise in the 70s and had promise in the 80s and had promise in the 90s.

And then everyone was kind of like, you know what? We’re tired of this. And so the AI professor was kind of shunted off to the side. And so if those are the things that today make AI extremely expensive, the question is, are those things permanent states or are they going to change? Well, we can measure one of them already. Already, if you look at enrollment in computer science programs across the world, it is up dramatically, even though supposedly there’s no future for computer scientists, in just the last two years. And then secondly, the enrollment in specifically AI theory courses is off the charts. Every university that used to sort of shutter their course is now standing it up and building it like crazy.

And so I think that over time, we’re gonna have more and more people who are able to do this. And so having to pay enormous salaries for those people, that’s probably not going to be the future. On the chip side, you know, if you have literally a company going from being an obscure gaming company to the most valuable company in the world, obviously, a whole bunch of people are going to chase after that. And if you look at the history of silicon, anytime there has been a silicon shortage, it turns into a silicon glut over time. And with GPUs, it’s kind of had hit after hit after hit. I think that what we’re seeing, at least, is that both from startups, as well as incumbent players, as well as from the hyperscalers and other players, they’re getting involved.

There are so many people who are making this silicon that, no matter what, the price per unit of work done is going to come down. The other thing that I think is encouraging is that if we look at the actual AI models themselves, it doesn’t appear that one company is running away with it. It’s sort of like Google gets a lead, and then Anthropic passes them, and then OpenAI passes them, and then someone else passes them, and then Google does it again, and it keeps leapfrogging itself. That, to me, suggests that the actual model making is more likely in a steady state in the future to be something like a commodity.

And if that’s the case, if the cost of creating the models is going to come down, if the models themselves are more commodities, I think that we can’t assume that the literally hundreds of billions, if not trillions of dollars that are going into building the leading AI companies today will pay off; that might come crashing down. And my prediction would be that you’ll be able to build models, models that will be on the frontier. They’ll be more specialized, but they’ll be on the frontier, for tens of millions of dollars in the not-so-distant future. I’ll put a date out there. In five years, you’ll be able to build a frontier-like model within a specialty for $10 million or less.

Rahul Matthan

Rajan, about a year ago, one of these companies from one of these postal codes was here, and you asked a question, you know, what will it take for India to compete? And you were told it’s hopeless, don’t compete. And yet at the summit, we’ve come out with a model that’s, by all accounts (I haven’t yet played with it), competitive. So what Matthew is saying seems to be working out, but he’s putting a five-year timeline. I would argue that perhaps we could be more aggressive with that timeline. So what is your view on this as someone who is actually, you know, in India, working with some of these really smart people who are working under constraints?

but are yet putting out some fairly impressive models, interesting use cases. What’s life like at the other end of the absolute front? I mean, what people call the absolute frontier of these models. What is life at the other end of it where there are many different applications, many different use cases, and many different types of models? Yeah.

Rajan Anandan

Firstly, hi, everyone. Great to have all of you here. So I think the first thing is, look, Matthew, I don’t know whether you’ve been following a company called Sarvam. I think, firstly, it’s important that India is not trying to get to AGI. With 1.4 billion humans and a million Indians turning 18 every month, AGI is not the thing that we need. Our focus really is to uplift 1.4 billion Indians. And I think our ecosystem, our innovators, our government, our investors, our technologists, our engineers are all of that view. But to really do that, you don’t need trillion or five trillion parameter models. What you need are highly performant, extremely low cost models that are a billion parameters to maybe 100 or 200 billion parameters.

And actually, for the amount that you mentioned, we have launched 30 and 100 billion parameter models that are SOTA in Indic languages. In fact, I don’t know whether you know this, Matthew, but if you look at voice AI in Indic languages, Sarvam today is both SOTA in speech-to-text and text-to-speech and is a fraction of the cost of the global models, including the global leader in voice AI. So I think what you’re going to see is the government’s actually – and by the way, the reason Sarvam is able to do that is because of tremendous support from the government, but it’s not just Sarvam. There are 12 large and small language models.

By the way, just a clarification: when Sarvam launched, I think somebody said India is really good at small language models. The last time I checked ChatGPT and Gemini, anything above 30 billion is actually a large language model. So we are actually in the large language model race. Just a clarification there. So basically, we have 12 companies, actually 11 companies and BharatGen, which is part of IIT Bombay, that are building these models. I think this number goes to 15, 20 very, very quickly. And I would say actually well within this year, Matthew, in many, many things that India needs, right? We need to uplift 100 million farmers, and for that, we need to build basically models that work on feature phones, right, in local languages.

That’s done, right? That was actually launched on Wednesday and so on. So I think that’s the first thing. And now when you ask this other question of, look, true frontier, which is, you know, the frontier today, let’s call it a few trillion, maybe it’ll go to 5 trillion, 10 trillion. Part of it is also the definition of frontier and, more importantly, what’s the objective, right? If you define frontier that way, Indians are not going to be able to do it with this set of architectures. But what we are going to do is, Matthew, to the point that you made, which is, look, LLMs are the most inefficient compute machines ever.

I mean, this is the, you know, these are not efficient architectures, right? We believe that this is the beginning, not the end. We believe there’s going to be many more to come after transformers. And I think that is where the bets are going to be made. In fact, I mean, you know, Yann LeCun is here and he sort of said that, look, this is not going to lead us to AGI. We don’t really want to get to AGI, but even if you just look at where AI is going to go. So I’d say at the model layer this week, India entered the race, but we are going to play this race differently.

We’re not going to try to build trillion parameter models and we’ll do it at super, super low cost. Now, coming to the chip layer, that’s harder for India, but I don’t know whether you know this, Matthew, but 20% of the world’s semiconductor designers are in India. Four years ago, we had no semiconductor startups. Today, we have about 35 to 40. They span the spectrum from low-power, call it 20 nanometer chips, these are all SoCs, all the way up through – actually, two weeks ago, we announced an investment in a GPU company. It’s a very seasoned team, Intel, AMD, a company called Agrani. Monday, this week, we announced an investment in a company called C2I, which is going to make memory, focus on memory.

So we’ll see even at the chip layer, because what is very clear to us and I think to many in India is we have to have the sovereign stack. Our friends are no longer friends, or sometimes they’re friends, or sometimes they’re not. And as India, we just need to have the sovereign stack. And we can’t – we, of course, are going to have alliances. And today, I think a very important alliance was announced with Paxilica and so on. But we’ve got to actually have a sovereign stack. So whether it’s the chip layer or the compute layer, I think it’s great that both Adani and Reliance announced $100 billion investments into AI infra this week.

Where we have excelled is on the application layer. And what I can tell you is, at the application layer: I joined Google in 2011. At the time, India had 10 million connected smartphone users, no venture capital, and no unicorns. Today, we have a lot of venture capital. We have 900 million smartphone users. We don’t have enough capital, but we have enough venture capital, and we have 125 unicorns. At the application layer, I can confidently say, whether it’s consumer, whether it’s enterprise, Indian companies will win. Because the traditional formats of consumer consumption, call it search, or now Gemini, ChatGPT, et cetera, will, in my view, probably scale to 200 or 300 million. They’re not going to scale to a billion Indians.

To scale to a billion Indians, you’ve got to have image, you’ve got to have video, you’ve got to have highly local language, and it’s got to be ultra, ultra low cost. So I do think we have a shot. I think what you’ve just described, we shipped this week. I think by the end of the year, we’re going to ship 100x more because, Matthew, what’s happened in India is we’re trying to do things differently. Payments is an example, which I think the whole world knows about. And you’ll see that playing out in many other things. And I’ll end with this. Matthew, 2015, India had two space tech

Matthew Prince

And I would just say, don’t sell yourself short. India may not need AGI, but India may still build AGI. Right. And I think that the thing that actually might end up holding back the biggest AI companies is that they are so unconstrained by resources. If you look at what was the biggest innovation over the last two years that really drove AI forward, it wasn’t anything that Google or OpenAI or anyone else did. It was actually DeepSeek, and DeepSeek’s ability to say that within the constraints of the chips that they had access to, they had two incredible innovations: they would prune the tree more efficiently, and they’d be able to process that pruned tree much more quickly.

I wish DeepSeek had been an Indian company, not a Chinese company. It would have been a little bit more constrained, I think. But I actually think it comes from those places with constraints. And I would not be scared away, if you’re an Indian AI company, by hearing the hundreds of billions of dollars that the big U.S. AI companies are pouring in. That seems like an asset. That seems like an advantage that they have. But in some ways, it’s blinding them to what will be the real innovations that cause AI to become more efficient, that cause AI to become more scalable. And there is no way that the long-term solution to this is you have to turn up a mothballed nuclear power plant.

So we’re going to get more efficient, and I would bet that that efficiency comes from places just like this.

Rahul Matthan

So if I can push back to both of you, I think there’s – not that I disagree with any of this, but to say that – so one of the things that DeepSeek did was it came up with this reasoning model. I guess other people were working on it, but they did a really good job of doing reasoning really well and really powerfully. I mean the real – the DeepSeek innovation was being able to say that you’ve got to build this giant tree if you’re building an AI model. And they were able to say probabilistically there’s a whole bunch of branches on the tree that we can ignore. Like there’s a bunch of things in your life that have happened to you that your brain is just really good at forgetting about, whereas there are a few just salient moments that have formed who you are as a person.

What DeepSeek did is did a better job of pruning that. The big US AI models don’t have to do that because they can just say, well, let’s just buy another H200, right? And let’s just keep throwing more money at the problem. By having the constraints and the specialization, in this case the memory constraints, it forced DeepSeek to come up with a better pruning algorithm, which allowed them to then just be able to deliver AI at a much, much, much more efficient level. And I suspect Sarvam did something similar because, I mean, I spoke to Pratyush and he said that one of the things the big guys kept asking him was: how do you do this with 15 people?

And it’s certainly some of those constraints that are at work. I wanted to talk along similar lines on this idea of open source, open weights, and perhaps let’s stick with open weights, because open source is a controversial definition. A lot of the early models, at that time, there was a lot more open-weight stuff coming out. Of late, that’s gone down. And the power of open-weights models (and perhaps open source is different) is that you can actually tinker around with the model and customize it to your use case. But increasingly, we’ve seen a sort of a drop-off other than the Chinese models, Kimi and Qwen, which are still open-weight.

I wanted to just discuss among the two of you, perhaps from different perspectives, maybe the use case perspective and perhaps just the whole internet infrastructure perspective, how important open-weight is. And some of the backroom chatter I’ve been getting is that as these models get more performant, it becomes increasingly dangerous to put out highly performant models as open weights, because of something that OpenAI called malicious fine-tuning, and the fact that as these models become better, it’s easier to undo the fine-tuning guardrails that have been established so these models don’t do bad things. And so that’s why they won’t be released. So I know that the ecosystem needs open weights because not everyone has the time to do the training.

And someone just perhaps wants to take the pre-trained model and get it out. But I’m also hearing from the other side that open weights has this fundamental security challenge. And I know we’ll get to security separately, but just on open weights, what’s the, you know, what’s the way we thread this needle between these two things?

Matthew Prince

Well, okay, I’m going to tell a story. I don’t know that 100% of the story is right, but I think it adds up to something that approximates what’s right. Let’s imagine that over time, you are one of these major model makers: you’re OpenAI, you’re Anthropic, you’re Google. And you look at this and you say, huh, if we keep playing this out, then this is a commodity. And the only way that we win is if we restrict as many people from getting into the game as we possibly can. So how do you do that? I mean, one of the best ways to do that is just to scare everyone: if everybody has this technology, the world is going to end.

And so the next time you come across an AI doomer and they say, if everyone has this, the world is going to end, just keep pushing them. Just be like, okay, and then what happens? And then what happens? And then what happens? And basically, you know, the scariest scenario is these things can design very bad pathogens or other malicious biological vectors that could then get synthesized and spread around society. To which I say, well, then shouldn’t we be regulating the synthesizers, not regulating the sort of technology that’s out there? But, again, it gets to be very hand-wavy.

But if you think about it as a strategy, if you believe that these ultimately are commodities, then what you want to do is actually regulate. Regulate them in order to make it so that yours is the only company that can be safely trusted to handle this. And I think that that’s, again, somewhat cynically, a lot of the explanation for why the people that are building these horribly dangerous, scary things keep telling you how horribly dangerous and scary they are. I’ve never seen another industry that has done that. You never – you don’t see like the automobile industry be like, you know, this could plow through a crowd of people and kill and be used in a mass murder event, right?

You don’t – that just doesn’t make any sense. And so the only way that I can make sense of that world is if, from a business perspective, if it’s actually trying to do some sort of regulatory capture. And so I am pretty discounting on what the risks are here. I tend to think that more open is going to win, and I tend to think that the Chinese approach right now is the smartest approach to take on what looks like this enormous kind of just money machine which the U .S. is creating. And so it’s – I think that as India thinks about how it’s going to regulate AI – I would be careful about listening to sort of the AI doomers.

I would be especially careful about trying to regulate the output of what is fundamentally at least a pseudo-non-deterministic system. We have built machines that act like humans, and yet we think we can regulate them like machines. The better way to regulate them is actually more like humans. Look to the criminal code, not the engineering code, in order to figure out what that regulation should look like. And so I am very much pro-open. I think we should think about what these risks are and what these dangers are. We should definitely be testing and looking for those. But I tend to think that they are somewhat overblown. And if you want to understand why they are somewhat overblown, I would argue it is because it’s a strategy to keep the people who are currently in the lead in the lead going forward.

Rahul Matthan

I think you may be absolutely right, because on that big stage that you were at just a short while ago, yesterday, there was a call by one of these companies for an IAEA for AI, that it should be regulated like nuclear technology. And the other example I keep giving is, you know, at the turn of this last century, people were just walking around the streets and getting electrocuted, because electricity is highly dangerous. And yet we sit in this room, which literally the walls are buzzing with electricity, and we’re completely safe. And this is the nature of all technologies. But on the positive side, Rajan, all the AI deployers that we have in India, a large number of them are relying on open source.

And if the open source pipeline starts to diminish, where are they going to go? I mean, Sarvam can certainly deliver these models, but how important is open source actually to the community of people building on it? I mean, AI and ChatGPT and all this is all well and good, but it’s really those applications, the voice applications, that people need. How dependent are they on open source? What can we do to continue to keep this open?

Rajan Anandan

Look, I mean, firstly, as Matthew said, if you invest a trillion dollars, okay, you can’t give it away for free. It’s as simple as that. It’s just economics. So you can position it any which way you want, but fundamentally it’s about economics, right, and how do you build a business, especially if you have to invest so much. No, look, open source is absolutely critical. I mean, Llama is the most recent example, right, where the reality is, if you’re going to launch the next state-of-the-art version of Llama, it’ll be closed, because otherwise how are they going to monetize this, right? Especially if they’re spending $80 billion, $100 billion.

There are other ways to make money. They don’t make money from this. Even for them, $100 billion a year is kind of a lot, you know. So anyway, coming back: look, it’s super important to the ecosystem. I don’t have the answer to how to sustain it. I think in March you’ll see one of the big companies make a massive, massive announcement on their commitment to open source. But I think the only way you do this is that there has to be a different path, okay, because if you have to invest hundreds of millions of dollars to build a new model, or billions, or tens of billions, that’s not really open.

You can’t keep those models open, right? So to the first question that you asked in Matthew’s response, you really need to have a different way of doing this, right? Actually, by the way, if you look at voice AI, for instance, I’ll give you some data. You know, India has very low labor costs. If you look at the human cost of voice today in India, it’s five rupees a minute to about 20 rupees a minute. Five rupees a minute is the lowest you can get. Amex would probably be 40 or 50 rupees a minute. Today’s voice AI costs about three rupees a minute. So already, and that’s why, you’re beginning to see voice AI really begin to take off in India, right?

But even with today’s SOTA model, you can get to about maybe a rupee. Right now, the question is, even at a rupee, you’re one-fifth to one-twentieth the cost of humans. So it’s going to really take off. But if you want to make voice the primary medium through which 1.4 billion Indians will access AI, that’s still too expensive, right? You’ve got to get it down to maybe five paisa or ten paisa. And that’s actually not about open source. It’s about compute and the cost of inference. So if you ask me, open source is really, really important. But we have to find a way to get the cost of inference down.
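Rajan’s cost comparison can be sketched as a quick back-of-the-envelope calculation. The figures below are the estimates he quotes on stage, not measured data, and one rupee is taken as 100 paisa:

```python
# Back-of-the-envelope voice economics from the figures quoted in the discussion.
human_cost_per_min = (5.0, 20.0)  # rupees/min: cheapest human agent up to premium (Amex-style)
voice_ai_sota = 1.0               # rupees/min achievable with today's SOTA models
target = 0.05                     # rupees/min (5 paisa), the mass-adoption target

# How much cheaper is voice AI than a human agent at one rupee a minute?
ratio_low = human_cost_per_min[0] / voice_ai_sota    # vs. cheapest human
ratio_high = human_cost_per_min[1] / voice_ai_sota   # vs. premium human

# How far does inference cost still have to fall to reach the target?
gap = voice_ai_sota / target

print(f"Voice AI at 1 rupee/min is {ratio_low:.0f}x to {ratio_high:.0f}x cheaper than humans")
print(f"Inference cost must fall another {gap:.0f}x to reach 5 paisa/min")
```

The point of the arithmetic: even after undercutting human agents by up to twenty times, inference still needs a further twenty-fold cost reduction before voice can serve 1.4 billion users.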

Obviously, model size, all of these things matter. And, you know, we can talk about that as well. But a short answer is, look, it’s really important. But it is not clear to me how you do this, especially in the current game that we’re in. Because anybody that wants to be at the frontier, the way a frontier is defined today actually has to go out and invest. Right. And honestly, I don’t know how the Chinese are doing it because, you know, it’s a bit opaque as to exactly how much are they investing. You’re right. It’s kind of a hedge fund.

Matthew Prince

Which is basically what DeepSeek is; they have this on the side. Meta is the fascinating question here, because it took me a really long time to understand Meta’s strategy. Why are they doing all this VR? Why are they doing all of this AI? What they learned was the lesson that if you are caught on the wrong side of a platform shift, you become beholden to some other platform. In the past they were on the web, and that was fine, and social worked, and no one controlled the underlying platform. Then the platform shifted, all of a sudden it was mobile, and they were beholden to both Apple and Google. That put them on a back foot, and it really limited their business.

So they are desperate, whatever the next platform shift is, to stay in front of it. For a while that looked like it might be VR. That is less likely today, although never count these technologies out. The real next platform shift is almost certainly going to be AI. And so, since they control the social graph, which is an unreplicable kind of asset, they need to make sure that whatever the next platform is, they control it, or at least have an equal seat at the table with everyone else. So if they continue to invest in open source and you’re like, why are they spending so much money to do this?

It’s to make sure that when the next platform shift happens, they aren’t in the same back-footed position that they were in with Apple and Google. That would be my analysis of Meta.

Rahul Matthan

There’s a view, and I don’t want to comment either way, that AI is going to accelerate cyber attacks, because agentic swarms, et cetera, can do things that are much more dangerous. So what’s the evidence that you have for this?

Matthew Prince

Yeah, so I think this is sort of a long -term good news, short -term scary headline story. The long -term good news – let’s start with short -term scary headline. There are going to be a whole bunch of scary headlines of bad things that AI does. There will be a story about an Indian family who lost all their money because they wired it to some criminals that made it seem like their daughter had been kidnapped. I mean, we’re already seeing the level and sophistication of phishing scams go through the roof in terms of what is being done. And so the bad guys are going to use that to attack. The other thing that we’re seeing is – so there was an example.

There was a company called SalesLoft. It had a product called Drift, a piece of software that was connected into hundreds of thousands of Salesforce instances. SalesLoft got breached by a Russian hacker. The Russian hacker didn’t understand how Salesforce worked, so they kind of fumbled around for a really long time. Had they just used AI, which is what we’re now seeing a lot of North Korean and Chinese hackers do, they would have been instantly knowledgeable on how to get as much information out of Salesforce as quickly as possible, and the breach could have been orders of magnitude worse. So those are the bad stories, and there’s going to be real hardship and real pain caused by them.

The counter to that is that folks like us – I was just with Nikesh from Palo Alto Networks; Jay from Zscaler is here – we’re all using AI. We’re all using AI in our own systems to make them smart. In fact, at Cloudflare, we would have never described ourselves this way, but the whole theory of the company was: let’s get as much Internet traffic flowing through a machine learning system to be able to predict where security threats are. And internally, about three years ago, around the same time we all looked at ChatGPT and went, whoa, that’s amazing, was the first time the system said, bloop, here’s a new threat that no human has ever identified before.

And that went from being something that happened once in the first 15 years of Cloudflare’s history to now happening on an incredibly regular basis, where the machine learning is able to win. And so I think the good news is that the good guys will always have more data than the bad guys do. Again, with the caveat of regulation preventing us from using it for cybersecurity in various ways. But largely, we’re able to do that. And I think that we will actually use AI to stay ahead of these threats. That’s what we’re seeing. It’s going to require some change in any part of your life where you are today relying on what someone looks like or what they sound like in order to verify who they are and give them access to anything secure or confidential.

That’s got to change. And so the simple thing that you should all do with your immediate family at your next holiday meal is decide on a family password. And that seems silly, but I guarantee you, at some point some hacker is going to call up and say, hey, your son or your dad or your grandmother or whoever needs money. And if you say, hey, what’s the family password? And they say, I don’t know, Aardvark? You’ll know that it’s a scam, right? So it’s a simple thing that you can do. And it’s going to be these simple things, which I think are going to get translated into business. And I think businesses have got to move away from, oh, the person looked right, so we let them in the door.

Like that can’t happen in the cyber world. And so we’re going to have to lock systems down. There are going to be some scary stories, but I would predict again that in 10 years, we are more secure online than we are today.

Rahul Matthan

Rajan, I wanted to talk about data, because a lot of the conversation is around how the models that we have, Sarvam excepted, are models that are largely built in the West and therefore are Western systems. And I know there’s something of a land grab going on for the data. As far as the data companies in India are concerned, the companies are actually hoovering up the data, annotating it, making it ready. What’s the business model for them? Are they feeding this all back up to that one pin code in the US, or what’s their negotiating position? And I ask this question because we have a lot of data in this country, but I know that there are countries in Africa where the deal is already done and the data is out the door. There are deals I know of for 25 years’ worth of medical data out of Africa in exchange for setting up an EHR system, because that’s a deal they’ve done.

And I was wondering whether we are thinking about this in a nuanced way. And then, Matthew, I know you’ve got some ideas on crawling; I’ll come to you on that. What is actually happening on the ground with this?

Rajan Anandan

I think the first thing is we don’t have as many. I mean, there are, you know, initiatives or NGOs like AI4Bharat that are collecting data, but if you look at the leading global data companies, they’re not Indian, right? India probably has a handful of startups that are actually in the quote-unquote business of data for AI. So, first and foremost, because these companies are global, you’re absolutely right: all the data that Indians are generating is actually going to those few handfuls of companies. Now, that being said, look, firstly, for Indian companies to actually keep the data here, we have to have model companies, right? Otherwise you have to sell it, because if you’re in the data business you have to sell it to somebody. But I think the benefit we have is, honestly, we’ve probably collected less than one percent of the data we actually need if you really want to get to AGI, right? Like if you look at physical intelligence and things like that.

And India really has a competitive advantage there. In fact, we’ve been looking for startups we could find and fund that would basically do all kinds of data collection for robotics and things like that. So that’s number one. Number two, we’re also beginning to see companies that are leveraging their proprietary data in a very, very interesting way. I’ll just give you one example. We have a company called Cloud Physician, an Indian startup. They run remote ICUs in tier-two, tier-three towns in India, and they’ve been doing that for four or five years. They’ve got this extraordinary amount of proprietary data that they’ve now used to actually build about a dozen or so specialized models in healthcare.

And now they’re actually taking those models to market in the U.S. The kind of data they have, collected over four or five years for what was a healthcare delivery business, if you will, has been very valuable. So, you know, in our portfolio we have a handful of companies in different spaces that are using data as an advantage to actually build a final proposition, which is usually tied to some sort of domain model or something like that. But I do think we need a lot more innovation around this. I’m surprised we don’t have more companies that are actually trying to build businesses around India’s data advantage.

And second, I do think we need to have some smart regulation. I don’t know where the regulatory framework is on data. I think that’s going to be super, super important. I do know that AI4Bharat, et cetera, are being quite thoughtful about who they share data with, which is great. So, yeah, that’s sort of where it is. But it’s a huge opportunity for India. You know, my real view is, look, basically, all the data on the Internet is accessible to everybody. You just need, like, literally large amounts of capital. Most of the data that we need to get to AGI we don’t have yet.

And we have 1.4 billion people creating it.

Rahul Matthan

But, Matthew, you wanted to intervene in that whole thing. You have something called – maybe you didn’t call it that, maybe the media started to call it that – pay-per-crawl, and you may have something more sophisticated, like an AI audit or something like that. What’s the idea behind that? Because that’s also part of this democratization of AI, as I see it.

Matthew Prince

So firstly, correcting a little bit of a misconception: with all the money in the world, you still can’t even crawl the Internet. How much less of the Internet does Microsoft see than Google? Microsoft Bing, they’ve thrown a ton of money at it. For every six pages that Google sees, Microsoft sees one. OpenAI knows how much of an advantage that is. For every 3.5 pages that Google sees, OpenAI sees one. But that means that two-thirds of the Internet is hidden to even the most sophisticated model. Anthropic, it’s almost 10 to 1 in terms of what’s there. And so if you want to ask why Gemini just leapfrogged OpenAI, I don’t think it’s the chips. I don’t think it’s the researchers.

I actually think it’s the data. And I think getting access to data is important. And so if we want to have a level playing field, there’s a real risk that Google is going to leverage the monopoly position they had indexing the Internet yesterday in order to win in the AI market tomorrow. And that’s something that we’re really concerned about. And I think we have to do one of two things. We either have to bring Google down and say that they have to play by the same rules as all the other AI companies. That’s something that you could do from a regulatory perspective, and that’s something that the U.K. is looking into. Canada is looking into it.

Australia is looking into it. The alternative is: how do we give all the other AI companies the same access that Google has? And that’s, I think, an opportunity to also solve some of the democratization challenges out there. One of the things I really worry about is that AI is going to disrupt the fundamental internet business model. The fundamental internet business model was: create content, drive traffic, and then sell things, subscriptions, or ads. That was it. I don’t care if you’re B2B, B2C. I don’t care if you’re a media company. That was it. Create great content, drive traffic, sell things, subscriptions, or ads. AI doesn’t work that way. So just take a media company. If AI scrapes your content and takes it, let’s say it’s the New York Times, or the Times of India, or whatever it is, you can now go to your AI and just say, summarize all the articles from the New York Times that would be of interest to me.

And you’re going to read it there. Now, that’s great for you as a user. It’s a better user experience, so it’s going to win. But now the Times of India isn’t selling a subscription or an ad. Now the New York Times isn’t getting anyone to click on an ad. And that’s going to make it harder. And to make clear how much harder it’s gotten: ten years ago, for every two pages that Google scraped on the Internet, they sent you one unique visitor. And then you could monetize that visitor by selling them things, subscriptions, or ads. Today, what is it? 30 to 1 in Google’s case, 50 to 1 in Bing’s case. And that’s the good news.

In OpenAI’s case, it’s 3,500 to 1. In Anthropic’s case, it’s half a million to 1. They take half a million pages for every one page of traffic they give back. So AI takes, but it doesn’t always give back. And if the currency of the Internet has been traffic, that traffic is gone. And it’s getting harder and harder to make money through the traditional business model of the Internet. So one of two things happens. One is, well, the Internet just dies. But that’s not going to happen, because the AI companies need the content. They need the information. They need the things that are out there. The alternative is that a new business model emerges. So what happens?
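The crawl-to-referral arithmetic Prince cites can be laid out in a few lines. The ratios are the figures he quotes on stage; treat them as illustrative round numbers, not audited data:

```python
# Pages crawled per unique visitor referred back to the publisher,
# per the figures quoted in the discussion.
crawl_to_visitor = {
    "Google, ten years ago": 2,
    "Google, today": 30,
    "Bing, today": 50,
    "OpenAI": 3_500,
    "Anthropic": 500_000,
}

# Compare each ratio against the old web bargain of two pages per visitor.
baseline = crawl_to_visitor["Google, ten years ago"]
for source, ratio in crawl_to_visitor.items():
    decline = ratio / baseline
    print(f"{source}: {ratio} pages per visitor ({decline:g}x the old 2:1 deal)")
```

On these numbers, a page crawled by Anthropic returns a quarter-million times less traffic than the same page crawled by Google a decade ago, which is the collapse in the "currency of the Internet" that the new business model has to replace.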

And that’s what’s going to happen over the course of the next five years: a new business model is going to emerge for the internet. And think how exciting that is. Think how rare new business models for something as grand and as large as the internet are, how often they emerge. Almost never. And yet we’re all going to live through it, and that’s an incredible opportunity. I don’t know quite what it is, but it has to be some way that the people who are creating the content and creating the value get compensated for the things that they are creating. The encouraging version of this is to think about the music industry. The entire music industry 22 years ago was valued at 8 billion US dollars, which is a lot of money, but it’s not that much money, because that was The Beatles and The Rolling Stones and, like, everything, right? Why was that? Well, because Napster and Grokster and Kazaa and all these things had commoditized music. They were basically taking the music, and musicians weren’t getting paid for it anymore. What changed? One day Steve Jobs walked on stage and he said, it’s going to be 99 cents per song, right? iTunes launched almost 22 years ago to this day. And that wasn’t the business model that won, but at least it was a business model, and it started the conversation. And that evolved into the business model that won, which is something closer to Spotify, which is now, I don’t know what it is in India, but in the U.S. it’s like ten dollars a month. And what’s incredible is that Spotify last year sent over 12 billion dollars to musicians, more than the entire music industry was worth 22 years ago. And that’s just Spotify. There’s Apple Music and Tidal and TikTok and YouTube and tons of people. There’s more money going into music creation today than at any other time in human history, by an order of magnitude. Now, different winners and losers, and we can debate whether the right people are winning and the right people are losing, but there is more money going into music creation today than at any time in human history. And so as we figure out what the next business model of the internet is going to be, let’s try not to make it one that’s worse.

Let’s try and learn the lessons, because traffic was always a terrible proxy for quality. So let’s actually find something that is a proxy for quality, and let’s reward the people who are creating that. And the good news is, I think that’s what everyone in this room wants. It’s what Sam wants. It’s what Dario wants. It’s even what Elon probably wants. And that’s the sort of thing that is actually going to drive a healthier internet ecosystem. I actually think that a lot of what’s wrong with the world today is that we have monetized traffic. And what that has meant is we have monetized making people emotional or angry or whatever gets them to click on things, which is part of what’s driven society apart in a lot of ways.

I think if instead what we monetize and what we reward is the creation of human knowledge, that’s what the AI companies want, that’s what we all want, and I think that’s what we can actually do to bring our society back together.

Rahul Matthan

I want to turn it over to the audience for questions. I don’t want to be the only one asking questions. I’ll take – hands are going. I’m going to take three questions at a time.

Matthew Prince

I like Indian audiences. They ask questions. Like you go to the UK and everyone just sits on their hands.

Rahul Matthan

No, no. Indian audiences are very, very – now we’ll have to shut them up, because we don’t have time. I’m going to take this one. I’m going to take that one. I’m going to take this one, right? So first up here, yeah? And I have a rule: a question, not a statement. So it has to end with your voice going up a little bit. Then I know it’s a question.

Audience

Sir, this is for you. You’ve touched upon a lot of interesting topics across domains. First of all, I remember you talking about the deterministic AI outcomes. Now AI having crossed the threshold –

Rahul Matthan

Give me the question.

Audience

Okay. So how – So what, in your view, would make AI trustworthy? Is it something to do with explainability, deterministic AI, and what would be the pathways?

Rahul Matthan

Let me get a couple more. Otherwise, we won’t get through. The lady at the back there. So one is, I’ll keep track of it. How do we make AI more trustworthy?

Audience

My question is for Matthew. So you mentioned pay-per-crawl. We see robots.txt getting ignored. My question for you is: what makes you believe that AI companies would be equally invested in creator-based compensation, when AI crawls the Internet and is not giving back attribution or compensation?

Rahul Matthan

Trustworthy, and how do creators get paid? And attribution. I think she also wants to do attribution. This gentleman here.

Audience

Hi. My question is for Rajan. So, Rajan, you were explaining about the consumer and vertical parts of the application layer. So what do you think, where are we in terms of investment from a venture capital point of view? How can we match the Y Combinator and a16z level of investments?

Matthew Prince

Great. So AI is already more trustworthy than most humans. The simple fact is that AI is a better driver than 99.99 percent of humans that are on the road today. Literally, since I started talking, within a kilometer of where we’re sitting there was an accident between two cars. I mean, we just know that’s happening; we’re sitting in Delhi, right? You will not be able to find any news about that anywhere, in any publication anywhere on Earth. And yet, if one of those two cars had been a self-driving car, it would have been front-page news around the world. The expectations for AI are too high. We have built a system that acts like humans, and we need to think of it as acting like humans.

The smartest CEO that I know in terms of doing this is Robin Vince at BNY Mellon. In their case, they actually have AI employees. The AI employees get an employee number. They get an email address. They get a quarterly review. They can get fired if they don’t do a good job. They can get promoted if they do a good job. I asked if there are any AIs that are supervising humans. He said, not yet, but it’s inevitable. That’s the way to think of it, right, is that they act like humans because they are like humans. And, again, we are all fallible, and we’re all going to make mistakes, but already we see in certain disciplines like driving, AI is better than human beings are.

In terms of getting paid, I think the empirical evidence is this. Forget robots.txt; that’s like a no-trespassing sign, anyone can ignore it. When you actually block the AI agents, which is what we have done, then they come to the table. And so with big publishers like Condé Nast, Dotdash Meredith, and others, where starting July 1st we said all of the AI companies are blocked, they actually came to the table, and they were able to get paid, get deals done. In the case of Reddit, Reddit was willing to block everyone, including even Google. And as a result, the public number is that they got seven times as much for licensing the Reddit corpus as the New York Times did, even though the two corpora are about the same size.

So again, I think that the first step in any market is having some level of scarcity. As long as you’re making it easy for anyone to take your data, then you’re not going to get paid for it.
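Prince’s distinction between robots.txt (advisory) and actual blocking (enforced) can be illustrated with a minimal server-side filter. This is a hypothetical sketch, not Cloudflare’s implementation; the user-agent tokens are examples of published AI crawler identifiers, and any real block list would need ongoing maintenance:

```python
# Minimal sketch: enforce a block on AI crawlers at the server,
# rather than relying on robots.txt, which a crawler is free to ignore.
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended")

def should_block(user_agent: str) -> bool:
    """Return True if the request's User-Agent matches a known AI crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

def handle_request(user_agent: str) -> int:
    # 402 Payment Required: the content is there, but access is negotiable.
    return 402 if should_block(user_agent) else 200

print(handle_request("Mozilla/5.0 (compatible; GPTBot/1.2)"))        # 402
print(handle_request("Mozilla/5.0 (Windows NT 10.0) Firefox/120.0"))  # 200
```

The design choice mirrors the argument in the talk: refusing the request outright creates the scarcity that makes licensing negotiations possible, which robots.txt alone cannot do.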

Rajan Anandan

Yeah, on the question on consumer AI: very few people know this, but India today has more consumer AI startups than the US. In fact, on Tuesday this week at the Pitchfest, just our firm, one firm, announced five new seed investments in AI companies. Four out of the five are consumer AI companies, right? And the reason is, and we think this is going to explode, that we have 900 million Indians on the internet, 850 million of them active every day, seven hours a day on the internet, and every space has potential for tremendous innovation, right? Take education: education hasn’t been accessible to a large part of the population, and online education has just been too expensive, right?

But today with AI, you can have a 99-rupees-a-month plan with an AI tutor. In fact, the fastest growing AI education company in the world is in India, and nobody’s really heard of it because, fortunately, these guys are just in stealth and just building, which is very good. So I think it’s a great time to be building in consumer AI. Actually, it’s a great time to be building AI companies generally, but especially in consumer AI we’re going to see some breakouts. Look, the world’s leading consumer AI companies in education, healthcare, entertainment, et cetera, will be either here or in China. They won’t be in the Western world, because the need is here.

Rahul Matthan

The one beautiful thing about this summit is there have been so many wonderful, rich, diverse conversations. This is one of them. Matthew, Rajan, thank you so much. Thank you all for being such a good audience. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (22)
Factual NotesClaims verified against the Diplo knowledge base (2)
Additional Contexthigh

“Modern AI workloads require massive numbers of GPUs, and the GPU market is dominated by NVIDIA, whose chips are power‑hungry and costly.”

The knowledge base notes that AI requires many chips and that the market is largely supplied by a single manufacturer, NVIDIA, highlighting the dominance and scarcity of GPUs [S1] and discussing the NVIDIA monopoly [S86]; however it does not mention the chips’ origins in gaming consoles or Bitcoin mining, so that detail is not corroborated.

Confirmedhigh

“Only a tiny global pool of engineers can design, train and operate large AI models, driving up salaries and limiting broader participation.”

The knowledge base confirms a very limited pool of experts worldwide, citing a tiny pool of specialists and estimating roughly 1,000 engineers capable of training extremely large models, which aligns with the report’s statement [S89] and [S51].

External Sources (102)
S1
Open Internet Inclusive AI Unlocking Innovation for All — Hi. My question is to Rajan. So, Rajan, what do you think you were explaining about the consumer and vertical part in th…
S2
https://dig.watch/event/india-ai-impact-summit-2026/open-internet-inclusive-ai-unlocking-innovation-for-all — Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than…
S3
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S4
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S5
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S6
https://dig.watch/event/india-ai-impact-summit-2026/open-internet-inclusive-ai-unlocking-innovation-for-all — Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than…
S7
Protecting Democracy against Bots and Plots — In summary, Cloudflare utilizes AI and machine learning to anticipate and address threats and vulnerabilities, while pro…
S8
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Matthew Prince Cloudflare — -Matthew Prince- CEO, Cloudflare (formerly a professor who taught history) -Moderator- Event moderator/host Thank you….
S9
Fireside Conversation: 01 — -Rahul Matthan: Role/Title: Partner at Tri Legal, conversation moderator; Areas of expertise: Legal matters (implied fro…
S10
Open Internet Inclusive AI Unlocking Innovation for All — Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than…
S11
Keynote-Rishad Premji — -Rahul Mattan: Role/Title: Discussion moderator; Area of expertise: Not specified
S12
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S13
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S14
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S15
Multi-stakeholder Discussion on issues about Generative AI — He said that their current hardware technology is too energy consuming and expensive. This signifies the significance o…
S16
Defending the Cyber Frontlines / Davos 2025 — – Matthew Prince: CEO of Cloudflare Matthew Prince: Absolutely. So Cloudflare’s, our mission is to help build a bette…
S17
Global Perspectives on Openness and Trust in AI — Corporate rhetoric has become sophisticated in adopting inclusion language while ultimately promoting closed platforms a…
S18
UK NCSC: AI will escalate the frequency and impact of cyberattacks — The UK’s National Cyber Security Centre (NCSC), a division of GCHQ, has issued an assessment focusing on the imminent infl…
S19
The intellectual property saga: The age of AI-generated content | Part 1 — The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2 The intellectual property saga: app…
S20
Certifying humanity: Labeling content amid AI flood — For much of the public debate around artificial intelligence, attention has been fixed on capability: how powerful model…
S21
Keynote interview with Geoffrey Hinton (remote) and Nicholas Thompson (in-person) — Machines could potentially outperform humans in cognitive tasks
S22
Enhancing rather than replacing humanity with AI — People’s judgment remains crucial, particularly for decisions that involve values, context, or individual circumstances.
S23
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — India focuses on smaller models for specific use cases rather than chasing trillion-parameter models
S24
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Absolutely, Ankit, just trying to, this is something which I know two years back when we said that I’m putting 8000 GPUs…
S25
Semiconductors — Governments worldwide are increasingly recognizing the strategic importance of semiconductors. Policies are being develo…
S26
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S27
What policy levers can bridge the AI divide? — – **Affordability**: Internet costs exceeding recommended percentages of income Lacina Kone: H.E. Mr. Solly Malatsi to …
S28
Laying the foundations for AI governance — – The four fundamental obstacles identified by the moderator: time, uncertainty, geopolitics, and power concentration A…
S29
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Anne Flanagan: Hello, apologies that I’m not there in person today. I’m in transit at the moment, hence my picture on yo…
S30
Building Trustworthy AI Foundations and Practical Pathways — “India has scale, India has linguistic diversity, but India also has a lot of different things.”[63]. “In many regions o…
S31
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S32
Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And these vision models are actually very good for document digitization. They’re very good at language layout understan…
S33
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S34
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Cost reduction in technology deployment In sum, this analysis illustrates that open source software serves not merely a…
S35
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — – **Satish** – Has long background in open source, presently part of ICANN and DotAsia organization Audience: My name i…
S36
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — Gallia Daor: Sure. Thank you. So indeed, in 2019, the OECD was the first intergovernmental organization to adopt principles…
S37
From summer disillusionment to autumn clarity: Ten lessons for AI — Additionally, the EU’s long-negotiated AI Act imposes strict rules on AI systems (e.g. high-risk systems must meet safet…
S38
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S39
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S40
Governance of the Domain Name System and the Future Internet — 1100    Discussion of presentation by Drake 1600Net neutrality, privacy and innovative business models Day 1 – Monday …
S41
Slow politics for fast digital developments — For example, in data policy, many governments are lobbying for wide access to Internet data in order to ensure the prote…
S42
Policy Meets Tech – Journey Diary — Convergence issues: How can the risks from this business model be mitigated in light of newer models linked to artificia…
S43
INCREASING ACCESS TO DATA ACROSS THE ECONOMY — –  Primary research to fill gaps in the existing evidence base on the issues that prevent data sharing, and in particul…
S44
The open-source gambit: How America plans to outpace AI rivals by democratising tech — The AI openness approach will spark a heated debate around the dual nature of open-source AI. The benefits are evident i…
S45
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Bilel Jamoussi:Great. Thank you. Google has a history of both open source contributions and proprietary developments. Ar…
S46
Open Internet Inclusive AI Unlocking Innovation for All — The discussion revealed sophisticated understanding of the tensions surrounding open-source AI development. Prince offer…
S47
WS #208 Democratising Access to AI with Open Source LLMs — Audience: Is it working now? Yes, perfect. Hi. Thank you very much for your panel and the interesting discussion th…
S48
High Level Session 3: AI & the Future of Work — A significant tension emerged around data ownership and worker compensation. Actor and entrepreneur Joseph Gordon-Levitt…
S49
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Data ownership and governance concerns are major obstacles in today’s digital landscape. Microsoft recognizes the growin…
S50
Submission by the South Centre to the Draft Issues Paper on Intellectual Property Policy and Artificial Intelligence (WIPO/IP/AI/2/GE(20/1) — Among the possible reasons against new rights in data: data may be already sufficiently protected under existing …
S51
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S52
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — India’s strategy focuses on deploying smaller, sector‑tailored models that consume less energy and cost, rather than pur…
S53
The State of the model: What frontier AI means for AI Governance — ## Presentation Interruption and Conclusion ## Technical Challenges and Limitations ### Current System Problems ### D…
S54
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In addition to their internal initiatives, Microsoft recognises the need for active participation from private companies…
S55
Can we test for trust? The verification challenge in AI — Painter describes how frontier safety policies create a framework for companies to set conditional red lines based on sp…
S56
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — – **Satish** – Has long background in open source, presently part of ICANN and DotAsia organization Audience: My name i…
S57
The Global Power Shift: India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S58
India faces AI challenge as global race accelerates — China’sDeepSeekhas shaken the AI industry by dramatically reducing the cost of developing generative AI models. While gl…
S59
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Florian Ostmann:Thank you, Matilda. So with that set out in terms of what kinds of standards we are focused on and why w…
S60
State of play of major global AI Governance processes — SHAN Zhongde:Thank you very much. It’s a very important initiative worldwide. And we are going to promote development. S…
S61
How to make AI governance fit for purpose? — – Jennifer Bachus- Chuen Hong Lew – Jennifer Bachus- Shan Zhongde- Chuen Hong Lew Innovation should be prioritized ove…
S62
Laying the foundations for AI governance — – **Industry perspective on regulation**: Companies, particularly startups, actually want regulation but need clarity an…
S63
Open Internet Inclusive AI Unlocking Innovation for All — “largely produced today by one manufacturer, NVIDIA, that use a ton of power and are very, very expensive”[2]. “And so i…
S64
The Dawn of Artificial General Intelligence? / DAVOS 2025 — Yoshua Bengio: Yeah, I have a comment about values and trying to make AI behave morally. This question has been studi…
S65
What policy levers can bridge the AI divide? — – **Affordability**: Internet costs exceeding recommended percentages of income Lacina Kone: H.E. Mr. Solly Malatsi to …
S66
Setting the Rules: Global AI Standards for Growth and Governance — This comment helped explain the seemingly paradoxical situation of competitors collaborating on standards by revealing t…
S67
From Innovation to Impact: Bringing AI to the Public — Yes. So do you think the whole banking system will become redundant? Because today if I have to make a transaction, I’ll…
S68
Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Addressing potential concerns about technological nationalism, Mazumdar-Shaw emphasised that “sovereignty is not isolati…
S69
Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And these vision models are actually very good for document digitization. They’re very good at language layout understan…
S70
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S71
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Bilel Jamoussi:Great. Thank you. Bilel Jamoussi:Great. Thank you. Google has a history of both open source contribution…
S72
Democratizing AI: Open foundations and shared resources for global impact — El-Assady emphasised the crucial distinction between “open source” and “open weight” models. Unlike models that merely s…
S73
Driving Social Good with AI: Evaluation and Open Source at Scale — And obviously just because you open source the software doesn’t mean that the data that’s produced with it is open data….
S74
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Audience: My name is Satish and I have a long background in open source. I am presently part of ICANN and DotAsia organi…
S75
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S76
From summer disillusionment to autumn clarity: Ten lessons for AI — Additionally, the EU’s long-negotiated AI Act imposes strict rules on AI systems (e.g. high-risk systems must meet safet…
S77
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S78
WS #187 Bridging Internet AI Governance From Theory to Practice — Vint Cerf: First, I have to unmute. So thank you so much, Alex. I always enjoy your line of reasoning. Let me suggest a …
S79
WS #82 A Global South perspective on AI governance — AUDIENCE: Ends up. We cannot hear. Rely on ISO 31,000 is what they see as the kind of framework for risk assessments…
S80
A tipping point for the Internet: 10 predictions for 2018 — Figure 1. Current Internet business model In the current Internet business model (Figure 1) user data is collected, pro…
S81
Slow politics for fast digital developments — For example, in data policy, many governments are lobbying for wide access to Internet data in order to ensure the prote…
S82
AI and Digital in 2023: From a winter of excitement to an autumn of clarity — At theeconomic level, the internet business model is based on data. The role of tech companies which process user data, …
S83
Digital business models — In this business model, user data is the core economic resource. When searching for information and interacting on the i…
S84
The Future of the Internet: Navigating the Transition to an Agentic Web — – Aman Bhutani- Malte Kosub Competition and Market Structure Development | Economic | Sustainable development Leurent…
S85
9821st meeting — Ecuador: Mr. President, I thank the United States for convening this important meeting. I also thank the Secretary Genera…
S86
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Losing grip of the NVIDIA monopoly NVIDIA’s dominance is not as unshakeable as it appeared at the start of the year, du…
S87
The challenges of introducing Generative AI into the marketplace — I have been hearing a lot about the shortage of powerful GPUs for AI lately. It seems like the demand is much bigger tha…
S88
From KW to GW Scaling the Infrastructure of the Global AI Economy — Good morning to all of you. As Rakesh has already introduced, two companies are planning for a lot of things together. A…
S89
How Multilingual AI Bridges the Gap to Inclusive Access — Capacity development | Artificial intelligence Data, talent, and compute constraints in building multilingual models H…
S90
!” — To summarize, one would normally expect technological change to increase youth wage inequality – and to a lesser extent …
S91
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Examples of complex dialysis machines sitting idle due to lack of trained nurses, aircraft manufacturers lacking mainten…
S92
Agenda item 5: Day 2 Afternoon session — He pointed out that capacity building should be tailored to these specific gaps. The importance of civil society and reg…
S93
Ray Dalio warns of global breakdown behind market turmoil — Billionaire investorRay Daliohas warned that the recent market turbulence is part of a larger global crisis. The turmoil…
S94
Debating Education / DAVOS 2025 — The discussion revealed significant obstacles to reforming higher education institutions. Lawrence H. Summers provocativ…
S95
AI Meets Cybersecurity: Trust, Governance & Global Security — The discussion revealed tension between regulatory and market-based approaches to AI security. Tiwari argued that “polic…
S96
Keynote-Bejul Somaia — “When intelligence becomes abundant, when a founding team of five can do the work that previously required 50, when ever…
S97
https://dig.watch/event/india-ai-impact-summit-2026/keynote-rishad-premji — Government initiatives to train 10 million young people in AI, along with industry partnerships with universities, are e…
S98
National Strategy for Artificial Intelligence — Subjects that can be classified as artificial intelligence are part of several study programmes, but are most common in …
S99
AI adoption soars in the UK but skills gap looms — AI adoption in the UK hasgrown rapidly, rising by 33% over the past year. According to a new report from AWS, 52% of UK …
S100
Shaping Investment: Spurring Investment in Cyber Sector Start-Ups — Shoaib Yousuf:It’s absolutely an opportunity. It’s absolutely an opportunity. However, the challenge is the scalability …
S101
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — It is same that in primary sector, there will be a lot of silicon.
S102
The reality behind AI hype — As governments and tech leaders gather at global forums such as the AI Impact Summit in New Delhi, one assumption domina…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M
Matthew Prince
8 arguments · 186 words per minute · 4453 words · 1431 seconds
Argument 1
AI hardware dependence on NVIDIA GPUs makes AI expensive and inefficient (Matthew Prince)
EXPLANATION
Matthew explains that current AI systems rely heavily on NVIDIA GPUs, which are costly and power‑hungry, and which were originally designed for gaming and cryptocurrency mining rather than AI workloads. This hardware dependence drives up the expense and complexity of building AI models.
EVIDENCE
He notes that AI requires “lots and lots of chips” largely produced by NVIDIA, which consume a lot of power and are very expensive, and that these chips were never built for AI but for gaming consoles and Bitcoin mining before being repurposed for superintelligence [23-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources note that current AI hardware is dominated by NVIDIA GPUs, which consume a lot of power and are very expensive, raising concerns about cost and efficiency [S1][S15].
MAJOR DISCUSSION POINT
Hardware dependence on NVIDIA GPUs inflates AI costs
AGREED WITH
Rajan Anandan
DISAGREED WITH
Rajan Anandan
Argument 2
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
EXPLANATION
Matthew predicts that as more companies enter the silicon market and supply constraints ease, the cost per unit of AI compute will drop, turning AI models into commodities. He forecasts that frontier‑level specialized models could be built for $10 million within five years.
EVIDENCE
He cites the historical pattern of silicon shortages turning into gluts, the entry of many startups and incumbents into GPU production, and the expectation that unit costs will decline [50-54]. He also points to the competitive dynamics among model builders (Google, Anthropic, OpenAI) suggesting a commodity market, and then states his prediction of building frontier models for $10 million in five years [60-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince himself asks whether the high-cost, high-power hardware situation is permanent or will change, hinting at future price reductions and commoditization [S1].
MAJOR DISCUSSION POINT
Future drop in chip prices will commoditize AI models
AGREED WITH
Rajan Anandan
DISAGREED WITH
Rajan Anandan
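Prince’s five-year forecast can be sanity-checked with simple arithmetic. A minimal sketch follows; the ~$1 billion current frontier training cost is an illustrative assumption (the talk gives only the $10 million five-year target):

```python
# Hedged sketch: annual cost-decline rate implied by Prince's forecast.
# ASSUMPTION: today's frontier-model training cost is ~$1B (illustrative only);
# the $10M-in-five-years target is the figure from the talk.

current_cost = 1_000_000_000   # assumed current frontier training cost, USD
target_cost = 10_000_000       # Prince's five-year target, USD
years = 5

total_factor = current_cost / target_cost       # 100x overall reduction
annual_factor = total_factor ** (1 / years)     # ~2.5x cheaper each year
print(round(total_factor), round(annual_factor, 2))  # → 100 2.51
```

Under that assumption, costs would need to fall roughly 2.5x per year for five consecutive years, which is why the prediction hinges on new entrants breaking the current GPU supply constraints.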
Argument 3
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
EXPLANATION
Matthew argues that the alarmist “AI doom” story is a strategic tool for incumbent firms to push for regulatory capture that limits competition. He believes that more openness, including open‑weight models, will ultimately favor competition and reduce the power of dominant players.
EVIDENCE
He describes how companies scare the public about existential risks to restrict entry, likening it to regulatory capture, and asserts that being more open will win in the long run, noting the Chinese approach as smarter and warning against AI doomers influencing regulation [176-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of corporate rhetoric describe how firms adopt inclusive language while ultimately promoting closed platforms, providing a counter-perspective to Prince’s openness argument [S17].
MAJOR DISCUSSION POINT
AI doom narrative serves regulatory capture; openness benefits competition
AGREED WITH
Rajan Anandan
DISAGREED WITH
Rajan Anandan, Rahul Matthan
Argument 4
AI will amplify phishing, social‑engineering, and cyber‑attack capabilities, creating short‑term security headlines (Matthew Prince)
EXPLANATION
Matthew warns that AI will make phishing and social‑engineering attacks more sophisticated and widespread, leading to alarming headlines in the near term.
EVIDENCE
He gives examples of increasingly sophisticated phishing scams, a breach at SalesLock where a Russian hacker could have used AI to quickly understand Salesforce, and predicts a surge in such AI-enabled attacks [266-277].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UK National Cyber Security Centre warns that AI will significantly increase the frequency and impact of cyber-attacks, supporting concerns about AI-enabled phishing and social engineering [S18].
MAJOR DISCUSSION POINT
AI will intensify cyber‑attack threats
Argument 5
AI‑driven threat detection can make networks more secure than human‑only defenses (Matthew Prince)
EXPLANATION
Matthew counters the previous point by highlighting that AI can also strengthen security, as machine‑learning systems can detect novel threats faster and at scale, giving defenders an advantage over attackers.
EVIDENCE
He describes Cloudflare’s ML system that processes massive internet traffic to predict security threats, noting that such detections have become regular and more effective than earlier human-only methods [278-283].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cloudflare reports stopping over 220 billion attacks daily using AI-driven machine-learning systems, illustrating how AI improves cyber-defense [S16].
MAJOR DISCUSSION POINT
AI improves cyber‑defense capabilities
Argument 6
AI will erode traffic‑based monetization; a new model that directly compensates content creators is needed (Matthew Prince)
EXPLANATION
Matthew explains that AI’s ability to scrape and summarize content reduces the value of web traffic, undermining traditional ad‑based revenue models. He calls for a new business model that rewards creators directly for the knowledge they generate.
EVIDENCE
He outlines how AI can ingest entire news sites and deliver summaries, eliminating the need for users to visit the original site, which collapses traffic-based monetization; he then draws a parallel with the music industry’s shift from piracy to streaming, arguing a similar transformation is required for the internet [368-410].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince argues that AI’s ability to ingest and summarize content will collapse traffic-based revenue, calling for new creator-centric business models; this transformation is discussed in the broader context of internet economics [S1].
MAJOR DISCUSSION POINT
Need for creator‑centric internet revenue model
Argument 7
Creating scarcity (e.g., licensing agreements) can force AI firms to pay for copyrighted corpora (Matthew Prince)
EXPLANATION
Matthew suggests that by making data scarce—through licensing blocks or scarcity mechanisms—content owners can compel AI companies to pay for access to copyrighted material, thereby ensuring compensation for creators.
EVIDENCE
He recounts how blocking AI agents forced companies like Reddit to negotiate licensing deals that paid seven times more than the New York Times, illustrating that scarcity can drive payments for data use [474-477].
MAJOR DISCUSSION POINT
Scarcity can be leveraged for creator compensation
DISAGREED WITH
Rajan Anandan, Audience
Argument 8
AI systems can be more trustworthy than most humans in safety‑critical tasks, suggesting that AI may outperform human judgment in certain domains.
EXPLANATION
Prince argues that AI already demonstrates higher reliability than the vast majority of human operators in areas such as autonomous driving, indicating that AI can be a more dependable agent in specific contexts.
EVIDENCE
He states that “AI is already more trustworthy than most humans” and gives the example that a self-driving car would be safer than 99.99 % of human drivers, illustrating AI’s superior safety performance [450-452].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Expert commentary notes that machines can outperform humans in many cognitive tasks and that AI can be safer than the vast majority of human operators, especially in autonomous driving [S21][S22].
MAJOR DISCUSSION POINT
AI reliability versus human performance
R
Rajan Anandan
11 arguments · 195 words per minute · 2620 words · 802 seconds
Argument 1
India should prioritize low‑cost, smaller language models for local use rather than pursuing AGI (Rajan Anandan)
EXPLANATION
Rajan argues that India’s priority should be building affordable, high‑performing models of up to around a hundred billion parameters tailored to Indic languages, rather than chasing trillion‑parameter AGI systems.
EVIDENCE
He notes that India does not need AGI, cites Sarvam’s 30-100 billion-parameter models that are state-of-the-art in Indic languages and cost-effective compared to global models, and emphasizes the need for low-cost models for 1.4 billion people [75-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Anandan emphasizes that India does not need AGI and should focus on affordable, high-performing Indic models; this view is echoed in discussions about India’s strategy to build smaller models [S1][S23].
MAJOR DISCUSSION POINT
Focus on affordable, local language models over AGI
AGREED WITH
Matthew Prince
DISAGREED WITH
Matthew Prince, Rahul Matthan
Argument 2
Building a sovereign stack—including domestic chip design and GPU investments—will reduce reliance on foreign hardware (Rajan Anandan)
EXPLANATION
Rajan proposes that India develop its own semiconductor and GPU ecosystem, investing in domestic startups and partnerships, to achieve a sovereign AI stack less dependent on external suppliers.
EVIDENCE
He mentions that 20 % of global semiconductor designers are Indian, the growth from zero to 35-40 semiconductor startups, recent investments in GPU company Agrani and memory firm C2I, and the need for a sovereign stack despite alliances, also noting large Indian corporate AI infra investments [107-119].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s push to scale domestic GPU capacity to 50-60 k units and policy initiatives to develop a local semiconductor ecosystem provide concrete backing for a sovereign AI stack [S24][S25].
MAJOR DISCUSSION POINT
Sovereign AI hardware stack for India
AGREED WITH
Matthew Prince
DISAGREED WITH
Matthew Prince
Argument 3
Open‑source is essential for the ecosystem, but the massive investment required to train models makes fully open models economically unsustainable (Rajan Anandan)
EXPLANATION
Rajan acknowledges the critical role of open‑source for AI development but warns that the billions of dollars needed to train large models make it financially impossible to keep them fully open without a new economic model.
EVIDENCE
He states that open-source is “absolutely critical” yet cites the $80-100 billion spending on models, arguing that such scale of investment cannot be sustained with fully open models and that a different path is needed [217-232].
MAJOR DISCUSSION POINT
Economic limits to fully open AI models
DISAGREED WITH
Matthew Prince, Rahul Matthan
Argument 4
Most Indian data currently flows to a few global data firms; India needs home‑grown data‑collection startups to retain value locally (Rajan Anandan)
EXPLANATION
Rajan points out that the majority of Indian‑generated data is captured by a handful of foreign data companies, and stresses the need for domestic startups that collect and retain data within India to preserve economic value.
EVIDENCE
He observes that only a few Indian startups are in the AI-data business, that global firms currently own most Indian data, and argues for building more local data-collection companies, citing initiatives like AI for Bharat and the need for model companies to keep data in-country [313-319].
MAJOR DISCUSSION POINT
Need for domestic data‑collection ecosystem
DISAGREED WITH
Matthew Prince, Audience
Argument 5
Proprietary domain data (e.g., remote‑ICU telemetry) can be leveraged to build specialized, exportable AI models (Rajan Anandan)
EXPLANATION
Rajan gives an example of an Indian health‑tech startup that uses its own proprietary ICU data to create specialized AI models, which are then commercialized internationally, demonstrating the value of domain‑specific data assets.
EVIDENCE
He describes Cloud Physician, an Indian startup that runs remote ICUs, has amassed extensive proprietary data over several years, built about a dozen specialized healthcare models, and is now selling those models in the U.S. market [319-327].
MAJOR DISCUSSION POINT
Domain data can fuel exportable AI models
Argument 6
Smart regulation is required to govern data sharing and protect national interests (Rajan Anandan)
EXPLANATION
Rajan calls for thoughtful regulatory frameworks around data to ensure that data sharing benefits the nation while safeguarding privacy and strategic interests.
EVIDENCE
He mentions the need for “smart regulation,” references AI for Bharat’s careful data-sharing policies, and stresses that a regulatory framework will be crucial for leveraging data responsibly [329-333].
MAJOR DISCUSSION POINT
Need for smart data regulation
AGREED WITH
Matthew Prince, Rahul Matthan
DISAGREED WITH
Matthew Prince, Rahul Matthan
Argument 7
India’s AI ecosystem now hosts more consumer AI startups than the US, backed by growing venture capital activity (Rajan Anandan)
EXPLANATION
Rajan asserts that India currently has a larger number of consumer‑focused AI startups than the United States, with strong venture‑capital backing, indicating a vibrant domestic AI scene.
EVIDENCE
He notes that India has “more consumer AI startups than the US,” cites a recent Pitchfest where his firm announced five seed AI investments (four consumer), and highlights 900 million internet users with high daily engagement as a market driver [479-488].
MAJOR DISCUSSION POINT
India leads in consumer AI startup activity
Argument 8
Major Indian conglomerates are committing billions to AI infrastructure, signaling strong domestic investment (Rajan Anandan)
EXPLANATION
Rajan highlights recent multi‑billion‑dollar commitments from Indian giants like Adani and Reliance to build AI infrastructure, underscoring significant domestic financial commitment to AI.
EVIDENCE
He references the announcement that Adani and Reliance each pledged $100 billion into AI infrastructure at the model layer, indicating substantial domestic investment [120-122].
MAJOR DISCUSSION POINT
Large Indian corporate AI infrastructure investments
Argument 9
India’s domestic large‑language‑model ecosystem is expanding rapidly, with a growing number of companies building Indic models, indicating a fast‑moving local AI landscape.
EXPLANATION
Rajan points out that dozens of Indian firms, including academic institutions, are already developing large language models in Indic languages, and the number of participants is expected to rise quickly.
EVIDENCE
He mentions that there are 12-15 companies (including IIT Bombay) building large language models and predicts the count will reach 15-20 very quickly [89-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports highlight dozens of Indian firms and academic institutions actively developing large language models in Indic languages, confirming rapid ecosystem growth [S23].
MAJOR DISCUSSION POINT
Rapid growth of indigenous LLM development
Argument 10
Achieving mass adoption of voice AI in India requires driving inference cost down to a few paisa per minute, far below current rates, highlighting the need for ultra‑low‑cost compute.
EXPLANATION
Rajan explains that while current voice AI costs about three rupees per minute, reaching a price of five to ten paisa per minute is essential to serve a billion‑plus users, and this challenge is rooted in compute and inference efficiency rather than open‑source availability.
EVIDENCE
He provides current cost figures (three rupees per minute) and the target cost (five to ten paisa), emphasizing that lowering inference cost is the key to scalability [239-245].
MAJOR DISCUSSION POINT
Cost reduction for scalable voice AI deployment
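The cost gap Rajan cites translates into a concrete reduction factor; a quick back-of-the-envelope check (both price points are from the talk; no usage assumptions added):

```python
# Required inference-cost reduction for mass-market voice AI in India.
# Figures from the talk: ~Rs 3/min today, 5-10 paisa/min target.

PAISA_PER_RUPEE = 100

current_paisa_per_min = 3 * PAISA_PER_RUPEE   # Rs 3/min = 300 paisa/min
target_band_paisa = (5, 10)                   # target band, paisa per minute

# Reduction factor needed to hit each end of the target band
factors = {t: current_paisa_per_min / t for t in target_band_paisa}
print(factors)  # → {5: 60.0, 10: 30.0}
```

That is, inference must become roughly 30-60x cheaper than today, which is why Rajan frames scalability as a compute-efficiency problem rather than an open-source availability problem.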
Argument 11
Future AI breakthroughs will likely come from new architectures beyond transformers, so investing in research on post‑transformer models is essential for staying competitive.
EXPLANATION
Rajan asserts that transformer‑based large language models are highly inefficient and represent only the beginning of AI development, predicting that the next wave of breakthroughs will involve alternative architectures.
EVIDENCE
He describes LLMs as “the most inefficient compute machines ever” and states that “we believe there will be many more [architectures] to come after transformers” [99-102].
MAJOR DISCUSSION POINT
Strategic focus on next‑generation AI architectures
R
Rahul Matthan
1 argument, 158 words per minute, 1775 words, 673 seconds
Argument 1
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
EXPLANATION
Rahul raises the issue that releasing models with open weights can enable malicious actors to fine‑tune them for harmful purposes, creating security risks that need to be addressed.
EVIDENCE
He notes that as models become more performant, open-weight releases increase the danger of “malicious fine-tuning,” making it easier to bypass guardrails, and that this is a fundamental security challenge for the ecosystem [170-175].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Security assessments warn that open-weight releases can be repurposed for malicious fine-tuning, increasing the risk of AI-enabled attacks [S18].
MAJOR DISCUSSION POINT
Security risks of open‑weight AI models
AGREED WITH
Matthew Prince, Rajan Anandan
DISAGREED WITH
Matthew Prince, Rajan Anandan
A
Audience
3 arguments, 196 words per minute, 177 words, 54 seconds
Argument 1
Trustworthiness may depend on explainability and deterministic behavior, prompting calls for clearer standards (Audience)
EXPLANATION
An audience member asks how AI can become trustworthy, questioning whether explainability and deterministic outcomes are necessary and calling for clearer standards.
EVIDENCE
The audience explicitly asks, “what would make AI trustworthy? Is it something to do with explainability, deterministic AI, and what would be the pathways?” [432-433].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Debates on AI trust emphasize the need for explainability, labeling, and deterministic outcomes as part of emerging standards for trustworthy AI [S20][S21][S22].
MAJOR DISCUSSION POINT
Defining AI trustworthiness standards
Argument 2
Concerns about how creators will receive attribution and payment when AI repurposes their work (Audience)
EXPLANATION
An audience member questions how AI companies will compensate and attribute content creators when AI systems use their work without direct payment.
EVIDENCE
The audience asks, “what makes you believe that AI companies would be equally invested in a creator-based compensation when AI creates the Internet and is not giving back attribution or compensation?” [440-442].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The intellectual-property discourse around AI-generated content underscores challenges of attribution and compensation for creators, aligning with calls for new creator-centric models [S19][S1].
MAJOR DISCUSSION POINT
Creator attribution and compensation in AI
DISAGREED WITH
Matthew Prince, Rajan Anandan
Argument 3
Questions arise on how India can match US‑level VC funding (e.g., Y Combinator, a16z) to scale these startups (Audience)
EXPLANATION
An audience member seeks insight into how India can achieve venture‑capital funding comparable to leading US accelerators and funds to support its AI startups.
EVIDENCE
The audience asks, “Where are we in terms of investment from a venture capital side point of view in terms of how can we match the Y Combinator and AI 16Z level in terms of investments?” [447-449].
MAJOR DISCUSSION POINT
Scaling Indian AI funding to US levels
A
Announcer
3 arguments, 147 words per minute, 266 words, 108 seconds
Argument 1
Matthew Prince and Rajan Anandan have been instrumental in delivering transformative technology to millions worldwide.
EXPLANATION
The announcer emphasizes that very few people have done as much as Matthew and Rajan to bring revolutionary and transformative technology into the hands of a massive global audience.
EVIDENCE
He explicitly states this claim in the opening line of the session, noting their outsized impact on technology diffusion [1].
MAJOR DISCUSSION POINT
Impact of individual leaders on technology democratization
Argument 2
Matthew Prince’s background as Cloudflare CEO and his extensive academic and entrepreneurial credentials position him as a key architect of a better internet.
EXPLANATION
The announcer lists Prince’s role as co‑founder and CEO of Cloudflare, his degrees from top universities, and his work on Project Honeypot, framing him as a leader in building a more secure and accessible internet.
EVIDENCE
These details appear in sentences describing his positions, education, and founding mission to help build a better Internet [2-4].
MAJOR DISCUSSION POINT
Leadership and expertise driving internet infrastructure
Argument 3
Rajan Anandan’s experience as a founder of Sequoia Capital India and his leadership in the Indian startup ecosystem make him a pivotal figure in shaping India’s digital future.
EXPLANATION
The announcer highlights Anandan’s decades of entrepreneurship, investing, and technology leadership, noting his role in founding Sequoia Capital India and influencing the country’s startup and digital landscape.
EVIDENCE
His influence is described through statements about his background, the founding of Sequoia Capital India, and his pivotal role in India’s startup ecosystem [5-7].
MAJOR DISCUSSION POINT
Influence of venture capital leadership on national digital development
Agreements
Agreement Points
Future reduction in AI hardware costs and commoditization of models will enable broader AI democratization
Speakers: Matthew Prince, Rajan Anandan
AI hardware dependence on NVIDIA GPUs makes AI expensive and inefficient (Matthew Prince)
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
Building a sovereign stack—including domestic chip design and GPU investments—will reduce reliance on foreign hardware (Rajan Anandan)
Achieving mass adoption of voice AI in India requires driving inference cost down to a few paisa per minute, far below current rates (Rajan Anandan)
Both speakers acknowledge that current AI hardware is a cost barrier but expect that chip prices will decline and domestic semiconductor efforts will create a sovereign stack, making AI models cheaper and more accessible (Matthew: [23-30][50-54][60-62]; Rajan: [107-119][239-245]).
POLICY CONTEXT (KNOWLEDGE BASE)
The trend of decreasing hardware costs and the push for smaller, sector-tailored models is highlighted in India’s AI strategy and global cost-reduction discussions, indicating a path toward broader democratization [S52][S58].
Open‑source / open‑weight models are essential for ecosystem health, yet their economic sustainability is uncertain
Speakers: Matthew Prince, Rajan Anandan
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
I tend to think that more open is going to win (Matthew Prince)
Open‑source is absolutely critical (Rajan Anandan)
The massive investment required to train large models makes fully open models economically unsustainable (Rajan Anandan)
Both agree that openness is critical for AI progress, but recognize that the huge training costs pose a challenge to keeping models fully open (Matthew: [176-207][198-204]; Rajan: [217-232]).
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the benefits versus economic challenges of open-source AI are documented in multiple analyses, noting both ecosystem value and sustainability concerns [S44][S45][S46][S56].
India should prioritize low‑cost, locally‑relevant AI models over pursuing massive AGI systems
Speakers: Rajan Anandan, Matthew Prince
India should prioritize low‑cost, smaller language models for local use rather than pursuing AGI (Rajan Anandan)
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
Rajan stresses focusing on affordable, high-performing Indic models, while Matthew predicts that model costs will drop dramatically, making such low-cost models feasible (Rajan: [75-82]; Matthew: [60-62]).
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs emphasize India’s focus on smaller, energy-efficient models rather than trillion-parameter AGI, aligning with strategic recommendations for a cost-effective AI roadmap [S52][S57].
Effective regulation of data and AI is needed to ensure fair competition and protect national interests
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
AI models need data; regulation may be used to level the playing field (Matthew Prince)
Smart regulation is required to govern data sharing and protect national interests (Rajan Anandan)
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
All three highlight the necessity of regulatory frameworks, whether to address data monopolies, ensure sovereign control, or mitigate security risks from open models (Matthew: [358-366]; Rajan: [329-333]; Rahul: [170-175]).
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for clear data ownership, multi-stakeholder governance, and balanced regulation to safeguard competition are reflected in industry and policy discussions [S49][S61][S62].
Similar Viewpoints
Both see openness as a strategic lever for competition and ecosystem health, while warning that commercial pressures may limit pure openness (Matthew: [176-207]; Rajan: [217-232]).
Speakers: Matthew Prince, Rajan Anandan
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
Open‑source is absolutely critical (Rajan Anandan)
Both anticipate a future where hardware costs decline, either through market dynamics or domestic chip development, facilitating cheaper AI deployment (Matthew: [50-54][60-62]; Rajan: [107-119]).
Speakers: Matthew Prince, Rajan Anandan
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
Building a sovereign stack—including domestic chip design and GPU investments—will reduce reliance on foreign hardware (Rajan Anandan)
Both acknowledge security risks associated with advanced AI, whether through malicious use or open‑weight exploitation (Matthew: [266-277]; Rahul: [170-175]).
Speakers: Matthew Prince, Rahul Matthan
AI will amplify phishing, social‑engineering, and cyber‑attack capabilities, creating short‑term security headlines (Matthew Prince)
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
Unexpected Consensus
Both speakers see a viable path for India to compete in AI despite current hardware constraints
Speakers: Matthew Prince, Rajan Anandan
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
Building a sovereign stack—including domestic chip design and GPU investments—will reduce reliance on foreign hardware (Rajan Anandan)
While Rajan emphasizes building a sovereign hardware stack to overcome dependence, Matthew predicts market-driven price drops will make frontier AI affordable for India, indicating an unexpected alignment that India can achieve competitiveness through both domestic policy and global market trends (Matthew: [60-62]; Rajan: [107-119]).
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses suggest India can remain competitive by leveraging cost-effective models and emerging semiconductor capabilities, despite hardware limitations [S52][S57][S58].
Overall Assessment

The discussion reveals substantial convergence on three fronts: (1) the expectation that AI hardware costs will fall, enabling cheaper, locally‑relevant models; (2) the shared belief that open‑source is vital but faces economic limits; (3) the consensus that regulation—both of data and AI safety—is essential. These agreements suggest a common strategic direction toward democratizing AI through cost reductions, open ecosystems, and thoughtful policy, especially for emerging markets like India.

High consensus on the need for cheaper hardware, open‑source importance, and regulatory frameworks, implying coordinated efforts among industry leaders, investors, and policymakers could accelerate inclusive AI deployment.

Differences
Different Viewpoints
Timeline and cost to achieve frontier AI models
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
India should prioritize low‑cost, smaller language models for local use rather than pursuing AGI (Rajan Anandan)
Matthew predicts that within five years frontier-level specialized models can be built for $10 million as chip prices fall and AI becomes a commodity [60-62][50-54]; Rajan claims India can launch high-performing, low-cost models within the year to meet local needs, emphasizing affordable Indic models [91-92][75-82]; Rahul pushes for an even more aggressive schedule, suggesting the timeline could be shortened further [66-68].
POLICY CONTEXT (KNOWLEDGE BASE)
Uncertainties around the timeline and expense of frontier AI are highlighted in governance reports that outline technical challenges and safety verification needs [S53][S55].
Openness of AI models versus economic feasibility of fully open‑source large models
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
Open‑source is essential for the ecosystem, but the massive investment required to train models makes fully open models economically unsustainable (Rajan Anandan)
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
Matthew argues that the AI-doom story is a strategic tool for regulatory capture and that increased openness will ultimately win competition [176-207]; Rajan acknowledges open-source importance but warns that billions of dollars needed for training make fully open models financially untenable, calling for a different economic path [217-232]; Rahul highlights that open-weight releases create security risks by enabling malicious fine-tuning, underscoring a trade-off between openness and safety [170-175].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between open-source benefits and commercial viability is explored in several sources discussing market shifts and security implications of open models [S44][S45][S46][S56].
Sovereign AI hardware stack versus reliance on market‑driven price reductions
Speakers: Rajan Anandan, Matthew Prince
Building a sovereign stack—including domestic chip design and GPU investments—will reduce reliance on foreign hardware (Rajan Anandan)
AI hardware dependence on NVIDIA GPUs makes AI expensive and inefficient (Matthew Prince)
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
Rajan pushes for a sovereign stack, investing in Indian semiconductor and GPU startups to lessen dependence on foreign suppliers [107-119]; Matthew points out current AI’s heavy reliance on NVIDIA GPUs, which are costly and power-hungry, but expects competition and silicon gluts to drive down prices, making models cheaper [23-30][50-54].
POLICY CONTEXT (KNOWLEDGE BASE)
Strategic guidance recommends focusing sovereign resources on critical control points while acknowledging the role of market-driven price reductions in hardware availability [S51][S52][S58].
Purpose and approach of regulation in AI
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
Smart regulation is required to govern data sharing and protect national interests (Rajan Anandan)
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
Matthew sees the AI-doom narrative as a strategy for regulatory capture that would limit competition, arguing that openness is the better path [176-207]; Rajan advocates for smart regulation to manage data sharing and safeguard national interests, emphasizing the need for clear frameworks [329-333]; Rahul stresses that open-weight models pose security threats, suggesting regulation may be needed to mitigate malicious use [170-175].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on regulatory purpose-balancing innovation with safeguards-are reflected in multistakeholder standards work and industry calls for clear, proportionate rules [S59][S61][S62].
Data ownership, creator attribution and compensation
Speakers: Matthew Prince, Rajan Anandan, Audience
Creating scarcity (e.g., licensing agreements) can force AI firms to pay for copyrighted corpora (Matthew Prince)
Most Indian data currently flows to a few global data firms; India needs home‑grown data‑collection startups to retain value locally (Rajan Anandan)
Concerns about how creators will receive attribution and payment when AI repurposes their work (Audience)
Matthew proposes using scarcity, such as licensing blocks, to compel AI companies to pay for copyrighted corpora, citing Reddit's licensing deal, which yielded higher payments than the New York Times received [474-477]; Rajan notes that most Indian-generated data is captured by foreign firms and calls for domestic data-collection startups to keep value within the country [313-319]; an audience member questions how creators will be attributed and compensated when AI systems use their content without direct payment [440-442].
POLICY CONTEXT (KNOWLEDGE BASE)
Issues of data ownership and fair compensation for creators are highlighted in discussions on worker compensation and data governance frameworks [S48][S49][S50].
Unexpected Differences
Whether India needs to pursue AGI
Speakers: Rajan Anandan, Matthew Prince
India should prioritize low‑cost, smaller language models for local use rather than pursuing AGI (Rajan Anandan)
And I would just say, don’t sell yourself short. Like you may not, India may not need AGI, but India may still build AGI. (Matthew Prince)
Rajan explicitly states that India does not need AGI and should focus on affordable local models, while Matthew counters that India could still build AGI and that constraints can spur innovation, a contrast not anticipated given their shared interest in AI development [75-78][139-141].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses suggest India should prioritize applied, low-cost AI solutions rather than a direct pursuit of AGI, aligning with strategic recommendations [S52][S57].
Feasibility of fully open‑source large models
Speakers: Rajan Anandan, Matthew Prince
Open‑source is essential for the ecosystem, but the massive investment required to train models makes fully open models economically unsustainable (Rajan Anandan)
The “AI doom” narrative is used to capture regulation; greater openness will ultimately benefit competition (Matthew Prince)
Both champion openness, yet Rajan warns that the scale of investment makes fully open models financially impossible, whereas Matthew believes openness will ultimately win and sees the AI-doom narrative as a barrier, revealing an unexpected split on the practicality of open-source at scale [217-232][176-207].
POLICY CONTEXT (KNOWLEDGE BASE)
Feasibility concerns for fully open-source large models are raised in analyses of economic sustainability and security risks associated with open releases [S44][S45][S46][S56].
Overall Assessment

The discussion reveals several substantive disagreements: the timeline and economic path to affordable frontier AI, the role and sustainability of open‑source models, the need for a sovereign hardware stack versus reliance on market price declines, divergent views on the purpose and design of regulation, and contrasting positions on data ownership and creator compensation. While participants share the overarching goal of democratizing AI and enhancing security, they diverge sharply on how to achieve these outcomes.

High – The speakers often articulate opposing strategies (e.g., market‑driven commoditization vs sovereign stack, openness vs economic feasibility), indicating that consensus on policy and investment directions is limited. This fragmentation could slow coordinated action on AI democratization, regulation, and data governance, requiring further dialogue to align on shared objectives.

Partial Agreements
All three agree that AI should become more accessible; Matthew envisions market‑driven price drops making frontier models affordable, Rajan focuses on affordable Indic models to serve 1.4 billion people, and Rahul asks what infrastructure construct would democratize AI [21-22][60-62][75-82].
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
Chip prices will fall and AI models will become commoditized, enabling frontier models for $10 M in five years (Matthew Prince)
India should prioritize low‑cost, smaller language models for local use rather than pursuing AGI (Rajan Anandan)
And what is your idea, your vision for how this would be, if not now, but sometime soon? (Rahul Matthan)
All agree that AI introduces new security challenges and that safeguards are needed; Matthew highlights AI‑driven threat detection improving defenses, Rajan calls for smart regulation of data, and Rahul points out security risks of open‑weight models [278-283][329-333][170-175].
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
AI‑driven threat detection can make networks more secure than human‑only defenses (Matthew Prince)
Smart regulation is required to govern data sharing and protect national interests (Rajan Anandan)
Open‑weight models raise security concerns such as malicious fine‑tuning and misuse (Rahul Matthan)
Takeaways
Key takeaways
AI today is expensive because it relies on a narrow hardware base (NVIDIA GPUs) and a small pool of specialized talent; both hardware costs and talent scarcity are expected to improve over time.
Model development is moving toward commoditization; frontier‑level models could be built for $10 M within five years, making them accessible to more organizations.
India’s strategy should focus on low‑cost, smaller (1‑200 B parameter) models optimized for local languages and use‑cases rather than pursuing AGI.
Building a sovereign AI stack—including domestic chip design, GPU and memory investments, and strategic alliances—is seen as essential for reducing dependence on foreign hardware.
Open‑source / open‑weights are critical for ecosystem health, but the massive training costs make fully open models economically unsustainable; a balance is needed.
The “AI doom” narrative may be used to capture regulation; greater openness could foster competition, but security concerns (malicious fine‑tuning) remain.
Most Indian data currently flows to a few global firms; creating home‑grown data‑collection and domain‑specific data businesses is necessary to retain value locally.
AI will both amplify cyber‑threats (e.g., sophisticated phishing) and improve defensive capabilities through AI‑driven threat detection.
Traditional internet monetization (traffic‑based ads/subscriptions) will be disrupted by AI; a new model that directly compensates content creators is required.
India’s AI ecosystem is experiencing rapid VC activity and large corporate investments, positioning it to lead in consumer‑focused AI applications.
Resolutions and action items
Rajan announced investments in two Indian hardware startups, Agrani (a GPU company) and C2I (a memory company), to advance the sovereign stack.
Rajan indicated that within the year India will ship many more low‑cost, high‑performance language models for local applications (e.g., farmer tools, voice AI).
Matthew suggested that regulators should consider leveling the data‑crawling playing field (e.g., requiring Google to share index data) to promote fairness.
Both speakers agreed on the need for smart regulation of data and AI, though specific policies were not defined.
Unresolved issues
How to sustain open‑weight models financially while preventing malicious fine‑tuning and other security risks.
Concrete mechanisms for compensating and attributing content creators when AI repurposes their work.
Specific regulatory frameworks for data collection, sharing, and sovereign AI stacks in India.
Exact timeline and roadmap for achieving the $10 M frontier model target and for broader AI democratization.
How India can match US‑level venture‑capital funding (e.g., Y Combinator, a16z) to scale its AI startups.
Details on the future internet business model that will replace traffic‑based monetization.
Suggested compromises
Create a degree of scarcity (e.g., licensing agreements) to force AI firms to pay for copyrighted corpora while still allowing broader access to data.
Focus on low‑cost, domain‑specific models for Indian needs rather than competing directly on trillion‑parameter AGI models.
Encourage openness in AI research and tools while accepting that the most advanced models may remain partially closed due to investment recovery needs.
Pursue a sovereign hardware stack but maintain strategic alliances with global partners to avoid isolation.
Thought Provoking Comments
AI requires lots and lots of chips, largely produced today by one manufacturer, NVIDIA, which were never built for AI workloads. This hardware monopoly makes AI very expensive and hard to democratize.
He pinpoints the fundamental hardware bottleneck that underlies the cost and accessibility challenges of AI, moving the conversation from abstract policy to a concrete technical constraint.
His observation reframed the discussion to focus on supply‑side hardware issues, prompting both Rahul and Rajan to consider how chip diversification and sovereign stacks could address democratization.
Speaker: Matthew Prince
In five years, you’ll be able to build a frontier‑like model within a specialty for $10 million or less.
Provides a bold, data‑driven forecast that challenges the assumption that AI will remain prohibitively expensive, suggesting a rapid cost decline.
This prediction set a timeline that both Rahul and Rajan used to benchmark India’s progress, shifting the tone from pessimistic to optimistic about near‑term feasibility.
Speaker: Matthew Prince
India is not trying to get to AGI. With 1.4 billion people we need highly performant, ultra‑low‑cost models of a few hundred billion parameters, not trillion‑parameter AGI. We already have 30‑100 billion‑parameter models that are state‑of‑the‑art for Indic languages.
He reframes the AI race from a global AGI competition to a localized, purpose‑driven strategy, emphasizing scale, cost, and language relevance over raw parameter counts.
Rajan’s comment redirected the conversation toward practical, region‑specific solutions, prompting Matthew to discuss constraints‑driven innovation and leading Rahul to probe open‑source and data issues.
Speaker: Rajan Anandan
Constraints can be a catalyst for breakthrough innovation – DeepSeek’s efficient pruning algorithm shows that limited compute can produce superior models, something big, well‑funded companies might overlook.
He challenges the notion that more money and bigger models are the only path forward, highlighting how scarcity can drive creative technical solutions.
This insight encouraged the panel to view India’s resource constraints as potential advantages, influencing Rajan’s optimism about Indian startups and sparking discussion on efficiency versus scale.
Speaker: Matthew Prince
The AI‑doom narrative is a strategic move to capture regulation; companies scare everyone to keep competitors out and protect their lead. More openness will ultimately win.
He critically examines the motives behind AI risk rhetoric, suggesting it may serve corporate interests rather than public safety, and advocates for openness.
This comment shifted the debate from pure safety concerns to the politics of regulation, prompting Rajan to acknowledge the economic realities of open‑source and leading Rahul to explore the balance between openness and security.
Speaker: Matthew Prince
Google indexes far more of the web than any other AI company – for every page Google sees, Microsoft sees one, OpenAI sees one, Anthropic sees one in ten. This data monopoly gives them a huge advantage and must be addressed through regulation or equal access.
He uncovers a hidden competitive edge rooted in data access, expanding the conversation beyond hardware to the importance of web crawling dominance.
The point introduced a new topic about data equity, causing the panel to discuss potential regulatory interventions and the need for a level playing field, which Rajan linked to sovereign data strategies.
Speaker: Matthew Prince
AI will fundamentally disrupt the internet’s business model that relies on traffic and ads. We need a new model that compensates creators for knowledge, similar to how the music industry evolved from piracy to streaming royalties.
He connects AI’s impact to broader economic structures, using the music industry analogy to illustrate how new value capture mechanisms can emerge.
This macro‑level insight broadened the scope of the discussion, leading to audience questions about creator compensation and prompting Matthew to elaborate on scarcity‑driven licensing deals.
Speaker: Matthew Prince
AI is already more trustworthy than most humans – for example, self‑driving cars are statistically safer than 99.99% of human drivers. Trust should be measured against human performance, not against idealized perfection.
He reframes the trust debate by providing empirical evidence that AI can outperform humans, challenging the prevailing fear‑based narrative.
This comment shifted the tone from caution to confidence, influencing the audience’s follow‑up questions on trustworthiness and prompting Rajan to highlight the rapid growth of consumer AI startups in India.
Speaker: Matthew Prince
Overall Assessment

The discussion was steered by a series of pivotal remarks that moved it from abstract concerns about AI monopolies to concrete strategies for democratization. Matthew Prince’s technical and strategic insights about hardware bottlenecks, cost trajectories, data monopolies, and the political use of AI risk reframed the conversation around tangible levers for change. Rajan Anandan’s counter‑point—focusing on India’s unique needs, low‑cost models, sovereign stacks, and a thriving consumer AI ecosystem—shifted the dialogue from a global, US‑centric view to a regional, application‑driven perspective. Together, these comments opened new sub‑topics (efficiency‑driven innovation, open‑source vs security, new internet business models, and trustworthiness) and prompted the participants and audience to explore regulatory, economic, and societal implications, ultimately shaping a nuanced, forward‑looking debate on how AI can be made accessible, responsible, and beneficial at scale.

Follow-up Questions
How can we keep AI models open‑weight while mitigating security risks such as malicious fine‑tuning?
Balancing openness for innovation with safety is critical for responsible AI deployment and for preventing misuse of powerful models.
Speaker: Rahul Matthan
What is the importance of open‑source/open‑weight models for the AI ecosystem, and how can we sustain openness given commercial pressures?
Open models drive community innovation, but large‑scale funding models tend to close them; understanding how to preserve openness is essential for a democratized AI future.
Speaker: Rahul Matthan
What evidence supports the claim that AI will accelerate cyber‑attacks?
Concrete evidence is needed to shape security policies, industry defenses, and regulatory responses to emerging AI‑enabled threats.
Speaker: Rahul Matthan (to Matthew Prince)
What is the business model for Indian data‑collection companies — are they feeding data back to US AI firms, or negotiating different terms?
Clarifying data flows and monetisation models is vital for data sovereignty, economic benefit for India, and fair compensation for local data assets.
Speaker: Rahul Matthan (to Rajan Anandan)
What is the idea behind "pay‑to‑crawl" or an AI audit as a mechanism for democratizing AI?
Access to web data underpins AI training; a transparent crawl‑payment or audit system could level the playing field between dominant search engines and other AI developers.
Speaker: Rahul Matthan (to Matthew Prince)
How can AI be made more trustworthy — through explainability, deterministic behaviour, or other pathways?
Trustworthiness is a prerequisite for widespread adoption, regulatory approval, and user confidence in AI systems.
Speaker: Audience (directed to Matthew Prince)
How can we ensure creator‑based compensation and attribution when AI consumes internet content without giving credit?
Fair remuneration for content creators addresses ethical, legal, and economic concerns as AI models increasingly rely on existing media.
Speaker: Audience (directed to Matthew Prince)
Where does India stand in venture‑capital investment for consumer AI compared with US benchmarks (e.g., Y Combinator, a16z), and how can we close the gap?
Adequate funding is essential for Indian startups to compete globally and to scale innovative consumer AI solutions.
Speaker: Audience (directed to Rajan Anandan)
How can inference costs for voice AI be reduced to a few paise per minute so that it becomes affordable for billions of Indians?
High inference costs limit adoption; lowering them is key to achieving mass‑scale AI‑driven services in India.
Speaker: Rahul Matthan (to Rajan Anandan)
What steps are needed for India to build a sovereign AI stack (chips, compute, data) and reduce dependence on foreign technology?
A sovereign stack enhances national security, economic independence, and the ability to tailor AI solutions to local needs.
Speaker: Rajan Anandan
What new internet business model will emerge to compensate content creators in an AI‑driven world where traditional traffic‑based monetisation erodes?
Understanding the next revenue paradigm is crucial for sustaining media, journalism, and creative industries as AI changes content consumption.
Speaker: Matthew Prince
How does Google’s indexing advantage affect AI competition, and what regulatory or technical measures could level the field?
If a few firms control the majority of web data, they gain an outsized AI advantage; addressing this imbalance is important for fair competition.
Speaker: Matthew Prince
Should AI be regulated similarly to nuclear technology (e.g., an IAEA‑style body), and what would be the implications?
Exploring a high‑level regulatory framework could help manage existential risks while enabling safe development.
Speaker: Rahul Matthan (referencing earlier remarks)
What further innovation is needed in data collection and data‑as‑a‑service companies in India to leverage the country's data advantage?
Developing a robust data industry can fuel domain‑specific AI models and reduce reliance on external data providers.
Speaker: Rajan Anandan
What smart regulatory approaches are required for data usage, sharing, and ownership in the Indian AI ecosystem?
Effective regulation can protect privacy, encourage innovation, and ensure that data benefits the Indian economy.
Speaker: Rajan Anandan
What research is needed into post‑transformer AI architectures that could be more efficient than current models?
Current large language models are compute‑inefficient; new architectures could lower costs and broaden accessibility.
Speaker: Rajan Anandan
What research is needed to lower compute and inference costs (e.g., memory, chip design) for AI workloads in India?
Cost reductions are essential for scaling AI services to a massive user base and for maintaining competitiveness.
Speaker: Rajan Anandan
What research is needed to understand the impact of AI on internet traffic patterns and the viability of existing ad‑based revenue models?
AI changes how content is accessed; studying these shifts will inform new sustainable business models for the web.
Speaker: Matthew Prince
What security implications arise from open‑weight models, and how can they be mitigated?
Open models can be repurposed for malicious ends; identifying safeguards is vital for safe open‑source AI development.
Speaker: Rahul Matthan
How can AI be leveraged defensively to stay ahead of cyber threats, and what research is required to optimise this?
Using AI for proactive security can counteract AI‑enabled attacks; research is needed to maximise effectiveness and reduce false positives.
Speaker: Matthew Prince

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.