Skilling and Education in AI

Session at a glance: summary, key points, and speakers overview

Summary

This discussion focused on leveraging artificial intelligence as a tool for development and equality in India, examining both opportunities and challenges in AI adoption across various sectors. The conversation began with identifying four key areas where AI can make significant impact: agriculture, small businesses, education, and healthcare, with particular emphasis on how AI could help smallholder farmers reduce crop losses from 40-50% to 20-30% through pest identification and localized solutions.


A central theme emerged around the need to build “trust infrastructure” alongside digital infrastructure, as people will only adopt AI technologies they can understand and trust. The speakers highlighted India’s unique advantages, including high digital trust levels (70% compared to 25-30% in the US), widespread connectivity, and successful digital public infrastructure like UPI. However, they acknowledged that AI will inevitably create inequalities, as algorithms reflect historical biases and access to advanced AI tools remains uneven globally.


The discussion revealed comprehensive skilling initiatives already underway, including AI awareness programs, vocational training integration, and stackable micro-credentials that can adapt to rapidly changing skill requirements. Speakers emphasized moving from traditional digital literacy to “work literacy” with bite-sized, consumable content that reduces friction to learning. They also addressed the challenge of preparing workers for environments where they collaborate with physical AI and autonomous systems.


Infrastructure development emerged as crucial, with investments in data centers, subsea cables, and compute capacity to reduce dependence on external resources. The panelists concluded by identifying key priorities for 2030: building trust infrastructure, providing every Indian with an AI assistant, embedding ethics in AI education, ensuring affordable compute access, and creating economic models for AI diffusion to make artificial intelligence truly serve as an equalizer in Indian society.


Key points

Major Discussion Points:

AI Applications for Development: The discussion highlighted four key sectors where AI can drive significant impact in India – agriculture (helping smallholder farmers reduce crop losses from 40-50% to 20-30%), small businesses (enabling one-person operations through AI assistance), education and skill building, and healthcare.


Trust Infrastructure as Critical Challenge: A major theme was the need to build “trust infrastructure” alongside digital infrastructure. Speakers emphasized that people will only adopt AI if they understand and trust it, addressing concerns about algorithmic transparency, data usage, and decision-making processes.


AI as a Force for Inequality: The discussion acknowledged that AI will likely increase inequality through various mechanisms – algorithms reflecting historical biases, unequal access to tools and resources, geographic disparities, and concentration of AI development in few companies/countries.


Skilling and Workforce Transformation: Extensive discussion on preparing India’s workforce for AI integration, including creating stackable micro-credentials, embedding AI modules in existing training programs, and developing AI literacy from basic awareness to advanced engineering skills.


India’s Unique Advantages and Infrastructure Needs: The conversation highlighted India’s strong starting position with high digital trust levels (70% vs 25-30% in the US), robust digital public infrastructure, and the opportunity to create AI solutions specifically for Indian contexts and languages.


Overall Purpose:

The discussion aimed to explore how India can leverage AI as an equalizer rather than a divider, focusing on practical strategies for implementation across key sectors while addressing challenges around trust, inequality, and workforce preparation.


Overall Tone:

The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for India’s development and the significant challenges that must be addressed. The conversation maintained a balanced, pragmatic approach – neither dismissing concerns nor dampening enthusiasm – while emphasizing the need for thoughtful, human-centered implementation strategies.


Speakers

Speakers from the provided list:


Speaker 1: Professor (specific title/role mentioned in transcript), expert in AI applications, agriculture, and trust infrastructure


Speaker 2: CEO of NSDC (National Skill Development Corporation), expert in workforce skilling and AI training programs


Neena Pahuja: Former executive member of NCBT (National Council for Vocational Training), expert in certification standards and AI skilling frameworks


Rakesh Kaul: Expert in digital infrastructure and AI accessibility


Speaker 3: Representative from a technology company involved in AI infrastructure development, data centers, and connectivity solutions


Moderator: Discussion facilitator


Additional speakers:


Arunji: Mentioned by the Moderator as CEO of NSDC, appears to be the same person as Speaker 2


Neenaji: Mentioned by the Moderator, appears to be referring to Neena Pahuja


Anandji: Mentioned by the Moderator in reference to skilling programs, but does not appear to speak directly in the transcript


Full session report: comprehensive analysis and detailed insights

This comprehensive discussion examined how India can leverage artificial intelligence as a transformative force for development and equality, bringing together perspectives from academia, government agencies, and industry to address both the immense opportunities and significant challenges facing AI adoption across Indian society.


AI’s Transformative Potential Across Key Sectors

The conversation began with a Professor’s detailed analysis of four critical sectors where AI can drive substantial impact in India. Agriculture emerged as the highest-priority area, not only because it employs the largest workforce but also due to its significant productivity gaps. The Professor highlighted how smallholder farmers in the global south typically lose 40-50% of their crops to pests, representing a massive economic opportunity. AI-powered pest identification systems that provide solutions in local languages using locally available ingredients could reduce these losses to 30% or 20%, creating substantial income improvements for farmers. This represents a compelling use case where the human incentive for adoption is clear and immediate.
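The pest-to-remedy flow described above can be pictured as a simple lookup: an identified pest maps to a remedy phrased in the farmer's own language, with a sensible fallback. The sketch below is purely illustrative and does not depict any system discussed in the session; the pest names, remedies, and language codes are all invented for the example.

```python
# Illustrative sketch only: a toy pest-advisory lookup. All pest names,
# remedies, and language codes are invented; a real system would pair an
# image-based pest classifier with a curated, localized remedy database.

PEST_ADVISORIES = {
    "aphid": {
        "hi": "neem ke tel ka chhidkav karein",  # neem-oil spray (Hindi, transliterated)
        "en": "Spray a neem-oil solution on affected leaves",
    },
    "stem_borer": {
        "en": "Remove and destroy affected stems; flood the field briefly",
    },
}

def advise(pest: str, language: str) -> str:
    """Return a remedy for `pest` in `language`, falling back to English."""
    remedies = PEST_ADVISORIES.get(pest)
    if remedies is None:
        return "Unknown pest: please consult a local extension officer"
    return remedies.get(language, remedies["en"])

print(advise("aphid", "en"))
```

The design choice worth noting is the language fallback: localized content is the goal, but a remedy in a default language is better than no answer at all, which matters for adoption among smallholder farmers.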


Small businesses were identified as the second major opportunity, where AI can enable entrepreneurs to operate as one-person enterprises by providing capabilities traditionally requiring multiple employees. AI can handle market research, analysis, and various business functions, democratising access to sophisticated business tools. Education and skill building, along with healthcare, rounded out the four priority sectors, each offering unique opportunities for AI to address systemic challenges and improve outcomes for millions of Indians.


The Critical Challenge of Trust Infrastructure

A central theme that emerged was the Professor’s concept of “trust infrastructure” as perhaps the most significant barrier to AI adoption. Unlike previous technology rollouts where access, connectivity, or device availability were primary constraints, the AI revolution faces a fundamentally different challenge. India has successfully built robust digital public infrastructure and has achieved widespread connectivity and device penetration. However, the opacity of AI systems creates a unique trust gap that must be bridged.


The discussion revealed that users need to understand what happens inside AI “black boxes” to feel comfortable accepting AI-generated decisions and outputs. This trust challenge manifests in multiple ways: concerns about hiring algorithms and their fairness, questions about medical diagnoses provided by AI systems, uncertainty about language translation accuracy, and scepticism about AI-generated images and content on social media. Additionally, users are increasingly aware that their interactions with AI systems generate data that becomes input for further AI development, raising questions about data ownership, usage, and potential misuse.


The Professor noted India’s unique advantage in this context through comparative trust statistics: digital trust levels in India stand at approximately 70%, compared to just 25-30% in the United States. This “trust dividend” represents a significant competitive advantage that could accelerate AI adoption if properly leveraged, but it also creates a responsibility not to squander this societal asset through poor implementation or broken promises.


Acknowledging AI as a Force for Inequality

The Professor took a notably realistic turn in acknowledging that AI will inevitably create new forms of inequality, despite its potential benefits. However, he emphasized that “none of this means we should stop the train” regarding AI development. This inequality stems from several structural factors that cannot be easily addressed through policy alone. Algorithms trained on historical data will inevitably reflect and potentially amplify past inequalities, as data serves as “a mirror to our past,” and historical patterns have not been equitable.


Access inequality will manifest in multiple dimensions: differential access to advanced AI tools, geographic disparities between urban and rural areas, and varying quality of AI services across different socioeconomic segments. The global concentration of AI development in the United States and China creates additional dependency concerns, with much of China’s AI infrastructure built on US foundations, resulting in a remarkably small number of companies and individuals controlling the foundational technologies upon which global AI systems depend.


Resource consumption presents another inequality dimension, as AI systems require substantial energy, water, space, and environmental resources. This creates potential conflicts between AI development and environmental sustainability, with resource-intensive AI potentially exacerbating existing environmental inequalities.


Comprehensive Skilling and Workforce Transformation Strategies

Arun from the National Skill Development Corporation (NSDC) outlined extensive ongoing efforts to prepare India’s workforce for the AI era through innovative skilling approaches. NSDC has developed a four-pronged strategy addressing career trajectory guidance, AI-specific skilling programmes, AI-enhanced training delivery, and AI-powered programme monitoring and evaluation.


A sophisticated three-tier framework has emerged for AI education: “AI for all” focusing on basic awareness and usage skills, “AI for practitioners” addressing workplace integration and role transformation, and “AI for engineers” providing deep technical expertise. This framework recognises that different segments of the workforce require different levels of AI literacy and engagement.


Arun highlighted that NSDC works with 36 sector skill councils and 400 training partners, and noted India’s demographic dividend of adding “a million plus to workforce every year.” Uptake has already been significant, with over 200,000 people registered for the basic AI courses launched in July. The concept of stackable micro-credentials and nano-credentials represents an innovative response to the rapid obsolescence challenge in AI skills, allowing learners to continuously update their skills by adding new components as requirements evolve.
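To make the “stackable” idea concrete, here is a hypothetical sketch of how short credentials might accumulate toward a larger qualification. The class names, credential names, and credit thresholds are all invented for illustration and do not reflect NSDC's actual credential model.

```python
# Hypothetical sketch of stackable micro-credentials: each short course
# earns a credential carrying credit points, and credits accumulate toward
# a larger qualification. All names and thresholds here are invented.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class MicroCredential:
    name: str
    credits: int

@dataclass
class LearnerRecord:
    earned: list = field(default_factory=list)

    def add(self, cred: MicroCredential) -> None:
        self.earned.append(cred)

    def total_credits(self) -> int:
        return sum(c.credits for c in self.earned)

    def qualifies_for(self, qualification_credits: int) -> bool:
        # The learner "stacks" toward a qualification once enough credits accrue.
        return self.total_credits() >= qualification_credits

record = LearnerRecord()
record.add(MicroCredential("AI awareness", 2))
record.add(MicroCredential("Prompt basics", 3))
print(record.total_credits())
```

The key property of this model is that new micro-credentials can be added at any time without restructuring what a learner has already earned, which is what lets the credential catalogue keep pace with rapidly changing skill requirements.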


Practical implementation includes embedding AI modules into existing vocational training programmes, with every ITI student now receiving 7.5-hour AI awareness modules. The approach extends beyond theoretical knowledge to practical applications, such as teaching beauticians to use virtual try-on systems, helping tailors implement AI-powered design tools, and enabling workers to use AI for quality assessment, including determining “what’s a good weld or what’s a bad weld.”


Neena Pahuja from the National Council for Vocational Training (NCBT) emphasized the SWOT initiative and the importance of creating AI applications “made in our languages, made for our specific use.” Complementing this, Arun described the Future Skills Centers, which work with around 10,000 students in partnership with major technology companies including Microsoft, Google, Amazon, Schneider, and Siemens.


Infrastructure Development and Technological Sovereignty

A technology company representative highlighted the critical importance of building comprehensive AI infrastructure within India rather than relying on external resources. This full-stack approach encompasses foundational compute infrastructure, connectivity systems, and last-mile application delivery. Significant investments are being made in AI data centres, including a major facility in Visakhapatnam, coupled with direct subsea cable connections to reduce dependency on routing through other countries.


The infrastructure discussion emphasised the importance of connecting solutions across entire value chains rather than creating isolated AI applications. In education, this means integrating AI across learning, administration, and workforce preparation. In agriculture, it involves connecting farmers from seed selection through market access, providing integrated information about weather patterns, planting schedules, harvest timing, market conditions, and financial support.


Access to affordable compute emerged as a critical success factor, with recognition that India’s AI ambitions require domestic computational resources that can serve the scale of India’s population and economic needs. This infrastructure development is coupled with efforts to create economic models for AI diffusion, ensuring that advanced AI capabilities can reach beyond well-funded enterprises to small businesses and individual users.


Innovative Approaches to AI Education and Adoption

The discussion revealed sophisticated thinking about how to make AI education accessible and effective for India’s diverse population. A key insight was the need to move from traditional “digital literacy” to “work literacy,” recognising that modern learners prefer consumable content delivered in short, focused segments rather than lengthy traditional courses.


This approach acknowledges changing attention spans and consumption patterns, with users increasingly accustomed to one- or two-minute content segments. Educational content must be designed for “anytime, anywhere, any media, any duration” consumption, removing friction from the learning process. The strategy involves creating engaging short-form content that can capture interest and then connecting interested learners to more comprehensive resources and hands-on facilities.


Arun mentioned the SID platform for large-scale program management, enabling NSDC to monitor and evaluate training programs across their extensive network. These programmes are embedded within credit-based systems, allowing students to build AI expertise progressively throughout their academic careers.


Addressing Workforce Transition and Human-AI Collaboration

A particularly forward-looking aspect of the discussion addressed the psychological and practical challenges of preparing workers for environments where they collaborate directly with AI systems and autonomous technologies. This goes beyond traditional retraining to address fundamental mindset shifts required when working alongside robotic systems, AI agents, and in automated manufacturing environments.


The discussion acknowledged that this transition will be challenging even for educated workers, requiring not just technical training but psychological preparation for new forms of human-machine collaboration. This includes understanding how to work effectively with AI assistants, how to maintain human oversight and control in automated environments, and how to adapt to rapidly changing role definitions as AI capabilities expand.


Rapid-Fire Vision for Decisive Action by 2030

The discussion concluded with rapid-fire responses from each speaker about the most decisive actions needed for India’s AI success by 2030:


Professor: Emphasized the critical importance of building trust infrastructure, focusing on making humans comfortable with AI “black boxes” and ensuring transparency in AI decision-making processes.


Arun (NSDC): Articulated the ambitious vision that “every Indian should have access to an AI assistant” within three years, leveraging India’s existing digital infrastructure and high trust levels.


Neena (NCBT): Stressed that “ethics and values should be part of every AI course taught in India,” emphasizing the need to create a generation of AI developers and users who prioritise responsible development and deployment.


Rakesh: Highlighted that “access to affordable compute is crucial” for India’s AI success, emphasizing the infrastructure foundation needed to support widespread AI adoption.


Technology Company Representative: Focused on the need for “economic models for AI diffusion,” ensuring that AI capabilities can reach beyond well-funded enterprises to small businesses and individual users, building on significant compute infrastructure investments.


Strategic Priorities and Path Forward

The discussion revealed a sophisticated understanding of both AI’s transformative potential and its inherent challenges. The consensus emerged around several strategic priorities: building trust infrastructure that makes AI systems transparent and accountable to users, ensuring ethical AI development through education and policy frameworks, creating comprehensive domestic AI infrastructure to reduce external dependencies, and developing innovative educational approaches that can keep pace with rapid technological change.


The conversation demonstrated that India’s approach to AI adoption is distinctly human-centred, prioritising purpose over power and focusing on practical applications that address real societal challenges. The recognition of AI as both an opportunity and a potential source of inequality has led to proactive strategies for inclusive development, leveraging India’s unique advantages in digital trust and demographic dividend while addressing the fundamental challenges of workforce transition and technological sovereignty.


The path forward requires coordinated action across government, industry, and educational institutions, with success measured not just by technological advancement but by AI’s contribution to reducing inequality and improving lives across all segments of Indian society. The discussion provided a roadmap for making AI serve as an equaliser rather than a divider, grounded in practical experience and realistic assessment of both opportunities and challenges ahead.


Session transcript: complete transcript of the session
Speaker 1

In two significant areas, one is in agriculture, which is the highest employer, biggest employer anywhere. It’s also one of the least productive of sectors that we have anywhere. And that productivity gap in agriculture, if we can narrow it even by a small percentage, you will move the needle by a significant amount. And AI can do that. Just think: much of agricultural output in the global south comes from smallholder farmers who lose 40% to 50% of their crop because of pests. Now, if a farmer can identify what the pest is and use a homemade remedy that is given to them in their own language and using local ingredients, if I can move that 40% down to 30% or 20%, suddenly a huge swing in the farmer’s income.

So there is no question from a human perspective, if my income is going to, if my crop loss is going to, go up by 10 or 20%, you know, I will adopt it. So that’s the first thing, which is purpose. Now, in addition to agriculture, small businesses. I don’t really need a whole bunch of employees if I can essentially harness AI to do market research, to do analysis, and almost be an employee. And I can be a one-person shop and really build a business. Now, beyond that, there are several other areas of application, which, you know, we’ve done the analysis to kind of see, you know, where are some of the biggest opportunities.

So there’s agriculture, small business. After that comes education and skill building. Another very powerful use of AI. And a fourth area is health care. Now, for each of these areas, there is an element of a major chasm that the humans need to cross. And that chasm doesn’t have to do with technology. It doesn’t have to do with how big the pipe is. It doesn’t have to do with whether I have access to, you know, any of the devices. It doesn’t have to do with the various elements of the digital public infrastructure. In fact, India is one of the shining examples of the distribution system, the rails having been laid. But the key chasm, the big jump that we need to make is across a trust gap, which is in addition to digital infrastructure, in addition to other forms of infrastructure that includes talent and data and compute, there is a trust infrastructure that needs to be built.

Because from a human perspective, I will use a piece of technology if I can trust it. Now, there are many reasons why people are, on the one hand, very excited about AI, as is very evident over here, and at the same time, there is a lingering concern. There is a lingering concern because I don’t quite understand what’s inside that black box. I don’t quite understand how the hiring algorithms work. Why did I get rejected from this job? Why did I get that diagnosis? From a healthcare system. What is the language system telling me? Is something being lost in translation? Can I trust an image that has just been sent to me on social media? So the issues of trust are a very important set of questions.

And then the data that I’m submitting into the system, simply by interacting with AI, I’m submitting data and providing input. I’m actually acting as labor for the AI industry. What’s happening to the data? Who’s using it? Where does it go? Can it be used against me? Is it going to be used in my favor? So the whole question of trust is going to be an enormously important part that we need to consider. So first, purpose. Second is creating a trust infrastructure. And the third is recognizing that no matter what we say, no matter what rhetoric we put on our screens, no matter how many alliterative slogans we have in our meetings, AI is going to be a force for inequality.

There are many reasons why AI is going to create an unequal playing field, not the least of which being the fact that the algorithms are feeding on data. Data is simply a reflection of the past, and as we know, the past is not a terribly equal place. So that algorithm is going to essentially act as a mirror to our past, and maybe part of the risk is that the inequalities of the past get reinforced into the future. There are inequalities in terms of who has access to better tools. Now, even with open source and people being able to, you know, vibe code themselves, there’s an element of democratization, but there could be very different levels of access across a society.

So the usage context itself could be unequal. There could be inequalities when you go into different parts of the world, when you go to different parts of the country. So geographically, there is likely to be inequality. There’s inequality in terms of who’s providing you AI. So today, much of the frontier AI models are coming from two places, United States and China. And much of China’s AI infrastructure is built on top of a foundation from the United States. Much of the foundation of the United States, the leaders of the companies that are producing it, they’re all over here, really small. So it’s a tiny industry that’s providing us the foundation from which we are building the rest of the system.

And then one last really major source of potential inequality has to do with the resources that AI is absorbing, primarily energy, water, space, and even kind of our environmental resources, enormously important. Now, none of this means that we should stop the train. But we need to understand the human impact that AI is going to have, both positive and negative, as we move forward and put the relevant policy systems in place, the relevant trust-building systems in place. Otherwise, we might be not only wasting a demographic dividend that India has got, but a trust dividend that India has got. One critical and really important aspect of an ecosystem like India is that it’s a very trusting society, very trusting in terms of digital.

It’s a very trusting infrastructure. Trust levels in India are in the 70% range, whereas in the United States they are in the 25% to 30% range. That’s a huge platform to build on. And it’s going to be really important for us to follow through with that trust that users, our potential consumers are giving us, and for the policy and the technology sector to be able to make sure that that trust dividend is not wasted. So with that, I’m going to sit down, and I look forward to learning from my colleagues on the panel about how do we make AI more purposeful and not just powerful. Thank you.

Moderator

Thank you, Professor, for the insightful remarks. Very exciting, and at the same time, you know, you raise some concerns around inequality. Let me first go to Arunji. You know, as we said, like the demographic dividend that India holds, we will be adding a million plus to workforce every year. How do we make sure that they’re skilled, they’re ready for what the market is asking, the skills are continuously shifting, as CEO of NSDC with the mandate of skilling the population? How do you look at this? How is, you know, do you see AI as a threat, as an enabler? How are you approaching this?

Speaker 2

So, good morning. AI is an opportunity and an enabler. So let me begin with a few words about NSDC itself. So this is a national platform institution under the Ministry of Skill Development. And we work through two arms, 36 sector skill councils and close to around 400 training partners. So these are the two arms through which we have been working in the skilling space for the last two decades. With AI coming in, of course, it’s an opportunity, as I say. But primarily in four areas we have started work. One is, of course, AI and how career trajectories are getting shaped. So we require some kind of guidance, direction, et cetera. So work on that front. Second is creating skilling programs for AI, AI skilling programs.

Third is how does AI itself affect the entire value chain of skilling when it comes to, say, training, assessments, counselling and the other areas. And the last is, since we do large-scale program management, how do we use AI to evaluate or monitor outcomes? These are the four primary areas we are working on. Obviously I will just pick from each of these areas in brief. The first one, setting the agenda or setting the direction with respect to careers: NSDC and the sector skill councils, specifically the IT sector skill council, we have brought out certain reports on how jobs get shaped by AI, the new jobs and how the existing jobs get changed, etc. Within that is career counselling.

Once you know that this is the way a certain job would get transformed or a new job would come, a lot of career counselling is required for students. So how do we create AI-enabled career counselling tools, models, etc. So that’s one area of big work. Coming to AI skilling programs, clearly there are three tiers. The first is of course where we talk about AI for all skilling, which is more like AI awareness and AI usage. So we have this skilling for SOAR program under which we work with schools etc. The second is where we talk about how does skilling affect practitioners or people in the workplace. And this is where our sector skill councils are busy putting together how do we make the current programs, how do we bring in AI modules in it.

Of course to begin with how AI affects job roles to start with, and then translating that into how the new programs would look like. The third area is AI for engineers, where we skill engineers, and this is where we work with engineering colleges. We have something called the Future Skills Centers, and right now we work with close to around 10,000 students and around 50,000 students.

Close to around 22 companies work with us, including Microsoft, Google, and Amazon, and Schneider, and Siemens, et cetera. And we create these kinds of skilling centers within engineering colleges. The good thing about it is that this is part of the credit-based system. So students can pick up, over the four years they are doing the engineering, every year, every semester they pick up a course, and you string the courses together, then you have a kind of a program for, say, an AI architect or something like that. So we look at the entire skilling program. The third is, as I said, AI is changing the way we skill, you know, the way we train, the way we assess.

Early days, again, pilots on, how do you use AI as a training assistant, you know, to our trainers, you know? So what do we do, how do we work with that? Similarly, assessment is a big area. Please see, many of our training involves vocational training, which means hands-on training. So we use AI for hands-on training. Hands-on training requires a lot of piloting, etc. So towards that we are working: can we, just giving an example, say what’s a good weld or what’s a bad weld? If the AI is trained on that, then it can help the current assessor in actually, you know, providing a better assessment, and also augmenting the number of assessors we are currently having. The last piece is we have our skill platform for large-scale program management; today it is called SID, and we are now bringing elements of AI into it, so that we can monitor outcomes better. A big challenge in a country like India is monitoring outcomes, so we are also looking at how to use AI in that area.

Moderator

Very interesting, and exciting to see what you have brought to the table. Neenaji, if I can move to you. Anandji spoke about the skilling programs, but certification standards are very key, and how do you do that in an environment where skills are, you know, the courses are becoming outdated in months, the requirements are shifting? From your vantage point, with a lot of content being created, a lot of initiatives all around, how do we define a qualified professional in AI? Is there a plan for certification or standard setting? How should we think about it?

Neena Pahuja

Thank you so much. Thank you for inviting me. I’m a former executive member of NCBT. One minute about NCBT. NCBT is a regulatory body under the Ministry of Skill Development. So something on AI, since we’re sitting in an AI conference. Around two and a half years back we came up with a skilling framework for AI. And the framework actually talks about three layers of skilling for AI: it talks about skilling for all, skilling for many, and skilling for few. For skilling for all, we started working as part of the SWOT initiative that was mentioned by Arun also. Now what does that mean? Like all of us know how to use payment gateways or UPI, etc.

Can we actually use AI in a similar way? Our thought was: can we take AI to every nook and corner, to a radiowala, a plumber, or a beautician? So what did we do on that? And I'm going to take a minute before I come to the certification question. We have tried creating a small nano-credential on how a beautician can use AI to give better service to her customers. We've created a virtual try-on for a tailor, so that a tailor can use the virtual try-on concept to tell a person which design or what kind of colour suits them.

We also created basic courses on AI, of course, which were launched sometime in July; around two lakh plus people have registered for them. But the idea was: how can we take it to everyone? Simple things, like how can a plumber find out if there's a fault in a pipe; can AI be used there? One of the points the Professor talked about was how we take it to the masses. How can AI make an impact in everybody's life, the way the internet is doing? That's what we've tried to do through some of the courses we are now in a position to launch.

In fact, some of the courses have already been launched. We have all been saying in this conference that AI is going to replace coders and take away lots of jobs. But we've actually demonstrated how AI can help in coding too: how it can help me learn coding, how it can help me test a particular program. So AI doesn't stop at taking away jobs. I think we have to groom people and, to use the word that has been said here, diffuse the concept of AI as it happens. Now let's look at certification in the courses.

A very good question he asked. Things are changing almost every day; I think the carpenter's role is going to change. In fact, we have from the Furniture and Fittings Skill Council a small model which asks: how can I design a piece of furniture better if I have AI? Knowing the wood, the amount of wood, and the space for which I have to design it, can the carpenter actually use AI to design the furniture better? That's the way it's going to make an impact. Now, how can I embed this in a course I'm teaching a carpenter? That's the impact it's going to make, and that's what we're trying to do. So a wonderful question from that point of view.

So what we've done is come up with the concept of stackable micro-credentials, or stackable nano-credentials. Based on the changes that are happening, you can stack small modules together into the skill that is required. Into these skills you could also fold an employability skill, such as design thinking, and an AI module can be embedded which shows how AI fits into a particular course. For example, our ITIs already have a small seven-and-a-half-hour module on the basic concepts of AI, which is now being taught to every ITI student.

Now what we want to see is how lathe machines and other machines can be operated better with AI. As for certification, the way we are approaching it is through small certificates which a person earns; each can lead to credits, and the total of those credits can take you to a larger credential. That's how it has been planned under the National Credit Framework, which we came up with from NCVT and the Ministry of Education and launched around two years back. I hope that answers your question.
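The stacking idea described above can be sketched as a simple accumulation of notional learning hours into credits. This is a toy illustration under the National Credit Framework's stated convention of 30 notional learning hours per credit; the module names and hour counts below are invented for illustration:

```python
# Toy sketch of stackable micro-credentials rolling up into credits,
# loosely modelled on the National Credit Framework idea discussed above.
# Module names and hours are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class MicroCredential:
    name: str
    hours: float  # notional learning hours for this module

NOTIONAL_HOURS_PER_CREDIT = 30  # NCrF convention: 30 notional hours = 1 credit

def stacked_credits(modules: list) -> float:
    """Total credits earned by stacking a set of micro-credentials."""
    total_hours = sum(m.hours for m in modules)
    return round(total_hours / NOTIONAL_HOURS_PER_CREDIT, 2)

# A hypothetical carpenter's pathway, including the 7.5-hour ITI AI module
# mentioned in the discussion and two invented modules.
carpenter_path = [
    MicroCredential("Basic concepts of AI", 7.5),
    MicroCredential("AI-assisted furniture design", 15),
    MicroCredential("Design thinking (employability)", 7.5),
]
print(stacked_credits(carpenter_path))  # 1.0
```

Because each module carries its own hours, new modules can be swapped in as requirements change without redefining the larger qualification, which is the point of stackability.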

Moderator

To Rakesh: historically, whenever a huge general-purpose technological change comes in, it ends up increasing the divide for some time, until people evolve, learn new skills, and get onto the curve. From India's vantage point, given the starting point we spoke about and the few initiatives we've heard are in motion, what in your view needs to be done differently to make sure we manage this transition carefully and aptly?

Rakesh Kaul

Thank you so much. It is a very pertinent question. I truly believe that in the past, in the internet era, India was at a disadvantage. When the internet came in, India was not as uniquely placed as it is today. We have ubiquitous connectivity, low-cost connectivity, a huge amount of internet penetration, and general-purpose applications used by a billion people, like UPI and others. So today our starting point is very different, not only for our users but also for those who are making these applications.

I think the journey started about ten years back, when India realized that we have to make applications for our own people and not just rely on the world to make applications for us. So that's where we are today. Hence the opportunity for us is immense: we are a billion people with access to low-cost, reliable connectivity, already using these applications for financial transactions and other use cases, and especially for all the programs we just heard that NSDC and others are making today. So I'll talk about three things I thought I'd bring to your notice. The first is that we should move from digital literacy to work literacy.

The point I'm wanting to make here is: how can we remove friction to learning? Learning should be anytime, anywhere, any media, any duration. More and more, we are seeing that our people are used to one- or two-minute consumable content. Giving a one-hour lecture or one hour of content may be difficult now; people really need to consume it in two minutes, and if they like it, they might go on to the two-hour content. So are we creating the right content for our users, or are we just reshaping content we already have, taking a lecture from somewhere, putting a video on a platform, and saying consume it?

So this whole content strategy has to work if the skill is really to be imparted in a consumable manner, where, as I said, friction to usage is least. And if we dovetail the programs sir was just talking about into those 600 labs that the digital AI mission is going to set up across India, then somebody who gets hooked by a small, meaningful piece of content on Instagram can find it interesting and get to somewhere they can really see the benefit of it. We also heard Nandan say that we should lead from first principles. It is not only about giving people black boxes and saying work with them; if you really want India to progress, a lot of people should also understand what goes into the black box, so that they can start innovating around it. So that's the first point: remove friction.

The second is that we are all talking about agents and about physical AI. It is not going to be easy for any worker, including, trust me, for me, if tomorrow I am told that my secretary is an agent and not a physical person, even though the productivity of that agent may be much better. Now imagine the workforce we are getting into. We are talking of lights-out factories; it's a reality in China, where factories are totally autonomous. How do you get your workforce to work with this physical AI, where you are working and beside you a robotic arm is doing half the work? It will take a lot of mindset shift and a lot of role shifts. It is not only telling people what AI is, but how their roles are changing and what is now expected of them. Once that is clear, only then will you be able to train them better. So we have to think about how we move toward this journey, and in our minds be very optimistic about the fact that it is here and it will impact us.

We have to be ready. There is enough and more for a country like ours to become the torchbearer for the world on how to engage a billion people in this endeavor. We will create data which will train these applications, but for that we will have to take on the mantle of creating our own apps, purposefully made for India, made in our languages, made for our specific uses. Thank you.

Moderator

…is going to add to the broader AI ecosystem that we need?

Speaker 3

So when I look at the work we've been doing across the board and across product areas, and speaking to some of the announcements we've made this week, what we're looking at is how we bring the full stack of AI into India, right from the foundational level. It starts with creating secure, resilient infrastructure: how do we bring the computational power that India needs into India, rather than relying on compute in other markets? That's why we started with the build of the AI data center in Vizag. Adding to that, how do we ensure connectivity with the rest of the world? That's where the subsea cable investments we're going to make come in, connecting Vizag directly to the U.S., routing through the southern hemisphere. And then, as you go up the stack, how do we build applications and solutions that actually deliver value to the last-mile citizen on the ground, whether in agriculture or health? Every time we look at it, we are looking at how we complete the circle. In the education space, in the work we've been doing with Charn Singh University, the question is how you have AI not just at the skilling level but also at the learning level and the administration level, so we can create a more effective and efficient way of actually delivering AI to the students.

And we're addressing every part of that chain of learning. How do you then connect it to the workforce, through professional certification? It's important that these loops start to close, because that's when you actually see impact. That's key. Similarly in agriculture and healthcare: in agriculture, you go from seed to market. How do you give a farmer information to understand the weather pattern, so he is better able to know when to sow and when to harvest, along with information on the market and on financial support? The whole aspect we are thinking through is how we help connect the dots: how do we come in and provide our support and our technology to ISVs to create these solutions that connect the dots?

Because that last mile connectivity is actually going to determine the success of this technology. Thank you.

Moderator

I think we're almost at time, but I want to take just a minute for a last question to all the panelists together. If you had to identify one decisive action India should take looking at 2030, something we can be proud of, what should it be to make AI an equalizer? Let me start with you, Professor. Five-second responses, rapid fire.

Speaker 1

Five-second response: I think the one action we need to take is to improve the trust infrastructure and make sure that the human on the other side of the AI understands what's inside the black box, at least to the extent that they feel comfortable accepting the output and the decision the black box is offering.

Speaker 2

I think in the next three years every Indian should have access to an AI assistant, whether a farmer, a student, or anyone else. We have the platforms, we have the DPIs in place, we have the language capabilities in place, and we have SID, the skills platform, in place. It's time we put it all together so that every Indian has an assistant working with them.

Neena Pahuja

What I would love is for ethics and values to be part of every AI course taught, at least in India; then we will possibly create a different kind of AI creator in this field. That would be my opinion. Thank you.

Rakesh Kaul

I believe it is access to affordable compute that will be important for India's success.

Speaker 3

I'm going back to my first point on the flywheel. A lot of the investment is coming into the compute side; how do we bring in investment and create economic models for the diffusion of AI? That's going to be important.

Moderator

Trust, ethics, compute, human at the core of everything. Thank you so much for the exciting discussion. Thank you. Can I have a round of applause for the panelists and our moderator please.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
11 arguments · 165 words per minute · 1227 words · 444 seconds
Argument 1
Agriculture productivity can be significantly improved through AI pest identification and local language solutions
EXPLANATION
AI can help address the massive productivity gap in agriculture by enabling farmers to identify pests and receive remedies in their local language using local ingredients. This technology could significantly reduce crop losses and increase farmer incomes.
EVIDENCE
Smallholder farmers in the global south lose 40-50% of their crop due to pests. If AI can help reduce this to 20-30%, it creates a huge swing in farmer income.
MAJOR DISCUSSION POINT
AI applications in agriculture for pest management
Argument 2
Small businesses can leverage AI for market research and analysis to operate as one-person shops
EXPLANATION
AI can serve as a virtual employee for small business owners, handling tasks like market research and analysis. This enables entrepreneurs to build businesses without needing multiple employees.
EVIDENCE
AI can essentially act as an employee for market research and analysis, allowing one-person operations to build substantial businesses.
MAJOR DISCUSSION POINT
AI enabling small business operations
AGREED WITH
Speaker 3
Argument 3
Education and healthcare represent major AI application areas with significant potential
EXPLANATION
Beyond agriculture and small business, education and skill building along with healthcare are identified as powerful use cases for AI implementation. These sectors offer substantial opportunities for AI-driven improvements.
EVIDENCE
Analysis shows agriculture, small business, education/skill building, and healthcare as the biggest AI opportunities.
MAJOR DISCUSSION POINT
Key sectors for AI application
Argument 4
AI can help farmers reduce crop losses from 40-50% to 20-30% using local remedies
EXPLANATION
By providing pest identification and locally-appropriate solutions in farmers’ native languages, AI can significantly reduce the substantial crop losses that smallholder farmers currently experience. Even a 10-20% improvement in crop loss reduction would dramatically increase farmer incomes.
EVIDENCE
Smallholder farmers in the global south currently lose 40-50% of their crops to pests. Moving this down to 30% or 20% creates huge income improvements.
MAJOR DISCUSSION POINT
Quantified impact of AI in agriculture
Argument 5
Trust gap is the major barrier to AI adoption, not technology or infrastructure limitations
EXPLANATION
While India has strong digital infrastructure, the key challenge for AI adoption is building trust infrastructure. People need to trust AI systems before they will use them, which requires addressing concerns about transparency and understanding of AI decision-making.
EVIDENCE
India has excellent digital public infrastructure and distribution systems, but users have concerns about AI black boxes, hiring algorithms, healthcare diagnoses, and data usage.
MAJOR DISCUSSION POINT
Trust as the primary barrier to AI adoption
AGREED WITH
Neena Pahuja
DISAGREED WITH
Rakesh Kaul
Argument 6
Users need to understand AI black boxes to feel comfortable with AI decisions and outputs
EXPLANATION
For widespread AI adoption, people must have sufficient understanding of how AI systems work to trust their outputs and decisions. This transparency is essential for building the trust infrastructure needed for AI success.
EVIDENCE
Users have concerns about hiring algorithms, healthcare diagnoses, language translation accuracy, and image authenticity on social media.
MAJOR DISCUSSION POINT
Need for AI transparency and explainability
DISAGREED WITH
Speaker 2
Argument 7
AI will inevitably create inequality due to algorithms reflecting biased historical data
EXPLANATION
AI systems trained on historical data will perpetuate and potentially amplify existing inequalities since past data reflects unequal societies. This creates a risk that AI will reinforce historical biases and inequalities into the future.
EVIDENCE
Data is a reflection of the past, and the past is not a terribly equal place. Algorithms act as mirrors to our past, potentially reinforcing inequalities.
MAJOR DISCUSSION POINT
AI as a source of inequality
Argument 8
Unequal access to AI tools and geographic disparities will create divides
EXPLANATION
Despite democratization through open source tools, there will be different levels of access to AI across society and geography. This unequal access will create new forms of digital divides.
EVIDENCE
Even with open source availability, there are different levels of access across society and geographic regions.
MAJOR DISCUSSION POINT
Geographic and social AI access disparities
Argument 9
Concentration of AI development in US and China creates dependency issues
EXPLANATION
Most frontier AI models come from the US and China, with China’s AI infrastructure built on US foundations. This concentration in a small number of companies and countries creates dependency risks for other nations.
EVIDENCE
Much of frontier AI comes from US and China, with China’s AI infrastructure built on US foundations. The industry leaders are concentrated in a very small group.
MAJOR DISCUSSION POINT
Geopolitical concentration of AI development
Argument 10
AI consumes significant resources including energy, water, and environmental resources
EXPLANATION
AI systems require substantial resources for operation, including energy, water, space, and other environmental resources. This resource consumption represents a major source of potential inequality in AI access and impact.
EVIDENCE
AI absorbs enormous amounts of energy, water, space, and environmental resources.
MAJOR DISCUSSION POINT
Environmental impact of AI systems
Argument 11
India has a trust advantage with 70% digital trust levels compared to 25-30% in the US
EXPLANATION
India possesses a significant trust dividend with much higher levels of digital trust among its population compared to other major markets. This creates a strong foundation for AI adoption that should not be wasted.
EVIDENCE
Trust levels in India are around 70% while in the US they are 25-30%, representing a huge platform to build on.
MAJOR DISCUSSION POINT
India’s trust advantage for AI adoption
AGREED WITH
Rakesh Kaul
Speaker 2
2 arguments · 185 words per minute · 923 words · 299 seconds
Argument 1
NSDC is working on four areas: career guidance, AI skilling programs, AI-enabled training and assessment, and program monitoring
EXPLANATION
The National Skill Development Corporation is approaching AI through four key areas: understanding how AI shapes career trajectories, creating AI skilling programs, using AI to improve the training value chain, and leveraging AI for large-scale program monitoring and evaluation.
EVIDENCE
NSDC works through 36 sector skill councils and 400 training partners, focusing on career counseling, AI skilling programs, AI-enabled training/assessment, and AI-powered outcome monitoring.
MAJOR DISCUSSION POINT
Comprehensive approach to AI in skills development
Argument 2
Every Indian should have access to an AI assistant within three years
EXPLANATION
The goal is to provide every Indian, whether farmer, student, or other worker, with access to an AI assistant. The infrastructure including digital public infrastructure, language capabilities, and skills platforms are already in place to achieve this vision.
EVIDENCE
India has the platforms, DPIs, language capabilities, and skills platforms (SIDs) in place to support universal AI assistant access.
MAJOR DISCUSSION POINT
Universal AI assistant access goal
AGREED WITH
Neena Pahuja
DISAGREED WITH
Speaker 1
Neena Pahuja
4 arguments · 173 words per minute · 930 words · 322 seconds
Argument 1
Three-tier AI skilling framework: AI for all, AI for practitioners, and AI for engineers
EXPLANATION
NCVT developed a comprehensive skilling framework with three layers – basic AI literacy for everyone, specialized training for workplace practitioners, and advanced engineering-level AI education. This ensures AI skills reach all levels of society.
EVIDENCE
Framework includes AI awareness for all (like UPI usage), workplace AI modules for practitioners, and engineering college programs with 22 companies including Microsoft, Google, Amazon.
MAJOR DISCUSSION POINT
Tiered approach to AI education
Argument 2
Stackable micro-credentials and nano-credentials can adapt to rapidly changing skill requirements
EXPLANATION
To address the challenge of rapidly changing AI skills requirements, NCVT developed stackable micro and nano-credentials that can be combined as needed. This flexible approach allows workers to continuously update their skills as technology evolves.
EVIDENCE
Small modules can be stacked together, embedded in existing courses like ITI programs with 7.5-hour AI modules, and connected to the National Credit Framework.
MAJOR DISCUSSION POINT
Flexible credentialing for evolving skills
AGREED WITH
Rakesh Kaul
Argument 3
AI should be made accessible to all workers including radiowala, plumber, beautician through simple applications
EXPLANATION
AI should be democratized to reach every type of worker, from radio operators to plumbers to beauticians, through simple, practical applications. The goal is to make AI as ubiquitous and easy to use as payment gateways or UPI.
EVIDENCE
Created virtual try-on for tailors, AI tools for beauticians, basic fault detection for plumbers, and furniture design assistance for carpenters using local materials and space constraints.
MAJOR DISCUSSION POINT
AI democratization for all workers
AGREED WITH
Speaker 2
Argument 4
Ethics and values should be integrated into every AI course taught in India
EXPLANATION
To create responsible AI practitioners and developers, ethics and values should be embedded in all AI education in India. This approach would help India develop a different kind of AI creator who prioritizes ethical considerations.
MAJOR DISCUSSION POINT
Ethics integration in AI education
AGREED WITH
Speaker 1
Rakesh Kaul
5 arguments · 182 words per minute · 878 words · 288 seconds
Argument 1
Need to move from digital literacy to work literacy with friction-free, consumable content
EXPLANATION
The focus should shift from basic digital literacy to work-relevant AI literacy that removes barriers to learning. Content should be delivered in easily consumable formats that match how people prefer to learn – in short, engaging segments rather than long lectures.
EVIDENCE
People are used to 1-2 minute consumable content rather than hour-long lectures. Content should be anytime, anywhere, any media, any duration.
MAJOR DISCUSSION POINT
Evolving approach to digital education
AGREED WITH
Neena Pahuja
Argument 2
Workforce must adapt to working alongside physical AI and autonomous systems
EXPLANATION
Workers need to prepare for environments where they collaborate with physical AI systems, robotic arms, and autonomous factories. This requires significant mindset shifts and role adaptations beyond just understanding what AI is.
EVIDENCE
Examples include lights-out factories in China that are totally autonomous, and workplaces where robotic arms work alongside human workers.
MAJOR DISCUSSION POINT
Human-AI collaboration in physical work environments
Argument 3
India’s existing digital infrastructure provides a strong foundation for AI adoption
EXPLANATION
Unlike previous technological transitions, India is uniquely positioned for AI adoption with ubiquitous connectivity, low-cost access, high internet penetration, and widespread use of applications like UPI. This gives India a significant advantage over its historical position during earlier tech waves.
EVIDENCE
India has ubiquitous connectivity, low cost connectivity, huge internet penetration, and billion people using applications like UPI for financial transactions.
MAJOR DISCUSSION POINT
India’s digital infrastructure advantage
AGREED WITH
Speaker 1
Argument 4
India must create applications specifically designed for Indian users in local languages
EXPLANATION
For AI to truly benefit India’s population, applications must be purposefully designed for Indian users, delivered in local languages, and tailored to specific Indian use cases rather than relying on global applications adapted for India.
EVIDENCE
Journey started 10 years ago when India realized the need to make applications for its own people rather than relying on the world to make applications for India.
MAJOR DISCUSSION POINT
Localized AI application development
Argument 5
Access to affordable compute is essential for India’s AI success
EXPLANATION
For India to succeed in AI, ensuring access to affordable computational resources is critical. This foundational requirement will determine whether India can effectively participate in and benefit from the AI revolution.
MAJOR DISCUSSION POINT
Importance of affordable compute access
DISAGREED WITH
Speaker 1
Speaker 3
4 arguments · 164 words per minute · 480 words · 175 seconds
Argument 1
Full-stack AI infrastructure needed from foundational compute to last-mile applications
EXPLANATION
A comprehensive approach to AI infrastructure is required, spanning from basic computational infrastructure to end-user applications. This includes secure data centers, connectivity, and applications that deliver value to citizens in areas like agriculture and health.
EVIDENCE
Building AI data center in Vizag, subsea cable investments connecting Vizag to US, and applications in education with Charn Singh University covering learning, administration, and workforce connection.
MAJOR DISCUSSION POINT
Comprehensive AI infrastructure development
Argument 2
Secure, resilient AI infrastructure should be built within India rather than relying on external compute
EXPLANATION
India needs to develop its own secure and resilient AI infrastructure domestically rather than depending on computational power located in other markets. This ensures greater control and security over AI capabilities.
EVIDENCE
Building AI data center in Vizag to provide computational power within India rather than relying on compute in other markets.
MAJOR DISCUSSION POINT
Domestic AI infrastructure development
Argument 3
Connecting solutions across the entire value chain is crucial for real impact
EXPLANATION
AI solutions must address complete workflows rather than isolated points to achieve meaningful impact. This means connecting all parts of a process, from initial input to final outcome, to create effective end-to-end solutions.
EVIDENCE
In agriculture, connecting seed to market with weather information, sowing guidance, harvest timing, market information, and financial support. In education, addressing learning, administration, and workforce connection.
MAJOR DISCUSSION POINT
End-to-end AI solution integration
AGREED WITH
Speaker 1
Argument 4
Economic models for AI diffusion need investment alongside compute infrastructure
EXPLANATION
While significant investments are flowing into computational infrastructure, there’s a need for economic models and investments that focus on the diffusion and widespread adoption of AI technologies. This diffusion aspect is equally important for success.
MAJOR DISCUSSION POINT
Investment in AI diffusion models
Moderator
4 arguments · 154 words per minute · 399 words · 154 seconds
Argument 1
India will add a million plus workers to the workforce annually, requiring continuous skill adaptation
EXPLANATION
The moderator highlights India’s demographic dividend, noting that the country will be adding over a million people to its workforce each year. This presents both an opportunity and a challenge in ensuring these workers are properly skilled for evolving market demands.
EVIDENCE
India will be adding a million plus to workforce every year, with skills continuously shifting based on market demands
MAJOR DISCUSSION POINT
India’s demographic dividend and workforce growth
Argument 2
Skills and certification standards face challenges due to rapidly changing requirements in AI
EXPLANATION
The moderator raises concerns about how to define qualified professionals in AI when courses become outdated within months and requirements shift rapidly. This highlights the challenge of maintaining relevant certification standards in a fast-evolving field.
EVIDENCE
Courses are becoming outdated in months and requirements are shifting rapidly in the AI field
MAJOR DISCUSSION POINT
Challenges in AI certification and standards
Argument 3
Technological transitions historically increase divides before people adapt and learn new skills
EXPLANATION
The moderator acknowledges that general purpose technological changes typically create temporary inequality as some groups adapt faster than others. The key is managing this transition period effectively to minimize negative impacts.
EVIDENCE
Historical pattern shows that technological changes end up increasing divide for some time until people evolve and learn new skills
MAJOR DISCUSSION POINT
Managing technological transition periods
Argument 4
India needs decisive action by 2030 to make AI an equalizer rather than a divider
EXPLANATION
The moderator emphasizes the urgency of taking concrete steps to ensure AI serves as a force for equality rather than increasing disparities. This requires identifying and implementing key actions within the next six years.
MAJOR DISCUSSION POINT
AI as equalizer vs divider by 2030
Agreements
Agreement Points
AI should be accessible to all types of workers and citizens
Speakers: Speaker 2, Neena Pahuja
Every Indian should have access to an AI assistant within three years
AI should be made accessible to all workers including radiowala, plumber, beautician through simple applications
Both speakers advocate for universal AI access across all professions and social levels, with Speaker 2 setting a three-year timeline and Neena Pahuja emphasizing practical applications for traditional workers
India has strong digital infrastructure foundation for AI adoption
Speakers: Speaker 1, Rakesh Kaul
India has a trust advantage with 70% digital trust levels compared to 25-30% in the US
India's existing digital infrastructure provides a strong foundation for AI adoption
Both speakers recognize India’s advantageous starting position for AI adoption, with Speaker 1 highlighting the trust dividend and Rakesh emphasizing the comprehensive digital infrastructure already in place
Need for comprehensive, end-to-end AI solutions
Speakers: Speaker 1, Speaker 3
Small businesses can leverage AI for market research and analysis to operate as one-person shops
Connecting solutions across the entire value chain is crucial for real impact
Both speakers emphasize the importance of complete, integrated AI solutions rather than isolated applications, with Speaker 1 focusing on business applications and Speaker 3 on infrastructure connectivity
Trust and transparency are fundamental to AI adoption
Speakers: Speaker 1, Neena Pahuja
Trust gap is the major barrier to AI adoption, not technology or infrastructure limitations
Ethics and values should be integrated into every AI course taught in India
Both speakers prioritize trust-building in AI systems, with Speaker 1 identifying trust as the primary adoption barrier and Neena advocating for ethics integration in education
AI education must be practical and adaptable to rapid changes
Speakers: Neena Pahuja, Rakesh Kaul
Stackable micro-credentials and nano-credentials can adapt to rapidly changing skill requirements
Need to move from digital literacy to work literacy with friction-free, consumable content
Both speakers advocate for flexible, practical AI education approaches that can adapt to rapidly changing requirements and user preferences for consumable content
Similar Viewpoints
All three speakers emphasize the importance of comprehensive, multi-layered approaches to AI implementation that address different skill levels and use cases
Speakers: Speaker 1, Speaker 2, Neena Pahuja
Agriculture productivity can be significantly improved through AI pest identification and local language solutions
NSDC is working on four areas: AI career guidance, AI skilling programs, AI-enabled training, and AI-powered program monitoring
Three-tier AI skilling framework: AI for all, AI for practitioners, and AI for engineers
Both speakers advocate for India-specific AI solutions that are designed for local needs and contexts rather than adapting global solutions
Speakers: Rakesh Kaul, Speaker 3
India must create applications specifically designed for Indian users in local languages
Full-stack AI infrastructure needed from foundational compute to last-mile applications
All three speakers acknowledge that AI implementation will create challenges and disruptions that require proactive management and investment
Speakers: Speaker 1, Rakesh Kaul, Speaker 3
AI will inevitably create inequality due to algorithms reflecting biased historical data
Workforce must adapt to working alongside physical AI and autonomous systems
Economic models for AI diffusion need investment alongside compute infrastructure
Unexpected Consensus
AI as inevitable source of inequality despite its benefits
Speakers: Speaker 1, Moderator
AI will inevitably create inequality due to algorithms reflecting biased historical data
Technological transitions historically increase divides before people adapt and learn new skills
Despite the overall optimistic tone about AI’s potential, there was unexpected consensus that AI will create inequality and divides, which is significant because it shows realistic assessment of challenges alongside opportunities
Importance of domestic AI infrastructure development
Speakers: Rakesh Kaul, Speaker 3
Access to affordable compute is essential for India’s AI success
Secure, resilient AI infrastructure should be built within India rather than relying on external compute
Both speakers unexpectedly emphasized the strategic importance of domestic AI infrastructure, showing consensus on technological sovereignty concerns that weren’t initially apparent
Overall Assessment

The speakers showed strong consensus on making AI accessible to all Indians, leveraging India’s digital infrastructure advantages, the need for trust-building, and practical education approaches. There was also agreement on the importance of comprehensive solutions and India-specific AI development.

High level of consensus with complementary perspectives rather than conflicting views. The agreement spans technical, social, and policy dimensions, suggesting a mature understanding of AI implementation challenges and opportunities. This consensus provides a strong foundation for coordinated AI development efforts in India.

Differences
Different Viewpoints
Primary focus for AI success in India
Speakers: Speaker 1, Rakesh Kaul
Trust gap is the major barrier to AI adoption, not technology or infrastructure limitations
Access to affordable compute is essential for India’s AI success
Speaker 1 emphasizes trust infrastructure as the key barrier, while Rakesh Kaul identifies affordable compute access as the critical requirement for AI success
Approach to AI transparency and user understanding
Speakers: Speaker 1, Speaker 2
Users need to understand AI black boxes to feel comfortable with AI decisions and outputs
Every Indian should have access to an AI assistant within three years
Speaker 1 emphasizes the need for users to understand how AI works before adoption, while Speaker 2 focuses on rapid deployment of AI assistants without emphasizing user understanding of the technology
Unexpected Differences
Resource allocation priorities for AI development
Speakers: Speaker 1, Speaker 3
AI consumes significant resources including energy, water, and environmental resources
Economic models for AI diffusion need investment alongside compute infrastructure
While Speaker 1 raises concerns about AI’s resource consumption as a source of inequality, Speaker 3 advocates for increased investment in AI infrastructure and diffusion models, creating an unexpected tension between resource conservation and expansion
Overall Assessment

The main areas of disagreement center on implementation priorities (trust vs. compute access), the pace and approach to AI deployment (understanding vs. rapid access), and resource allocation strategies

Moderate disagreement level with significant implications – while all speakers support AI adoption in India, their different priorities could lead to conflicting policy recommendations and resource allocation decisions that may impact the effectiveness and equity of AI implementation

Partial Agreements
Both speakers recognize the importance of addressing ethical concerns in AI, but Speaker 1 focuses on the inevitability of inequality while Neena Pahuja proposes proactive ethics integration in education as a solution
Speakers: Speaker 1, Neena Pahuja
AI will inevitably create inequality due to algorithms reflecting biased historical data
Ethics and values should be integrated into every AI course taught in India
All speakers agree on democratizing AI access for all Indians, but differ on implementation – Speaker 2 focuses on AI assistants, Neena Pahuja on practical work applications, and Rakesh Kaul on localized application development
Speakers: Speaker 2, Neena Pahuja, Rakesh Kaul
Every Indian should have access to an AI assistant within three years
AI should be made accessible to all workers including radiowala, plumber, beautician through simple applications
India must create applications specifically designed for Indian users in local languages
Both speakers acknowledge India’s infrastructure advantages, but Rakesh Kaul emphasizes existing digital infrastructure while Speaker 3 focuses on building comprehensive new AI-specific infrastructure
Speakers: Rakesh Kaul, Speaker 3
India’s existing digital infrastructure provides a strong foundation for AI adoption
Full-stack AI infrastructure needed from foundational compute to last-mile applications
Takeaways
Key takeaways
AI has transformative potential in four key sectors: agriculture (highest impact through pest identification reducing crop losses), small businesses (enabling one-person operations), education/skill building, and healthcare
Trust infrastructure is the critical barrier to AI adoption in India, not technology or digital infrastructure – users need to understand AI ‘black boxes’ to feel comfortable with AI decisions
India has a significant advantage with 70% digital trust levels compared to 25-30% in the US, creating a ‘trust dividend’ that should not be wasted
AI will inevitably create inequality due to biased historical data, unequal access to tools, geographic disparities, and resource concentration in US/China
A three-tier AI skilling framework is needed: AI for all (basic awareness), AI for practitioners (workplace integration), and AI for engineers (technical expertise)
Stackable micro-credentials and nano-credentials can address rapidly changing skill requirements in the AI era
India needs full-stack AI infrastructure built domestically, from foundational compute to last-mile applications, rather than relying on external resources
Ethics and values should be integrated into every AI course taught in India to create responsible AI creators
Resolutions and action items
NSDC to continue work on four areas: AI career guidance, AI skilling programs, AI-enabled training systems, and AI-powered program monitoring
Launch and scale AI awareness courses (already 200,000+ registered for basic AI courses launched in July)
Embed 7.5-hour AI modules in all ITI programs across India
Establish 600 AI labs across India through the digital AI mission
Build AI data center in Vizag with direct subsea cable connectivity to the US
Create AI assistants accessible to every Indian within three years
Develop AI applications in local languages for Indian-specific use cases
Integrate AI across the entire education value chain from learning to administration to workforce connection
Unresolved issues
How to effectively build trust infrastructure and make AI ‘black boxes’ transparent to users
How to prevent AI from reinforcing historical inequalities and biases
How to ensure equitable access to AI tools across different geographic regions and socioeconomic groups
How to manage workforce transition to working alongside physical AI and autonomous systems
How to create sustainable economic models for AI diffusion beyond just compute infrastructure investment
How to balance AI development speed with ethical considerations and trust-building
How to reduce dependency on US and China for foundational AI models and create indigenous alternatives
Suggested compromises
Focus on purpose-driven AI applications rather than just powerful AI systems
Balance AI automation with human understanding and control
Create stackable, modular learning systems that can adapt to changing requirements rather than fixed long-term programs
Develop AI applications that augment human capabilities rather than replace workers entirely
Build domestic AI infrastructure while maintaining global connectivity and collaboration
Integrate ethics and values education alongside technical AI training
Thought Provoking Comments
The key chasm, the big jump that we need to make is across a trust gap… there is a trust infrastructure that needs to be built. Because from a human perspective, I will use a piece of technology if I can trust it.
This comment reframes the AI adoption challenge from a purely technical or access issue to a fundamental human psychology issue. It introduces the novel concept of ‘trust infrastructure’ as a prerequisite for AI success, which goes beyond traditional discussions of digital infrastructure.
This insight became a central theme that influenced the entire discussion. Every subsequent speaker addressed trust in some form – from certification standards to ethics in AI education. It shifted the conversation from ‘how to deploy AI’ to ‘how to make AI trustworthy and acceptable to humans.’
Speaker: Speaker 1
AI is going to be a force for inequality… algorithms are feeding on data. Data is simply a reflection of the past, and as we know, the past is not a terribly equal place. So that algorithm is going to essentially act as a mirror to our past.
This comment provides a profound philosophical insight about AI’s inherent bias problem, using the powerful metaphor of algorithms as ‘mirrors to our past.’ It challenges the common narrative of AI as inherently democratizing and forces acknowledgment of systemic inequality perpetuation.
This sobering perspective created a counterbalance to the optimistic tone and forced other speakers to address inequality mitigation in their responses. It elevated the discussion from technical implementation to social responsibility and shaped the moderator’s final question about making AI an ‘equalizer.’
Speaker: Speaker 1
Trust levels in India is in the 70% range, whereas in the United States is in the 25% to 30% range. That’s a huge platform to build on… that trust dividend is not wasted.
This introduces the concept of ‘trust dividend’ as a unique competitive advantage for India, providing concrete data that reframes India’s position in the global AI landscape from a follower to a potential leader due to cultural factors.
This insight shifted the discussion toward India-specific advantages and influenced subsequent speakers to focus on how India can leverage its unique position. It provided a foundation for optimism about India’s AI future and influenced the conversation toward India-centric solutions.
Speaker: Speaker 1
We came up with the concept of stackable micro-credentials or stackable nano-credentials… based on the changes which are happening, you could actually stack the small, small modules together and make a skill that is required.
This introduces a practical solution to the rapid obsolescence problem in AI skills training. The concept of stackable credentials addresses the core challenge of how to maintain relevant skills in a fast-changing field.
This comment provided a concrete answer to the moderator’s question about certification in rapidly changing fields, demonstrating how policy frameworks can adapt to technological change. It influenced the discussion toward practical implementation strategies.
Speaker: Neena Pahuja
We should move from digital literacy to work literacy… remove friction to learning… anytime, anywhere, any media, any duration… people are used to one minute, two minute consumable content.
This insight recognizes a fundamental shift in how people consume information and learn, challenging traditional educational approaches. It connects AI education to modern attention spans and consumption patterns.
This comment shifted the discussion toward user experience and accessibility, influencing how other speakers thought about content delivery and making AI education more practical and achievable for mass adoption.
Speaker: Rakesh Kaul
Every Indian should have access to an AI assistant whether it’s a farmer, a student or anything. We have the platforms, we have the DPIs in place… It’s time we put it all together.
This comment synthesizes the discussion into a concrete, ambitious vision that leverages India’s existing digital infrastructure. It transforms abstract concepts into a tangible goal that builds on India’s proven digital public infrastructure success.
This vision statement provided a unifying goal that other panelists could rally around in their final responses, creating convergence in the discussion toward a shared aspiration for India’s AI future.
Speaker: Speaker 2
Overall Assessment

These key comments fundamentally shaped the discussion by introducing three critical frameworks: trust as infrastructure, inequality as an inherent AI challenge, and India’s unique advantages. Speaker 1’s opening insights about trust infrastructure and inequality set a sophisticated analytical tone that elevated the entire conversation beyond technical implementation to human-centered considerations. The trust dividend concept reframed India’s position from disadvantaged to advantaged, creating an optimistic foundation that influenced all subsequent responses. The practical solutions offered by other speakers (stackable credentials, friction-free learning, AI assistants for all) directly responded to the challenges identified in the opening, creating a coherent narrative arc from problem identification to solution design. The discussion evolved from acknowledging AI’s potential negative impacts to developing India-specific strategies that leverage cultural and infrastructural advantages, ultimately converging on a shared vision of inclusive AI adoption.

Follow-up Questions
How do we make AI more purposeful and not just powerful?
This was posed as a central question for the panel discussion, focusing on ensuring AI serves meaningful human purposes rather than just advancing technological capabilities
Speaker: Speaker 1
How do we build trust infrastructure for AI adoption?
Speaker 1 identified trust as a major chasm that needs to be crossed, requiring research into how people can understand and trust AI systems, especially regarding black box algorithms
Speaker: Speaker 1
How can AI be used to monitor outcomes better in large scale program management?
Speaker 2 mentioned this as one of four areas NSDC is working on, noting that monitoring outcomes is a big challenge in a country like India
Speaker: Speaker 2
How do we define a qualified professional in AI when skills and requirements are shifting rapidly?
The moderator raised this as a key challenge for certification and standard setting in an environment where courses become outdated in months
Speaker: Moderator
How do we remove friction to learning and make it anytime, anywhere, any media, any duration?
Rakesh emphasized the need to move from digital literacy to work literacy and create consumable content that matches how people prefer to learn (short, bite-sized content)
Speaker: Rakesh Kaul
How do we prepare workforce to work alongside physical AI and autonomous systems?
Rakesh highlighted the challenge of mindset and role shifts needed when workers collaborate with robotic systems and AI agents in environments like lights-out factories
Speaker: Rakesh Kaul
How do we create economic models for the diffusion of AI?
Speaker 3 noted that while investments are coming into compute infrastructure, there’s a need to focus on creating sustainable economic models for AI adoption and diffusion
Speaker: Speaker 3
How do we ensure every Indian has access to an AI assistant by 2030?
Speaker 2 proposed this as a goal, noting that the platforms and infrastructure are in place but need to be integrated to provide universal access to AI assistance
Speaker: Speaker 2
How do we integrate ethics and values into every AI course taught in India?
Neena suggested this would create a different kind of AI creators and developers, emphasizing the importance of ethical AI development
Speaker: Neena Pahuja

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Responsible AI for Children Safe Playful and Empowering Learning


Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on AI literacy for children and how to prepare young learners to understand and navigate an AI-powered world. The session, moderated by UNICEF India’s Chief of Education Saadhna Panday, featured representatives from LEGO Education discussing their approach to teaching AI concepts to children through hands-on, collaborative learning experiences.


Tom Hall from LEGO Education emphasized that children should not view AI as a “magic box” but should understand the fundamental concepts underlying the technology. He argued for giving children the tools to deconstruct and comprehend AI systems rather than simply teaching them to use AI tools. The company has developed a computer science and AI curriculum that teaches concepts like probability, algorithmic bias, and machine learning through physical building and coding activities that run locally on devices to ensure privacy and safety.


Atish Joshua Gonsalves demonstrated how students can learn about AI classifiers by training models to recognize their poses and movements, which then control LEGO robots. This approach emphasizes collaborative learning in groups of four, where students take turns building, coding, and training AI models. The curriculum is designed to be accessible even in resource-constrained environments, starting with screen-free activities using physical bricks to teach computational thinking concepts.


Richa Menke from LEGO’s Creative Play Lab discussed the tension between AI’s potential benefits and risks for children. She highlighted concerns about efficiency versus imagination, personalization versus identity development, and assistance versus agency. LEGO’s current smart brick products deliberately avoid using AI, maintaining high safety and privacy standards while the technology matures.


The panelists stressed that AI literacy should be treated as a fundamental skill alongside reading and mathematics, with education systems taking immediate responsibility to prepare children not just as AI consumers but as future creators and leaders of the technology.


Keypoints

Major Discussion Points:

AI Literacy as Fundamental Education: The need to teach children foundational AI concepts (probability, algorithmic bias, data processing) as core literacy skills alongside reading and math, rather than treating AI as a “magic box” or elective subject


Safety and Privacy in AI for Children: Establishing non-negotiable principles for AI products designed for children, including local processing (no data leaving devices), transparency in data provenance, and avoiding anthropomorphization of AI systems


Hands-on, Collaborative Learning Approach: Emphasizing that children learn best through physical manipulation, building together, and social interaction rather than isolated screen-based experiences, with AI education integrated into tactile, group-based activities


Balancing AI Assistance with Child Agency: Addressing the tension between AI’s efficiency and the need to preserve children’s imagination, struggle, and creative development – ensuring AI empowers rather than replaces critical thinking and problem-solving skills


Equity and Accessibility in AI Education: Discussing how to make AI literacy relevant and accessible across diverse contexts, from urban schools with resources to rural, multilingual, multi-level classrooms with limited technology access


Overall Purpose:

The discussion aimed to explore how to responsibly introduce AI literacy to children through educational products and curricula, with LEGO Education presenting their approach to teaching foundational AI concepts through hands-on, collaborative learning experiences while maintaining strict safety and privacy standards.


Overall Tone:

The tone was consistently thoughtful and cautious throughout, with speakers emphasizing responsibility and child welfare over technological advancement. There was an underlying sense of urgency about preparing children for an AI-powered future, balanced with deliberate restraint about rushing to implement AI tools without proper safeguards. The conversation maintained an optimistic but measured approach, celebrating children’s capabilities while acknowledging the serious considerations required when designing AI experiences for young learners.


Speakers

Speakers from the provided list:


Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations


Tom Hall: Works at LEGO Group, involved in AI literacy education and curriculum development


Atish Joshua Gonsalves: Product development at LEGO Education, previously worked with UN Refugee Agency, focuses on computer science and AI education products


Richa Menke: Heads up interactive play at the LEGO Group, leads the Creative Play Lab innovation team, focuses on creating interactive play experiences


Saadhna Panday: Chief of Education at UNICEF India, moderator of the panel discussion on AI literacy and children


Nikhil Bawa: Audience member, writes about AI and education


Asha Nanavati: Works with Alliance Educational Foundation, which runs a charitable K-12 school in Kerala


Speaker 4: Role/title not specified – appears to be an audience member asking a question


Additional speakers:


None identified beyond the provided speakers list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive discussion on AI literacy for children, moderated by UNICEF India’s Chief of Education Saadhna Panday, brought together representatives from LEGO Education to explore how to responsibly prepare young learners for an AI-powered world. The session began with an impactful video featuring children’s voices about AI and followed a structured format of presentations and demonstrations before concluding with a panel discussion that revealed a thoughtful, cautious approach prioritising child development over technological advancement.


Opening: Children’s Voices on AI

The session opened with a powerful video featuring children discussing AI and their desire to be included in AI policy conversations. As Tom Hall from LEGO Education noted, referring to one particular child in the video, “He breaks me every time” when discussing the impact of hearing directly from young people about their perspectives on AI. The children in the video articulated principles for AI education that should be “safe, fair, and transparent,” immediately establishing that this discussion would centre children’s agency rather than treating them as passive recipients of technology.


One child’s statement was particularly striking: “We need to have a say in AI policies because AI literacy is really important. Thanks for finally asking us what we think.” This opening set the tone for the entire discussion, emphasising that children are not merely users of AI tools but stakeholders who should contribute to AI development and governance decisions.


Reframing AI Education: From Magic Box to Understanding

Tom Hall from LEGO Education presented a fundamental reframing of AI education philosophy, challenging the conventional approach of simply teaching children to use AI tools. “AI literacy isn’t about teaching children how to use this magic box,” Hall argued. “I think far more importantly it’s like how do we give the child the screwdriver to take that box apart and really understand what’s going on under the cover.”


Hall explained that children currently view generative AI systems as magical boxes where “you type in a text or a question and then outcome images and videos and entertaining things and maybe even the answer to a history essay question.” However, he emphasised that foundational AI literacy should focus on understanding concepts such as probability, how computers process data, algorithmic bias, and the nuances of these systems.


This approach aims to transform children from passive consumers into active investigators and future creators. The goal is not merely to prepare children for today’s AI tools, but to equip them with knowledge and confidence to “build what is yet to come” and “be the designer of what is to come.” Hall argued that AI literacy should be elevated to the status of modern literacy alongside mathematics and reading, rather than being treated as an elective subject for a select few.


Hall also referenced the UK’s introduction of computer science GCSE in 2014 as an example of how educational systems can successfully integrate new technological literacies into core curriculum requirements.


Demonstrating AI Literacy: The “Strike a Pose” Lesson

Atish Joshua Gonsalves from LEGO Education provided a practical demonstration of their AI education approach through their “Strike a Pose” lesson. In this hands-on activity, students work in groups of four, taking turns building, coding, and training AI models. Students create custom AI classifiers by posing in front of cameras and training models to recognise their movements, which then control LEGO robots including their “AI Dancer” robot.


This demonstration illustrated how children can understand fundamental AI concepts like probability and classification whilst seeing their ideas come to life through physical construction. Gonsalves emphasised that their curriculum follows a 5E model: engage, explore, explain, elaborate, and evaluate, ensuring structured learning progression while maintaining hands-on engagement.


Importantly, Gonsalves explained that their approach begins with completely screen-free activities, where foundational computer science concepts like sequences, loops, and probability are taught using physical bricks before introducing any digital components. This progression ensures that children develop computational thinking through tactile experiences before engaging with more abstract digital concepts.


The Science of Hands-On Learning

Hall presented research-backed arguments for physical, collaborative learning experiences over isolated digital interactions. “When you use your hands, and the science backs this up, you are engaging all parts of your brains that lead to learning,” he explained. He cited research showing that spatial awareness skills and basic mathematics develop more effectively when children use manipulatives and think through problems physically.


LEGO’s approach deliberately designs for collaboration first, with children working in groups where they share roles and responsibilities. This methodology extends to their First LEGO League, which Hall described as “the world’s largest STEM competition,” demonstrating their commitment to collaborative, hands-on learning at scale.


Safety and Privacy as Non-Negotiable Foundations

A striking aspect of the discussion was LEGO’s unwavering commitment to child safety and privacy. Gonsalves detailed their comprehensive safety guidelines, which include ensuring that all AI features in their educational products run locally on devices with no data ever leaving the device, no collection of login information, and no sharing with third parties. They deliberately avoid anthropomorphising AI systems to prevent children from forming unhealthy emotional bonds with technology.


Richa Menke from LEGO’s Creative Play Lab revealed a surprising decision: their current SmartBrick retail products deliberately avoid using AI altogether. Despite being a company focused on AI education, she explained that “LEGO products currently don’t employ AI because the safety and privacy bar hasn’t been met for childhood applications.” This decision reflects their belief that “childhood deserves deliberation” and that rushing to implement AI without proper safeguards could have long-term consequences.


Gonsalves emphasised that “safety and student well-being is a red line, is a non-negotiable for us.” Their AI education tools are designed with explicit user controls, such as cameras that are off by default and require deliberate activation by students, ensuring that children make conscious choices about their technology interactions.


Addressing Critical Tensions in AI and Child Development

Richa Menke introduced three crucial tensions that must be addressed when developing AI for children. First, the tension between efficiency and imagination: “If I can get an answer just like this, I don’t have to wait. I don’t have to struggle. I don’t have to develop my imagination.” This raises concerns about whether AI’s quick responses might rob children of important developmental experiences that build resilience and creativity.


Second, the tension between personalisation and identity development: “A child at seven is not the same as who they’re going to be at 17. So if we start personalising the experience for who they are at seven, are we holding them back?” This highlights concerns about AI systems potentially constraining children’s natural development and identity exploration.


Third, the tension between assistance and agency: whether AI tools might create children who are skilled at prompting but lack the ability to persevere through challenges independently. These tensions frame a fundamental question about what AI development should optimise for. Menke argued that whilst current AI systems often optimise for engagement and attention, “if we optimize for childhood, then we’re going to optimize for potential.”


The Promise and Urgency of AI

Panday illustrated AI’s transformative potential through a compelling example: AI systems that can detect pancreatic cancer 438 days earlier than traditional methods. This example demonstrated why AI literacy is not just about technology education but about preparing children for a world where AI will fundamentally change how we approach problems and solutions.


She also referenced Estonia as an example of a country successfully implementing AI education at a national level, showing that systematic AI literacy programs are not only possible but already being implemented effectively in some contexts.


Equity and Accessibility Challenges

The discussion acknowledged significant equity concerns in AI education implementation. Panday highlighted that “for a child living in urban Delhi, AI has found its way into their education either through the home or the school. But for a poor tribal girl living in rural Jharkhand, perhaps not so much.” This uneven access creates new forms of digital divide that education systems must address.


The challenge extends beyond access to technology to include teacher preparation and support. Gonsalves noted that “most teachers who are teaching computer science are actually not computer science teachers themselves. They are teaching math, they’re teaching science, they’re teaching English.” This reality requires comprehensive support systems, including ready-to-use lesson plans, classroom presentations, and facilitation notes that require no additional preparation time.


Questions from the audience highlighted practical challenges faced by charitable schools that cannot afford AI training for teachers, and the need for resources in local languages. The panellists acknowledged these challenges whilst emphasising that many AI literacy concepts can be taught without expensive technology, starting with discussion-based approaches and physical materials.


Children as Active Agents and Policy Contributors

A recurring theme throughout the discussion was the recognition of children’s agency and capacity to contribute meaningfully to AI development and governance. Panday emphasised that “time and again we make the error that we underestimate the capacity of children. They’re not passive recipients of education. They have tremendous agency. They can consume tech, they can shape it, and no doubt they will lead it in time.”


Hall advocated for involving children in AI policy discussions within schools, providing them with templates to discuss AI policies and trusting their thoughtful responses. This approach represents a significant shift from traditional educational technology implementation, where children are typically seen as recipients rather than contributors to policy and design decisions.


Implementation Strategies and Practical Approaches

LEGO Education’s practical approach, with their computer science and AI product announced in January and set to reach schools in April, involves a structured learning progression that begins with foundational computer science concepts before introducing AI elements. Their curriculum balances structured learning with open-ended creativity, with early lessons providing scaffolding and guidance while later units include design challenges where students apply learned concepts more autonomously.


For contexts with limited resources, the panellists emphasised that many fundamental concepts can be taught through discussion and physical manipulation without requiring advanced technology. Hall suggested starting with conversations about bias, if-then concepts, and policy discussions that can be conducted in any classroom setting.


The session also referenced opportunities for hands-on experience, with attendees encouraged to visit the booth in Hall 3 to try the products themselves, demonstrating a commitment to experiential learning that extends beyond the formal presentation.


The Call to Action: Acting Now with Scale and Equity

The discussion concluded with a strong emphasis on urgency and action. Rather than waiting for perfect solutions, the panellists advocated for beginning AI literacy education immediately with available resources and approaches. The session established that successful AI literacy education requires treating children as active agents rather than passive consumers, prioritising hands-on collaborative learning over isolated digital experiences, maintaining non-negotiable safety and privacy standards, and addressing equity concerns to ensure universal access.


Panday emphasised that empowerment through AI literacy is something “we can do quickly, with scale and with equity,” rejecting the notion that comprehensive AI education must wait for ideal conditions or resources.


Conclusion: A Child-First Approach to AI Education

The session demonstrated a mature approach to educational technology that prioritises long-term human development over short-term technological capabilities. The discussion reframed AI education from a technology-first to a child-first approach, emphasising that the goal is not to prepare children for AI, but to prepare AI for children whilst empowering young people to become the creators and leaders of future AI development.


Most significantly, the conversation established that responsible AI literacy education requires balancing technological potential with developmental appropriateness, ensuring that children are equipped not just to use AI tools, but to understand, question, and ultimately shape the AI-powered world they will inherit and lead. The philosophical shift represents a deliberate choice to prioritise childhood development and agency over technological advancement, whilst maintaining optimism about children’s capacity to become thoughtful creators and leaders in an AI-powered future.


The recurring message throughout was clear: children deserve to be heard, included, and empowered in discussions about AI, and the time to begin this work is now, with whatever resources and approaches are available, rather than waiting for perfect conditions that may never arrive.


Session transcript
Complete transcript of the session
Speaker 1

curious how it works and I think that a lot of kids are. I would love to learn how it can be used in everyday life and how it can be used as an accurate source of information. AI is like taxes, it’s unavoidable and if you don’t learn to evolve with it you’re gonna be left behind. I definitely want to be a part of solving big problems. We need to have a say in AI policies because AI literacy is really important. Thanks for finally asking us what we think. Bye.

Tom Hall

He breaks me every time. These were children that we brought into a school in California in December. No actors in there, just a lot of children with opinions, and the little boy at the end, he just had a lot to say. He is very wise. But those were the views of just some smart, inspiring young people. They’re not just eager to use AI, but I think you can see they’re especially eager to understand and to build things with it. And just as you saw, they have some really clear ideas about how it should and shouldn’t be used in today’s classrooms. But of course, you know, excitement and confidence are not the same as mastery or comprehension.

We do see an unfortunate trend where children do not understand the fundamentals of the systems they’re interacting with. And I think you can particularly see that in younger children, who often see generative AI systems as a kind of magic box that they can type into, where you type in a text or a question and out come images and videos and entertaining things, and maybe even the answer to a history essay question. I think we need to be really clear that AI is not magic. It’s not a magic toolbox; it’s a technology system. And foundational AI literacy isn’t about teaching children how to use this magic box. Far more importantly, it’s: how do we give the child the screwdriver to take that box apart and really understand what’s going on under the cover? So while supporting children to use AI tools safely, ethically and effectively today is important, I think far more it’s about equipping them with the knowledge, the tools, the confidence to build what is yet to come. So therefore our definition of AI literacy, when we talk about it, is about understanding today’s technology, yes, but it’s far more about understanding the fundamental concepts so that you are armed and ready for what is yet to be designed, and actually so that you can be the designer of what is to come.

So I think that we have underestimated the role we have to play in preparing children today. We don’t want them to be passive consumers of AI. Instead, we really believe that we should be arming them with the tools, the literacies that are required to lead, to design, to create. And our goal is not about sort of robot-proofing our children for what’s coming at them, but just making sure that they are ready to build a better future and they’ve got the tools in their hands. So let’s talk about AI literacy as understanding the foundations of computer science and AI concepts, and that is about understanding the fundamental concepts of AI:

understanding probability, how computers sort of sense the world as data points and data sensors, algorithmic bias and all of its nuances. We don’t want that to be an elective or selective choice for just the few. We believe that these concepts have to be elevated to the status of modern literacy alongside maths and reading, problem solving, creativity and collaboration. And I think it’s best if we show you how we plan to do this in classrooms. So I’m going to hand over to Atish, and we’re going to run a live demo, which is always fun at a conference event.

Atish Joshua Gonsalves

Great, thanks, Tom. And I’m also delighted to introduce AI Dancer, who’s on the table here, and who hopefully will do some dancing soon as well. So, yeah, very excited to share a bit more about how we’ve translated some of these principles that Tom was talking about into the product. I’m really excited to shout about our new computer science and AI product, which is just fresh off the press: we announced it in January and it will hit schools in April. But we need to do all of this very responsibly. We saw the kid earlier in the video talk about how AI should be safe, fair, transparent. Very wise kid, right? We really agree, and at LEGO Education we’ve established clear guidelines for how this should work, so let me step you through some of them.

AI should be safe: we do not generate any text or any media, and we do not anthropomorphize (I got that right this time; it’s just a fancy way of saying we do not make kids think that AI is human). We do not want them forming any unhealthy emotional bonds. Fair: we ensure that all our digital products are rooted in universal design principles, and we design for kids who are neurodiverse and kids who have different learning needs, so it’s really important that our products are designed in a very fair way. Transparent: all the models that we use should have very clear data provenance. We should understand where the data that trained those models has come from, and whether the models have been trained on different geographies, on different kinds of kids and different kinds of adults. Ensuring that these models have clear data provenance is super critical for us. And then finally, privacy: I just want to stress that in all our products, AI features run locally on the devices. Nothing ever leaves the device; nothing ever goes to us at the LEGO Group; nothing goes to third parties; no login is collected. And in terms of the training, whether the kids are building their own AI models or they’re using pre-existing models, nothing ever leaves. So safety and student well-being is a red line, is a non-negotiable for us.

Everything we know from decades of education research shows us that kids learn best when they are building, when they’re using their hands and really creating, and we’ve seen this very much at LEGO Education through years of research. Now more than ever, children need to learn, and need to learn together. So much of computer science and AI today is taught with kids sitting in front of the screen with the headphones on, learning by themselves, and we don’t see this as a vision for learning. For us, kids should be building together, coding together, experimenting together, tinkering together and sharing together. That is really our vision of how kids should be learning computer science and AI, so that when they tackle these new technologies they also have those cross-cutting skills to deal with them in the real world. So, bringing this all together: at LEGO Education we have these four values that govern our approach to AI literacy. We prioritize child agency and engagement to ensure students are active participants in their own learning journeys, and we empower students with the foundations of AI that Tom was talking about, the ones that remain relevant as the technology evolves.

We uphold child safety and well-being as non-negotiable for every AI interaction in the class, and we foster hands-on, immersive and collaborative experiences that inspire creativity and shared learning. So those are really the four principles that are driving all of this. So how do we bring this into a classroom? How do we, with our products, make sure it’s hands-on, understandable and safe for kids? I would encourage you, after the session, to go to the booth, I think it’s in Hall 3, and actually see these products in person, get hands-on with them, try them out yourself. So we’re really helping students to build real AI literacy by demystifying how AI works.

Through these playful features and lessons, learners explore concepts like computer vision, probabilistic thinking, classification and machine learning, while seeing their ideas come to life. The result is student agency: kids not just using AI but actually understanding and building with it. So what better way to show you how kids are using it than for me to actually try to use it. So here we have a lesson which is about teaching kids about pre-trained classifiers. This is in the last unit, once they’ve gone through some core principles of computer science; they’ve learned about basics and events and loops and data structures. So at the end they are looking at AI and data, and here they’re learning about how you can use a pre-trained classifier, a model that already exists, to bring their AI Dancer to life.

One thing you’ll notice here, when the code is up, is that the camera that they can use is off by default. This is all in line with the principles of AI safety: it’s an explicit action the kids are taking. And here, when I hit play now... okay, I’ve got... that’s why I have a video, okay, no worries. Always fun trying to do a live demo; we always have a backup. So yeah, you can see that as I’m lifting my hands up and down, you’re seeing the different probabilities changing here. And what the kids are learning through this is that with traditional computer science you’ve got zeros and ones, things can be on and off. With AI, what they’re learning here is that there’s an 80, 70, 90 percent chance that I’ve lifted my left hand up, or my right hand up, or both hands up, and then that’s triggering the different events. They’ve learned about events in earlier lessons, and that’s what I’m talking about triggering.

So they learn that AI is not always right. They’re learning that the more data that’s trained into the model, the better it gets. And they also learn, from an ethics perspective, that if the AI model is not trained with enough kids’ examples, it will have biases in it as well. So these are very core principles of AI, but taught in a very simple and playful way, making the AI Dancer come to life.
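The contrast the demo draws, binary on/off logic versus probabilistic classifier outputs that trigger events, can be sketched in a few lines of Python. This is a hypothetical illustration only, not LEGO’s actual API; the pose labels, the `classify_pose` stand-in and the 70% threshold are all invented for the example.

```python
# Hypothetical sketch (not LEGO's actual API) of the idea from the demo:
# a classifier emits a probability per pose rather than a yes/no answer,
# and an event fires only when the model is confident enough.

def classify_pose(sensor_reading):
    """Stand-in for a pre-trained classifier. A real model would compute
    these probabilities from camera input; here they are fixed to
    illustrate the 80/70/90-percent idea from the demo."""
    return {"left_hand_up": 0.80, "right_hand_up": 0.15, "both_hands_up": 0.05}

def trigger_events(probabilities, threshold=0.7):
    """Fire an event for every pose whose probability clears the threshold,
    in contrast to traditional zeros-and-ones logic."""
    return [pose for pose, p in probabilities.items() if p >= threshold]

probs = classify_pose(sensor_reading=None)
print(trigger_events(probs))  # → ['left_hand_up']
```

Lowering the threshold makes the system more responsive but more error-prone, which mirrors the lesson’s point that AI is not always right.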

Speaker 1

Ready to excite your students with computer science and AI? This lesson is called Strike a Pose. Students will learn how to customize an AI classifier and program AI-activated events. We’ll kick off with a big question to spark curiosity: how could you train a robot to follow your movements? We will explore the topic through the computer science concepts AI and data. The question is tied to a real-life example, how AI can be trained to recognize images through data. This makes it more relatable to both students and teachers. In groups of four, each student picks a minifigure, which indicates their roles in the collaborative building process. The group will build a robot with movable arms and discuss how it might work.

Then it’s time to get hands-on with coding. Groups will open LEGO Education Coding Canvas, enter the lesson pin, and connect their hardware. Students create and train their own custom AI classifier by posing in front of the camera and capturing pose data. With simple pre-made code and their classifier, groups explore making the robot mimic their arm poses. Group members take turns so everyone gets hands-on: two students develop the build of the robot while the other two iterate on their code, and later they swap. Students present their robot, talk about their iteration process, and discuss how they created and trained their classifier. At the end of this lesson, students will be able to say: I can create a custom classifier.

I can use pose data to train a custom classifier. I can describe how to create a custom classifier and use data to train it. This is the third of four lessons in the AI and Data unit, where students explore how computers learn from data. In the following lessons, students investigate how data quality and quantity can improve how their AI detects their poses. At the end, they apply what they’ve learned through an open-ended design challenge. All materials for this lesson can be found on the LEGO Education Teacher Portal: lesson plan, ready-to-use classroom presentation and facilitation notes. No extra prep time needed.
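The capture-then-train workflow the lesson describes can be illustrated with a minimal nearest-centroid classifier. This is a simplified sketch of the general technique, not the Coding Canvas implementation; the two-number pose vectors (roughly, left and right arm height) and the labels are invented for illustration.

```python
# Hypothetical sketch of "capture pose data, then train a custom classifier"
# (not the actual Coding Canvas implementation): a nearest-centroid
# classifier over simplified pose keypoints.
import math

def train(examples):
    """examples maps each label to pose vectors captured at the camera.
    Training here just averages each label's examples into a centroid."""
    centroids = {}
    for label, vectors in examples.items():
        n = len(vectors)
        centroids[label] = tuple(sum(v[i] for v in vectors) / n
                                 for i in range(len(vectors[0])))
    return centroids

def classify(centroids, pose):
    """Return the label whose centroid is closest to the observed pose."""
    return min(centroids, key=lambda label: math.dist(centroids[label], pose))

# Each vector is an invented (left_arm_height, right_arm_height) reading.
data = {
    "arms_up":   [(0.9, 0.9), (0.8, 1.0)],
    "arms_down": [(0.1, 0.2), (0.0, 0.1)],
}
model = train(data)
print(classify(model, (0.85, 0.95)))  # → arms_up
```

Adding more capture examples per label moves each centroid closer to the true pose, which is the point the unit’s later lessons make about data quantity and quality.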

Atish Joshua Gonsalves

So you got to see how the AI model is really used, how the AI Dancer is really used in the classroom. And what you saw also in the classroom was that kids had meaningful roles in the building process as they were building out the model, but also meaningful roles when they were coding and training the AI as well. And all of this is for the kids, but none of this can happen without teachers, right? So we cannot simply drop new standards and mandates on educators without support for them. You saw the video briefly reference the teacher portal, where teachers get all the resources and the support they need to bring computer science and AI to kids.

We know that most teachers who are teaching computer science are actually not computer science teachers themselves. They are teaching math, they’re teaching science, they’re teaching English, and so they need to be prepared to really scale this up as well. So we really see this not as a challenge of access to tools, but of access to confidence. There are a couple of very nice quotes here, but I just wanted to hand over to Richa. I’m very pleased to hand over to her; she leads product development on the retail side and is behind the super exciting SmartBricks, if you’ve seen those.

Richa Menke

Thanks, Atish. Hi, everyone, good morning. Thank you for having me. My name is Richa Menke; I head up interactive play at the LEGO Group. So, we’ve just heard an important call to action in terms of AI literacy: preparing children to understand and navigate an AI-powered world. And this matters enormously. But what I’d like to do is spend a few minutes discussing the other side of this question, which is: how do we prepare AI for kids and imagination? And part of the reason we’re here is that we believe our focus on play and imagination not only unlocks exciting new play experiences, it might just be the unlock to a more inclusive and empowering future of AI.

So, childhood, as we know, is formative. It’s not a market opportunity; it’s a developmental window that closes. What enters that window shapes who we become: our sense of confidence, our curiosity, our relationship with struggle and creation. And very importantly, that shaping can often be invisible. So this is very important to us in what we do in the Creative Play Lab, which is the innovation team at the LEGO Group. What we do is look at how we create more and more relevant play experiences for kids, how we employ new technologies in service of better play for kids, but always keeping in mind our DNA as the LEGO Group: that hands-on, minds-on play experience that we all love.

So eight years ago, our team asked the question: in a world of digital screens, how could we offer kids more interactivity in their LEGO play experiences, but without a screen? And we were really, really committed to this and spent eight years getting there. We just launched, in January, the SmartPlay platform, which is a new dimension of LEGO play. What this is, is, you know, as the child is playing with the SmartBrick in their models, the play actually responds with appropriate sounds and behaviors. So imagine you have your Star Wars X-wing: the way you move it around, you know, if you fly with it, it’ll swoosh; if you drop it, it’ll make a crash sound.

So, you know, it’s really responsive to the kid. And all of this without a screen. Without a screen. That was very, very important to us. And also without AI. We just didn’t need AI in this solution. But, you know, also, we’re not entirely sure if AI is ready for childhood. We really believe that childhood deserves deliberation. And that deliberation might be an unlock, as I mentioned, to the future of AI. So first of all, AI holds tremendous potential when you think about play, when you think of the creative barriers that kids face in play. So for example, I’m sitting with my brick bin, I have a ton of bricks, I don’t know where to start: this fear of the blank canvas.

AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help us better understand a child’s intent so we could offer better, more relevant, meaningful experiences. And one of my favorite aspects, which I think is super interesting, is that generative AI is probabilistic. In other contexts, like productivity, a hallucination is a bug. But when it comes to play, maybe that hallucination is just a playful feature. So there’s huge potential in what AI could bring to offer better play. But of course, as you know, there are many challenges that need to be addressed. And there are three key tensions that we think are really important to address when we think about kids and childhood.

So first of all, there’s this tension between efficiency and imagination. If I can get an answer just like this, I don’t have to wait. I don’t have to struggle. I don’t have to develop my imagination. And does that rob kids of the opportunity to really develop their imagination and, more importantly, develop the confidence in their own imagination? Personalization and identity: a child at seven is not the same as who they’re going to be at 17. So if we start personalizing the experience for who they are at seven, are we holding them back? And then finally, assistance and agency: are we raising kids for whom it’s very easy to prompt, but who don’t have the ability to really persevere?

These are some of the key tensions that we see. And of course, there are a lot of opportunities, but we feel the responsibility to ensure that these are addressed. So when we develop new play experiences, we ask ourselves the question: does this increase or decrease the choices that a child has? So, child agency. Does this expand imagination? I’d encourage you to ask yourself that question as you develop AI solutions. Does it preserve that healthy developmental friction where you have to actually think? And finally, would I want this shaping my child’s inner voice? That’s a way to really think about what’s right.

And I’d love to leave you with this question that we spend a lot of time thinking about: as we look at AI systems today, what exactly are we optimizing for, and how important that choice is. If we optimize today’s AI systems for engagement, what we’re going to get is more attention. But if we optimize for childhood, then we’re going to optimize for potential. Thank you very much.

Saadhna Panday

All right. Good morning, everybody. I’m Saadhna Panday, and I’m the chief of education at UNICEF India. And it’s a pleasure to moderate today’s panel discussion on AI literacy and children. So we’ve heard a lot at the summit about the wonder of tech; it really feels good to talk about the wonder of children and of education. So I want to thank LEGO for creating the space for this discussion. We all know that AI has brought a step change in how we live, work, and play. And there’s no doubt that it is impacting children’s lives and how they experience education. The problem is not that AI is entering education; the problem is that it is doing it unevenly.

For a child living in urban Delhi, AI has found its way into their education either through the home or the school. But for a poor tribal girl living in rural Jharkhand, perhaps not so much. Education systems are facing massive learning challenges for which governments are seeking equitable, scalable and evidence-based solutions. Two to three decades of digital learning have yielded small-scale wins and modest impact on learning. And yet we’ve seen the massive impact of AI already on health systems, and that gives us tremendous hope. I keep repeating this example because I’m fascinated with it: in the area of radiology, AI has helped the diagnosis of pancreatic cancer 438 days earlier than would have been normally expected.

We were previously diagnosing pancreatic cancer at the fourth stage. We can now diagnose it at stage one, and it diagnoses it with greater accuracy than any human ever can, and this without touching a patient. That makes me feel excited. We are looking for that kind of accelerator in education: something that’s going to bring efficiency and quality without widening inequality and, as you’ve said, that remains deeply human-centered, because we know that learning is an inherently social process. We cannot be naive about this. We are walking a tightrope with something that is scaling so fast and evolving so rapidly, but anybody who’s worked in the education system knows it’s a big ship; it takes a wide berth to turn. But even with that, we are looking for a public good out of AI, because we need it. These are really tough interests to marry, but it has been done for vaccine rollout, and it is being done in countries like Estonia right now within the education space. Through all of this, you got it bang on: we’ve got to keep teachers, pedagogy and curricula at the center. And more than anything else, we need to keep children at the center, matching their right to learn by multiple modes, including tech, with their right to protection, participation and privacy. We need to keep that in mind. But time and again we make the error that we underestimate the capacity of children.

They’re not passive recipients of education. They have tremendous agency. They can consume tech, they can shape it, and no doubt they will lead it in time. So today’s conversation is about agency: how do we build AI that empowers children to become creative, critical, independent thinkers, that maximizes the potential, takes the best out of AI, but offsets its risks? To help us through that conversation, I have Tom and Richa. Welcome again, Tom and Richa. And we’re looking forward to a very robust engagement this morning. Okay. So Tom, we’re going to start with you. So you talked about AI sometimes feeling magical, that it’s abracadabra and voila, something beautiful appears. And we know how children love magic.

They really become enthralled with it and

Tom Hall

Children do indeed love magic, don’t we all? And we all like fast results. And increasingly, we have much shorter attention spans than we had maybe even 10 years ago, and so we’re all looking for quick fixes. I think we’re overlooking the fact that children have immediate access to data and information now, which they trust inherently from the get-go, and they will take a question and feed the answer back as if it is the gospel. So there is this real danger that AI is indeed seen as a magic box, particularly generative AI. And I think it’s amazing that children have this inherent curiosity, and the LEGO Group sort of celebrates that curiosity every day.

It’s a wonderful thing. But as I said, I think it’s a real mistake if we don’t teach children to question the magic and actually make magic for themselves. And in order to do that, that’s why we are so passionate about these fundamentals of AI literacy: because if we simply hand children a box that promises quick magical results, I think we are really short-selling them. So I’d much rather, yeah, we hand over the screwdriver, we hand over the kind of compass, and allow them to take things apart and start to create their own ideas. I’m not sure if I addressed your question there, but the magic is something we really want children to create for themselves. And I don’t think that we should be under any illusion that they’re going to work this out without an education system, and a societal system, that takes this responsibility very, very seriously. And it’s not about taking on this responsibility in a few months’ or a few years’ time; the time is now to maybe stop some things and actually start a fundamentally different approach.

Speaker 1

losing

Saadhna Panday

the responsibility to protect them.

Richa Menke

Thank you. Thank you for the question. Yes, it’s challenging, because kids have access all the time. You can’t stop it. As you say, they have a mind of their own. But I think, as we’ve seen even with social media, maybe we don’t always understand the long-term consequences. While I can have an immediate reaction and something that makes me happy in the minute, what is that going to do in the long run? So I think this focus on education as a filter to understand the long term, as a kind of compass for what is a better experience, is incredibly important. So that’s kind of our position in terms of how we would employ AI.

Saadhna Panday

Wonderful. So there are two things that we need for empowerment. One is foundational skills: the child needs to have a basic level of literacy to be able to engage with language models. Second, critical web and AI literacy. And the model you put out looks fantastic. Now let’s take the model into a real-world classroom. What is it going to look like in rural Rajasthan, where we’ve got multigrade, multilingual, multilevel classes? How do we make this come alive and have relevance for those types of settings?

Tom Hall

I think that the best thing you can do, and any teachers in this room will know this, is ask the children who are looking at you the question: what type of conversation do they want to have? And in the area of AI, we’ve just produced a template to discuss AI policies with your classes.

And you need to trust that children will assess this question in a very, very smart, thoughtful way. And if we don’t ask them the question, again, we are very guilty of simply publishing something and deciding that it’s in their best interest. Of course we need to guide them, and we’ve got a lot of information that we need to share with them. But let them think their way through this, and the best way to do that is ask the questions. So, yeah, have a discussion around, you know, where does bias show up in their lives? What might that look like if a technology system leant too heavily on a false set of information?

Teaching them sort of the basics of if-then concepts. I think you can do that in any type of classroom, and you don’t need any type of equipment on the table. You need minds to be switched on, and to do that I think you need to ask children the questions, and you need to trust that they’re going to have some thoughts, and you need to help them guide that policy. So that’s something we’d love to see widely spread.

Atish Joshua Gonsalves

Yeah, maybe just coming in from a different angle. Prior to Lego, I worked with the UN Refugee Agency for many years and saw applications of ed tech in quite rural or humanitarian contexts as well. So I think there are interesting ways to bring some of these concepts to life even in very constrained settings; I think I heard the phrase “frugal AI” being used here at the conference. But one of the things, even for us: just because we have access to these powerful models doesn’t mean we need to put them directly into the hands of kids. So even as we look at the education progression from kindergarten right up to grade eight and beyond, age appropriateness is super important.

So even as we’re looking at the littlest ones and how they learn about computational concepts and AI, a lot of where we start is actually completely screen-free. They are working with computer science concepts like sequences and loops and doing this completely with bricks. And you can imagine in some of these contexts, it may be bricks, it may be something else, but it isn’t the hardware or a screen at all. So you can teach concepts of probability and computational thinking even without these resources, and this aligns well with an age-appropriate progression. But I really challenge the audience around this urge to put things into kids’ hands directly in any context.

I mean, not just in challenging contexts in rural India, but in other countries as well. Let’s not rush for the fastest and best model, but ask what’s actually right for the kids.

Saadhna Panday

Absolutely. We need to generate a fair amount of evidence before we rush to scale with something like this. Although we have to mediate the fact that smartphone penetration in a country like India is widespread, so access is there. And a school is a microcosm of a local community. Whatever is happening in the whole country, in the home, is going to reflect in the school, and if it impacts child well-being or learning, then the schooling system will have to respond. So Tom, I’m coming back to you again. AI can sometimes feel very passive. You put something in, you get something out. But we know that the best learning happens through engagement.

It’s that journey of discovery that excites the child. So how do we make this thing interactive? What do we need to do to support creativity in the use of AI?

Tom Hall

I’ll declare my bias here, which is that I work for the LEGO Group, therefore I’m deeply entrenched in a passion for hands-on learning and a deep belief that when you use your hands, and the science backs this up, you are engaging all parts of your brain in ways that lead to learning, to deeper engagement, and ultimately a deeper mastery of the subject in front of you. We could show through thousands of research studies that we’ve done through the LEGO Foundation or with our research partners that spatial awareness skills develop more strongly when children are using their hands. The very basics of mathematics in the primary years develop in a stronger way when you’re using manipulatives and thinking things through.

So this use of hands and manipulatives is something we believe in deeply. And artificial intelligence is a concept of technology; we really believe there’s no reason why hands-on learning shouldn’t be brought in here. You saw in the video that we designed for collaboration first. So this is not a one-on-one learning experience. We really want children to learn together. Groups of four, or whatever number you put around the table. We want them to be looking at each other and challenging each other, working in groups, learning the fundamentals of collaboration. It’s not always easy. Things will break. You’ll have to start again. You might not like the role you’ve been given.

That’s a great life lesson. So AI can sometimes feel like the magic box, but also maybe the dark box. And actually, it’s about helping kids understand the technology fundamentals that underlie artificial intelligence and giving them curriculum that means something to them. We introduced a computer science GCSE in the UK back in 2014. I went to school in the UK; it’s where I live. I’m not too proud to say that it was a failure in terms of uptake by students, because there were two mistakes that we made. One was a real lack of teachers, and there was no teacher training, so no innovation was put into the delivery pipeline. But there was also a real lack of innovation in the courseware and the curriculum that we designed for that GCSE.

And so children just sat very bored in a computer science class learning very outdated principles. So I think the best thing we can do for interactivity and artificial intelligence sort of education is apply this to things that mean something to today’s teenagers and young people. And that means kind of meeting them where they are and sort of helping them apply fundamentals of AI to the life that’s going on around them. And I think that applies both to the child in the classroom and also the teacher. So give them curriculum that sort of applies now rather than

Saadhna Panday

I must say that I’ve seen the joy of the Lego bricks. I’m South African, and I would travel to the rural areas of KwaZulu-Natal, and there’d be nothing there except a hut. You go to the back of the hut and you see a child with two things: the workbook given by the South African government and hand-me-down Lego bricks. And you would see that coming alive of head, heart, and mind. It was beautiful to see. So thank you, Lego, for that. All right. Richa, I’m coming back to you.

We’re excited about the tech, but we’re also worried about safety, and we’re worried about privacy. And our young adolescents in particular, who also make up the child cohort, are worried about privacy and safety. So among all of the issues that a private entity needs to think about when designing a digital experience for children, where do safety and privacy stand? And how do you create this joyful, meaningful experience for children while reducing the risks of a tool like AI?

Richa Menke

Thank you. So, as you can imagine, safety and privacy are absolutely foundational and non-negotiable, as we’ve seen on the LEGO Education side and similarly on ours. And just to be clear, none of our LEGO products actually employ AI. So the smart brick is not using it, for all of these exact same reasons: we have a very high bar, and if you look through the lens of childhood, we have an even higher bar that we need to meet. So there is this tension: obviously there’s so much potential for meaningful, incredible, hands-on play developed through AI, but until that bar is met, we will not put it in our products.

Saadhna Panday

Excellent. So for our young people of today, who will be consumers of AI, trust, transparency, privacy, sustainability, and voice will be critically important. It’s important that we’re not just handing something to them; they get to shape it and co-create it with us. At this point we have a couple of minutes, so we’re going to take a couple of questions from the audience. Since I’m left-handed, my bias is on the left side. I’m declaring it up front. So I’m going to take three quick questions in the first round, and then I will come across. I’ll take one from the front, one from the back, and then one on this side. Right. Okay. Over to you.

Nikhil Bawa

Thank you. Fantastic session. My name is Nikhil Bawa. I write about AI and education. I’m curious what advice you would have for parents, because schools are going to be slow to adapt. Do you have resources for parents in particular? I’m trying to develop an alternate home curriculum for four hours a week, outside of school, for my kid, so I’m curious what you would recommend for parents. You need a combination of both structured and unstructured play, right? I want to know your views on how you’re thinking about unstructured play with AI, and then also other things like self-regulation, which becomes very difficult for even a teen to manage.

So that’s one question. And the second is, we’re doing research on AI adoption at home, beyond classrooms. And the initial findings are quite disturbing, because it is getting adopted just because it’s becoming like a race, especially in India. So I would also like to know if you have some recommendations on AI play adoption, beyond the classroom.

Asha Nanavati

Good morning. Thank you so much. My name is Asha Nanavati. I’m with Alliance Educational Foundation, which runs a small charitable K-12 school in Kerala. They love the Lego products, you know. But I really heard what you said earlier, Richa, about capacity building, about including teachers. We’re a charitable school; all profits go back to the meals, to the child. And we don’t have funding for training teachers on AI adoption and safety practices. We have learners from play school up. So is Lego thinking about doing anything in India? We would definitely love to hear more about that. Thank you.

Tom Hall

Can we take a response to those questions? Can I work backwards? So we have a recommended AI toolkit to take into classrooms. It’s a facilitated conversation with children around, you know, what do you think about AI? What should a policy be for a school and a classroom? To be honest, I think that is as applicable to a group of teachers on a training day as it is to children and a teacher. And I’ve seen really great examples of schools I know in the UK following a similar approach. I think maybe there’s a theme in all of the questions: don’t worry about applying the brakes, right? Things are moving incredibly fast, and I wouldn’t go along with what can feel like this very fast river or wave or current.

I think it’s perfectly okay to apply the brakes and say we need to hit pause and we need to have a conversation. And the conversation needs to be about what we want. And when I say we, I mean the children in the classroom and the teacher. What do we want to get out of this experience? Have the conversation first, and don’t worry too much about the tools or the software that you’re worried you might be missing out on. And as Richa just shared, we’re not using generative AI in our products, and that’s for a very deliberate reason: we just don’t know enough yet about safety and privacy.

We have conducted research into that, and we’re following that very closely, but we’re not willing to take any risks. And I think this time of childhood is just too precious to make some shotgun choices that we’re going to pay very heavily for in the future. So I think empower the teacher and the child to have some really formative discussions about what do we want to get out of this, and then maybe look at what’s available.

Atish Joshua Gonsalves

Learning, and child agency versus some scaffolding. So as we bring these products into core classrooms as part of an education strategy, we do understand the need for teachers to provide scaffolding as they take students through this learning journey. At LEGO Education, for example, we follow something called the 5E model: engage, explore, explain, elaborate, and evaluate. But it’s just a fancy way of saying: how do you get the kids hooked initially on a big-picture question or a real-life example, while providing the educators and the students a structure as they go through the process of thinking about that question? Someone had that point yesterday, about the distance between a question and an answer; that space in between is where the magic or inspiration happens, right?

And so giving that space for that to happen. And then, when they’re building, you’re providing the structure for them to work in groups and build this out. But towards the end, in the elaboration phase and at the end of every unit, there’s something called a design challenge, where the kids are not given much instruction. They’re given an open-ended prompt, and they take the concepts and lessons they’ve learned and apply them in a more open-ended way. Outside of the LEGO Education computer science and AI product, we also have something called FIRST LEGO League, which is the world’s largest annual STEM competition. And there it’s so inspiring to see these groups of eight kids building a robotics challenge and then doing a science theme as well, completely open-ended. They will go beyond what they would do within a 45-minute lesson and have a lot more agency in terms of what they can create, beyond what the teacher would take them through in a classroom.

Tom Hall

Nikhil, we have some really great resources available online, both from Lego and the Lego Foundation, around facilitated play with your child, starting from the very early years through to later years.

Saadhna Panday

So I was going to take two more questions, but we’re coming to the end of the session and we need to close. Okay, now I will take one question, but really quickly.

Tom Hall

Well, I think we heard a lot yesterday that we need to make sure that any tools that are made available are done so in languages that mean something to you on the ground. So I think there are many tools out there that can do automated translation. We hope that the quality is going to be really strong in them. We’re currently producing in English language. Of course, there will be localizations in the future.

Saadhna Panday

All right, colleagues, we need to come to a close because people need to move to the next session. We’re designing for safety, for equity, and while we provide services, we need to match them with demand. And to match demand, teachers, learners, and parents need to be empowered. That responsibility rests with all of us. It’s hard to do many things in an education system; empowerment is not one of them. We can do that quickly, we can do it at scale, and we can do it with equity. So I want to say thank you to our panelists for an engaging conversation, and a big thank you to Lego for bringing us together to have a conversation about children, education, and AI.

Thank you so much. The session is closed. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Tom Hall
6 arguments · 167 words per minute · 2191 words · 786 seconds
Argument 1
AI literacy should focus on understanding fundamental concepts rather than just using AI tools as “magic boxes” – children need the “screwdriver” to understand what’s happening under the hood
EXPLANATION
Tom Hall argues that children should not view AI as a magical system that produces quick results without understanding. Instead, they need to be equipped with the tools and knowledge to deconstruct and understand how AI systems actually work, moving beyond surface-level usage to deeper comprehension.
EVIDENCE
Hall mentions that younger children often see generative AI systems as “a kind of magic box” where you input text and get outputs like images, videos, and essay answers. He emphasizes the need to give children “the screwdriver to take that box apart and really understand what’s going on under the cover.”
MAJOR DISCUSSION POINT
AI literacy and education fundamentals
Argument 2
AI literacy must be elevated to the status of modern literacy alongside math and reading, not treated as an elective for a few
EXPLANATION
Hall contends that understanding AI concepts should be considered as fundamental as traditional literacy skills. He believes these concepts should be universally accessible rather than being optional subjects available only to select students.
EVIDENCE
Hall states: “We believe that these concepts have to be elevated to the status of modern literacy alongside maths and reading, problem solving, creativity and collaboration. And I think it’s best if we show you how we plan to do this in classrooms.”
MAJOR DISCUSSION POINT
AI literacy and education fundamentals
Argument 3
We should ask children what type of conversations they want to have about AI and trust their thoughtful responses
EXPLANATION
Hall advocates for including children’s voices in AI policy discussions rather than making decisions for them without their input. He believes children can think through AI-related issues in smart and thoughtful ways when given the opportunity.
EVIDENCE
Hall mentions producing “a template to discuss AI policies with your classes” and emphasizes asking children questions about where bias shows up in their lives and what false information in technology systems might look like. He states: “let them think their way through this, and the best way to do that is ask the questions.”
MAJOR DISCUSSION POINT
Child agency and empowerment
AGREED WITH
Atish Joshua Gonsalves, Saadhna Panday, Speaker 1
Argument 4
Learning is most effective when children use their hands and engage in spatial awareness activities supported by research
EXPLANATION
Hall argues that hands-on learning engages all parts of the brain and leads to deeper understanding and mastery. He emphasizes that physical manipulation and spatial awareness activities are scientifically proven to enhance learning outcomes.
EVIDENCE
Hall cites “thousands of research studies that we’ve done through the LEGO Foundation or with any of our research partners that spatial awareness skills develop stronger when children are using their hands. The very basics of mathematics in primary years will develop in a stronger way when you’re using manipulatives.”
MAJOR DISCUSSION POINT
Hands-on and collaborative learning
AGREED WITH
Atish Joshua Gonsalves, Richa Menke
Argument 5
AI education should be designed for collaboration first, with children working in groups and learning together
EXPLANATION
Hall believes that AI learning should prioritize collaborative experiences where children work together in groups, challenge each other, and learn important life skills through group dynamics rather than individual screen-based learning.
EVIDENCE
Hall describes designing “for collaboration first. So this is not a one-on-one learning experience. We really want children to learn together. Groups of four. One, two, three, four. Whatever number you put around the table.” He mentions that children learn collaboration skills and that “things will break. You’ll have to start again. You might not like the role you’ve been given. That’s a great life lesson.”
MAJOR DISCUSSION POINT
Hands-on and collaborative learning
AGREED WITH
Atish Joshua Gonsalves, Richa Menke
Argument 6
It’s acceptable to apply brakes and have conversations about what we want from AI rather than rushing to adopt tools
EXPLANATION
Hall suggests that it’s perfectly reasonable to slow down AI adoption and have deliberate conversations about goals and desired outcomes before implementing AI tools. He advocates for thoughtful consideration over rapid implementation.
EVIDENCE
Hall states: “Things are moving incredibly fast. I wouldn’t go along with what can feel like this very fast river or wave or current. I think it’s perfectly okay to apply the brakes and say we need to hit pause and we need to have a conversation.” He emphasizes having conversations about “what do we want to get out of this experience” before worrying about specific tools or software.
MAJOR DISCUSSION POINT
Balancing innovation with child development
AGREED WITH
Atish Joshua Gonsalves, Richa Menke
DISAGREED WITH
Nikhil Bawa
Atish Joshua Gonsalves
7 arguments · 214 words per minute · 2059 words · 575 seconds
Argument 1
AI features should run locally on devices with no data leaving the device, no login collection, and no third-party data sharing
EXPLANATION
Gonsalves emphasizes strict privacy protections in AI educational products, ensuring that all AI processing happens locally on student devices without any data transmission to external parties. This approach prioritizes student privacy and data security.
EVIDENCE
Gonsalves states: “in all our products AI features run locally on the devices nothing ever leaves the device nothing ever goes to us at the Lego group nothing goes to third parties no login is collected there’s in terms of the trading whether the kids are building their own AI models or they’re using pre-existing models nothing ever leaves.”
MAJOR DISCUSSION POINT
Safety, privacy, and responsible AI development
AGREED WITH
Tom Hall, Richa Menke
Argument 2
AI should not be anthropomorphized or create unhealthy emotional bonds with children
EXPLANATION
Gonsalves argues that AI systems designed for children should not be made to seem human-like or encourage children to form emotional attachments to the technology. This is part of responsible AI design for young users.
EVIDENCE
Gonsalves mentions: “we do not anthropomorphize I got that right this time it’s just a fancy way of saying we do not make it think that AI is human we do not want them forming any unhealthy emotional bonds.”
MAJOR DISCUSSION POINT
Safety, privacy, and responsible AI development
Argument 3
Safety and student well-being are non-negotiable red lines in AI product development
EXPLANATION
Gonsalves establishes that student safety and well-being are absolute priorities that cannot be compromised in AI educational product development. These considerations override other product features or capabilities.
EVIDENCE
Gonsalves explicitly states: “safety and student well-being is a red line is a non-negotiable for us so everything we know about decades of education research and the way we use AI is very important to us.”
MAJOR DISCUSSION POINT
Safety, privacy, and responsible AI development
AGREED WITH
Tom Hall, Richa Menke
Argument 4
Children learn best through building, coding, experimenting, and sharing together rather than isolated screen-based learning
EXPLANATION
Gonsalves advocates for collaborative, hands-on learning approaches in computer science and AI education, contrasting this with solitary screen-based learning that he sees as less effective for developing both technical and social skills.
EVIDENCE
Gonsalves describes the current state: “so much of computer science and AI today is stores with kids sitting in front of the screen with the headphones on by themselves learning and I don’t think we see this as a vision for learning for us kids should be building together coding together experimenting together tinkering together and sharing together.”
MAJOR DISCUSSION POINT
Hands-on and collaborative learning
AGREED WITH
Tom Hall, Richa Menke
Argument 5
Children should be active participants in their learning journeys rather than passive consumers of AI
EXPLANATION
Gonsalves emphasizes the importance of child agency and engagement in AI education, ensuring that students are actively involved in their learning process rather than simply receiving information or using tools passively.
EVIDENCE
This is mentioned as one of the four values governing LEGO Education’s approach: “we prioritize child agency and engagement to ensure students are active participants in their own learning journeys.”
MAJOR DISCUSSION POINT
Child agency and empowerment
AGREED WITH
Tom Hall, Saadhna Panday, Speaker 1
Argument 6
AI concepts can be taught without screens or advanced hardware, starting with basic computational thinking using physical materials
EXPLANATION
Gonsalves argues that fundamental AI and computer science concepts can be introduced to children using simple, physical materials rather than requiring sophisticated technology. This makes AI education more accessible across different contexts and resource levels.
EVIDENCE
Gonsalves explains: “even as we’re looking at the littlest ones and how they learn about computational concepts and AI, a lot of where we start is actually completely screen free. They are working with understanding computer science concepts like sequences and loops and just doing this completely with bricks.”
MAJOR DISCUSSION POINT
AI literacy and education fundamentals
AGREED WITH
Saadhna Panday, Asha Nanavati
Argument 7
Teachers need support and confidence-building, not just access to tools, especially since most aren’t computer science specialists
EXPLANATION
Gonsalves identifies that the main barrier to scaling AI education is not lack of tools but lack of teacher confidence and preparation. He notes that most teachers delivering computer science education are actually specialists in other subjects.
EVIDENCE
Gonsalves states: “We know that most teachers who are teaching computer science are actually not computer science teachers themselves. They are teaching math, they’re teaching science, they’re teaching English and so they need to be prepared to really scale this up as well. So we really see this not as a problem. It’s not as a challenge in terms of access to tools but an access to confidence.”
MAJOR DISCUSSION POINT
Equity and accessibility concerns
AGREED WITH
Tom Hall, Speaker 4, Asha Nanavati
Richa Menke
6 arguments · 163 words per minute · 1203 words · 441 seconds
Argument 1
LEGO products currently don’t employ AI because the safety and privacy bar hasn’t been met for childhood applications
EXPLANATION
Menke explains that despite the potential benefits of AI in play experiences, LEGO has chosen not to implement AI in their products because they maintain higher safety and privacy standards when designing for children that current AI technology cannot meet.
EVIDENCE
Menke states: “just to be clear, none of our LEGO products actually employ AI. So the smart brick is not using it because for all of these exact same reasons that we have a very high bar, if you look through the lens of childhood, we have a higher bar that we need to meet.”
MAJOR DISCUSSION POINT
Safety, privacy, and responsible AI development
AGREED WITH
Tom Hall, Atish Joshua Gonsalves
DISAGREED WITH
Speaker 1
Argument 2
We need to be cautious about long-term consequences of AI on children, similar to lessons learned from social media
EXPLANATION
Menke draws parallels between current AI adoption and past social media implementation, warning that immediate positive reactions don’t necessarily indicate long-term benefits for child development. She advocates for considering long-term developmental impacts.
EVIDENCE
Menke explains: “as we’ve seen even with social media that maybe we don’t always understand the long-term consequences. While I can have an immediate reaction and something that makes me happy in the minute, what is that going to do in the long run?”
MAJOR DISCUSSION POINT
Safety, privacy, and responsible AI development
Argument 3
There’s tension between AI efficiency and developing children’s imagination and struggle-through-problems abilities
EXPLANATION
Menke identifies a fundamental conflict between AI’s ability to provide quick answers and children’s need to develop imagination, patience, and problem-solving skills through struggle and waiting. She questions whether AI efficiency might rob children of important developmental experiences.
EVIDENCE
Menke describes the tension: “If I can get an answer just like this, I don’t have to wait. I don’t have to struggle. I don’t have to develop my imagination. And does that rob kids of the opportunity to really develop their imagination and more importantly, develop the confidence in their own imagination?”
MAJOR DISCUSSION POINT
Balancing innovation with child development
Argument 4
AI development should increase rather than decrease the choices available to children
EXPLANATION
Menke proposes that when developing AI experiences for children, the key question should be whether the technology expands or limits children’s agency and options. She advocates for AI that empowers rather than constrains child choice.
EVIDENCE
Menke asks: “when we develop new play experiences, we ask ourselves the question, does this increase or decrease the choices that a child has? So child agency.”
MAJOR DISCUSSION POINT
Child agency and empowerment
Argument 5
We should optimize AI systems for childhood and potential rather than just engagement and attention
EXPLANATION
Menke argues that the fundamental design philosophy for AI systems should prioritize long-term child development and potential rather than short-term engagement metrics. This represents a different approach to AI optimization when children are involved.
EVIDENCE
Menke concludes: “as we look at AI systems today, what exactly are we optimizing for and how important that choice is? So if today AI systems, if we optimize for engagement, what we’re going to get is more attention. But what if, what if… If we optimize for childhood, then we’re going to optimize for potential.”
MAJOR DISCUSSION POINT
Balancing innovation with child development
Argument 6
Hands-on, minds-on play experiences should remain central even when incorporating new technologies
EXPLANATION
Menke emphasizes that LEGO’s core philosophy of physical, hands-on play should be maintained even as new technologies like AI are considered. She advocates for technology that enhances rather than replaces tactile play experiences.
EVIDENCE
Menke describes their approach: “how do we employ new technologies in service of better play for kids, but always keeping in mind our DNA as the LEGO Group, that hands-on, minds-on play experience that we all love.”
MAJOR DISCUSSION POINT
Hands-on and collaborative learning
AGREED WITH
Tom Hall, Atish Joshua Gonsalves
Saadhna Panday
4 arguments · 129 words per minute · 1416 words · 657 seconds
Argument 1
AI is impacting children’s education unevenly – urban children have access while rural children may not
EXPLANATION
Panday highlights the digital divide in AI access, noting that children in urban areas are experiencing AI integration in education through homes and schools, while children in rural or disadvantaged areas lack similar access. This creates educational inequality.
EVIDENCE
Panday provides specific examples: “For a child living in urban Delhi, AI has found its way into their education either through the home or the school. But for a poor tribal girl living in rural Jharkhand, perhaps not so much.”
MAJOR DISCUSSION POINT
Equity and accessibility concerns
AGREED WITH
Atish Joshua Gonsalves, Asha Nanavati
Argument 2
We need equitable, scalable, and evidence-based solutions that don’t widen inequality
EXPLANATION
Panday argues that education systems need AI solutions that can be implemented at scale while maintaining equity and being based on solid evidence. She emphasizes the importance of avoiding solutions that might increase rather than decrease educational disparities.
EVIDENCE
Panday states: “Education systems are facing massive learning challenges for which governments are seeking equitable, scalable and evidence-based solutions.” She also mentions the need for solutions “without widening inequality and as you’ve said that remains deeply human centered because we know that learning is an inherently social process.”
MAJOR DISCUSSION POINT
Equity and accessibility concerns
AGREED WITH
Atish Joshua Gonsalves, Asha Nanavati
Argument 3
Children are not passive recipients but have tremendous agency to consume, shape, and lead technology
EXPLANATION
Panday challenges the view of children as merely passive users of technology, arguing instead that they have significant capacity to actively engage with, influence, and eventually lead technological development. She emphasizes the importance of recognizing and nurturing this agency.
EVIDENCE
Panday states: “But time and again we make the error that we underestimate the capacity of children. They’re not passive recipients of education. They have tremendous agency. They can consume tech, they can shape it, and no doubt they will lead it in time.”
MAJOR DISCUSSION POINT
Child agency and empowerment
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Speaker 1
Argument 4
Children need foundational skills and critical web/AI literacy to engage meaningfully with AI systems
EXPLANATION
Panday identifies two essential components for empowering children with AI: basic literacy skills that enable them to interact with language models, and critical thinking skills that help them evaluate and understand AI systems and their outputs.
EVIDENCE
Panday explains: “So there’s two things that we need for empowerment. One is foundational skills. The child needs to have a basic level of literacy to be able to engage with language models. Second, critical web and AI literacy.”
MAJOR DISCUSSION POINT
AI literacy and education fundamentals
Asha Nanavati
1 argument, 140 words per minute, 101 words, 43 seconds
Argument 1
Charitable schools need accessible training and resources for AI adoption and safety practices
EXPLANATION
Nanavati represents the perspective of resource-constrained educational institutions that want to implement AI education but lack funding for comprehensive teacher training and safety practices. She highlights the need for accessible support for schools serving disadvantaged populations.
EVIDENCE
Nanavati explains: “We’re a charitable school. All profits go back to the meals, the child. And we don’t maybe have funding for training teachers on AI adoption safety practices. We have play school learners up. So is Lego thinking about doing anything in India?”
MAJOR DISCUSSION POINT
Equity and accessibility concerns
AGREED WITH
Saadhna Panday, Atish Joshua Gonsalves
Nikhil Bawa
1 argument, 116 words per minute, 200 words, 102 seconds
Argument 1
Parents need resources for home-based AI education since schools are slow to adapt
EXPLANATION
Bawa identifies a gap between the pace of AI development and school adaptation, arguing that parents need practical resources and curricula to provide AI education at home while waiting for formal education systems to catch up.
EVIDENCE
Bawa states: “schools are going to be slow to adapt. And so do you have resources for parents in particular about, because they will, I mean, I’m trying to develop an alternate home curriculum for four hours a week outside. at a school for my kid.”
MAJOR DISCUSSION POINT
Balancing innovation with child development
DISAGREED WITH
Tom Hall
Speaker 1
3 arguments, 146 words per minute, 455 words, 186 seconds
Argument 1
Children are curious about how AI works and want to learn how it can be used in everyday life and as an accurate source of information
EXPLANATION
This speaker expresses genuine curiosity about AI functionality and practical applications. They recognize the importance of understanding AI’s role in daily life and its reliability as an information source.
EVIDENCE
The speaker states: “curious how it works and I think that a lot of kids are. I would love to learn how it can be used in everyday life and how it can be used as an accurate source of information.”
MAJOR DISCUSSION POINT
Child agency and empowerment
Argument 2
AI is unavoidable like taxes, and people need to evolve with it or be left behind
EXPLANATION
The speaker draws a parallel between AI and taxes to emphasize AI’s inevitability in society. They argue that adaptation to AI technology is necessary for staying relevant and competitive in the future.
EVIDENCE
The speaker states: “AI is like taxes, it’s unavoidable and if you don’t learn to evolve with it you’re gonna be left behind.”
MAJOR DISCUSSION POINT
AI literacy and education fundamentals
DISAGREED WITH
Richa Menke
Argument 3
Children want to be part of solving big problems and need to have a say in AI policies because AI literacy is important
EXPLANATION
This speaker advocates for meaningful youth participation in AI governance and policy-making. They emphasize that children should be active contributors to addressing societal challenges through AI and should have input in how AI policies are developed.
EVIDENCE
The speaker states: “I definitely want to be a part of solving big problems. We need to have a say in AI policies because AI literacy is really important. Thanks for finally asking us what we think.”
MAJOR DISCUSSION POINT
Child agency and empowerment
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Saadhna Panday
Speaker 4
1 argument, 1 word per minute, 1 word, 54 seconds
Argument 1
Teachers need comprehensive support including lesson plans, classroom presentations, and facilitation notes with no extra preparation time required
EXPLANATION
This speaker emphasizes that successful AI education implementation requires providing teachers with complete, ready-to-use resources. They recognize that teachers are already overburdened and need streamlined materials that don’t require additional preparation time.
EVIDENCE
The speaker mentions: “All materials for this lesson can be found on the LEGO Education Teacher Portal lesson plan, ready-to-use classroom presentation and facilitation notes. No extra. No extra prep time needed.”
MAJOR DISCUSSION POINT
Equity and accessibility concerns
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Asha Nanavati
Agreements
Agreement Points
Child safety and privacy are non-negotiable priorities in AI development
Speakers: Tom Hall, Atish Joshua Gonsalves, Richa Menke
It’s acceptable to apply brakes and have conversations about what we want from AI rather than rushing to adopt tools
AI features should run locally on devices with no data leaving the device, no login collection, and no third-party data sharing
Safety and student well-being are non-negotiable red lines in AI product development
LEGO products currently don’t employ AI because the safety and privacy bar hasn’t been met for childhood applications
All LEGO representatives agree that child safety and privacy must be absolute priorities, even if it means slowing down or avoiding AI implementation until proper safeguards are established
Hands-on, collaborative learning is superior to isolated screen-based learning
Speakers: Tom Hall, Atish Joshua Gonsalves, Richa Menke
Learning is most effective when children use their hands and engage in spatial awareness activities supported by research
AI education should be designed for collaboration first, with children working in groups and learning together
Children learn best through building, coding, experimenting, and sharing together rather than isolated screen-based learning
Hands-on, minds-on play experiences should remain central even when incorporating new technologies
All speakers emphasize the importance of physical, collaborative learning experiences over individual digital interactions, backed by research on spatial awareness and social learning
Children should be active agents rather than passive consumers of AI
Speakers: Tom Hall, Atish Joshua Gonsalves, Saadhna Panday, Speaker 1
AI literacy should focus on understanding fundamental concepts rather than just using AI tools as ‘magic boxes’ – children need the ‘screwdriver’ to understand what’s happening under the hood
We should ask children what type of conversations they want to have about AI and trust their thoughtful responses
Children should be active participants in their learning journeys rather than passive consumers of AI
Children are not passive recipients but have tremendous agency to consume, shape, and lead technology
Children want to be part of solving big problems and need to have a say in AI policies because AI literacy is important
There is strong consensus that children should be empowered as active participants who understand, question, and help shape AI rather than simply using it as consumers
Teacher support and confidence-building are essential for successful AI education implementation
Speakers: Tom Hall, Atish Joshua Gonsalves, Speaker 4, Asha Nanavati
Teachers need support and confidence-building, not just access to tools, especially since most aren’t computer science specialists
Teachers need comprehensive support including lesson plans, classroom presentations, and facilitation notes with no extra preparation time required
Charitable schools need accessible training and resources for AI adoption and safety practices
All speakers recognize that teachers need substantial support, training, and ready-to-use resources to effectively implement AI education, particularly since most are not computer science specialists
AI education must address equity and accessibility concerns
Speakers: Saadhna Panday, Atish Joshua Gonsalves, Asha Nanavati
AI is impacting children’s education unevenly – urban children have access while rural children may not
We need equitable, scalable, and evidence-based solutions that don’t widen inequality
AI concepts can be taught without screens or advanced hardware, starting with basic computational thinking using physical materials
Charitable schools need accessible training and resources for AI adoption and safety practices
Speakers agree that AI education initiatives must actively address digital divides and ensure that solutions are accessible to disadvantaged communities and resource-constrained schools
Similar Viewpoints
Both speakers advocate for deliberate, cautious approaches to AI implementation that prioritize long-term child development over rapid technological adoption
Speakers: Tom Hall, Richa Menke
It’s acceptable to apply brakes and have conversations about what we want from AI rather than rushing to adopt tools
We need to be cautious about long-term consequences of AI on children, similar to lessons learned from social media
Both speakers are concerned about AI potentially interfering with healthy child development, whether through inappropriate emotional attachments or by removing beneficial developmental challenges
Speakers: Atish Joshua Gonsalves, Richa Menke
AI should not be anthropomorphized or create unhealthy emotional bonds with children
There’s tension between AI efficiency and developing children’s imagination and struggle-through-problems abilities
Both speakers view AI literacy as a fundamental skill that should be universally accessible rather than optional, requiring both basic and critical thinking components
Speakers: Tom Hall, Saadhna Panday
AI literacy must be elevated to the status of modern literacy alongside math and reading, not treated as an elective for a few
Children need foundational skills and critical web/AI literacy to engage meaningfully with AI systems
Unexpected Consensus
Deliberately not using AI in current products despite being an AI education company
Speakers: Atish Joshua Gonsalves, Richa Menke
Safety and student well-being are non-negotiable red lines in AI product development
LEGO products currently don’t employ AI because the safety and privacy bar hasn’t been met for childhood applications
It’s unexpected that a company focused on AI education would deliberately avoid using AI in their products, but both speakers from LEGO agree that current AI technology doesn’t meet their safety standards for children
Starting AI education without screens or advanced technology
Speakers: Tom Hall, Atish Joshua Gonsalves
We should ask children what type of conversations they want to have about AI and trust their thoughtful responses
AI concepts can be taught without screens or advanced hardware, starting with basic computational thinking using physical materials
Surprisingly, AI education experts advocate for beginning AI literacy through completely analog, discussion-based and physical manipulation methods rather than digital tools
Children as AI policy contributors rather than just users
Speakers: Tom Hall, Saadhna Panday, Speaker 1
We should ask children what type of conversations they want to have about AI and trust their thoughtful responses
Children are not passive recipients but have tremendous agency to consume, shape, and lead technology
Children want to be part of solving big problems and need to have a say in AI policies because AI literacy is important
There’s unexpected consensus that children should be involved in AI policy-making and governance discussions, not just be recipients of AI education – treating them as stakeholders in AI development
Overall Assessment

The speakers demonstrate remarkably high consensus on prioritizing child safety, agency, and holistic development over rapid AI adoption. Key areas of agreement include the need for hands-on collaborative learning, treating children as active agents rather than passive consumers, ensuring teacher support, and addressing equity concerns. Unexpectedly, even AI education advocates emphasize caution and deliberate implementation.

Very high consensus with strong alignment on fundamental principles. The implications are significant for AI education policy – suggesting that successful implementation requires prioritizing child development principles over technological capabilities, ensuring universal access, and involving children as active participants in shaping AI governance rather than just users of AI tools.

Differences
Different Viewpoints
Timeline and urgency for AI implementation in education
Speakers: Tom Hall, Nikhil Bawa
It’s acceptable to apply brakes and have conversations about what we want from AI rather than rushing to adopt tools
Parents need resources for home-based AI education since schools are slow to adapt
Tom Hall advocates for slowing down and having deliberate conversations before implementing AI tools, while Nikhil Bawa expresses urgency about the need for immediate AI education resources because schools are adapting too slowly
Current readiness of AI technology for children
Speakers: Richa Menke, Speaker 1
LEGO products currently don’t employ AI because the safety and privacy bar hasn’t been met for childhood applications
AI is unavoidable like taxes, and people need to evolve with it or be left behind
Richa Menke believes AI technology is not yet ready for children and maintains high safety standards, while Speaker 1 views AI as inevitable and emphasizes the need to adapt quickly
Unexpected Differences
Role of efficiency in child development
Speakers: Richa Menke, Speaker 1
There’s tension between AI efficiency and developing children’s imagination and struggle-through-problems abilities
AI is unavoidable like taxes, and people need to evolve with it or be left behind
This disagreement is unexpected because it reveals a fundamental philosophical divide about whether AI’s efficiency benefits children or potentially harms their development. Richa questions whether quick AI answers rob children of important developmental struggles, while Speaker 1 sees AI adaptation as necessary evolution
Overall Assessment

The main areas of disagreement center on the timeline for AI implementation, the current readiness of AI technology for children, and the balance between AI efficiency and child development needs.

The level of disagreement is moderate but philosophically significant. While speakers generally agree on core principles like child safety, agency, and hands-on learning, they differ substantially on implementation approaches and urgency. These disagreements have important implications for AI education policy, as they reflect tensions between innovation advocates who see AI as inevitable and child development experts who prioritize safety and developmental appropriateness over rapid adoption.

Partial Agreements
All speakers agree on the importance of hands-on, foundational learning approaches, but they differ in their implementation strategies – Tom emphasizes giving children the ‘screwdriver’ to understand AI, Atish focuses on screen-free computational thinking with physical materials, and Richa prioritizes maintaining traditional play experiences
Speakers: Tom Hall, Atish Joshua Gonsalves, Richa Menke
AI literacy should focus on understanding fundamental concepts rather than just using AI tools as ‘magic boxes’
AI concepts can be taught without screens or advanced hardware, starting with basic computational thinking using physical materials
Hands-on, minds-on play experiences should remain central even when incorporating new technologies
Both speakers agree on the importance of child agency and empowerment, but Tom focuses on involving children in AI policy discussions within educational settings, while Saadhna emphasizes children’s broader capacity to lead technological development
Speakers: Tom Hall, Saadhna Panday
We should ask children what type of conversations they want to have about AI and trust their thoughtful responses
Children are not passive recipients but have tremendous agency to consume, shape, and lead technology
Both speakers recognize the need for equitable access to AI education, but Saadhna focuses on systemic solutions at the policy level while Asha represents the practical challenges faced by resource-constrained schools
Speakers: Saadhna Panday, Asha Nanavati
We need equitable, scalable, and evidence-based solutions that don’t widen inequality
Charitable schools need accessible training and resources for AI adoption and safety practices
Takeaways
Key takeaways
AI literacy should focus on understanding fundamental concepts rather than just using AI tools – children need to understand what’s happening ‘under the hood’ rather than treating AI as a ‘magic box’
AI literacy must be elevated to the status of modern literacy alongside math and reading, not treated as an optional elective
Safety and privacy are non-negotiable in AI development for children – all AI features should run locally with no data leaving devices
Children should be active participants and co-creators in AI development rather than passive consumers
Hands-on, collaborative learning approaches are most effective for AI education, with children working in groups rather than isolated screen-based learning
AI is creating educational inequity – urban children have access while rural children may not, requiring deliberate efforts to ensure equitable access
Teachers need confidence-building and support, not just access to tools, since most educators teaching computer science are not specialists in the field
It’s acceptable to ‘apply the brakes’ and have conversations about desired outcomes before rushing to adopt AI tools
AI development should optimize for childhood development and potential rather than just engagement and attention
Resolutions and action items
LEGO Education will launch their new computer science and AI product in April with safety guidelines ensuring local processing and no data collection
LEGO has created an AI toolkit for classroom discussions about AI policies that can be used by teachers and parents
LEGO will provide resources through their teacher portal to support educators who are not computer science specialists
LEGO will offer resources online for parents to facilitate AI-related play and learning at home
Future localization of LEGO’s AI education materials into multiple languages is planned
Unresolved issues
How to effectively scale AI literacy education to rural and underserved communities with limited resources
Funding and support mechanisms for charitable schools that cannot afford AI training for teachers
Long-term consequences of AI exposure on child development and learning patterns
Balancing the tension between AI efficiency and children’s need to develop imagination and problem-solving through struggle
Creating age-appropriate AI education progression from early childhood through adolescence
Addressing the rapid pace of AI adoption in homes, particularly in India, which may have ‘disturbing’ implications according to research
Developing evidence-based approaches before scaling AI education solutions widely
Suggested compromises
Start AI education with completely screen-free, hands-on activities using physical materials like bricks to teach computational concepts
Implement a gradual progression approach where younger children learn foundational concepts without direct AI interaction
Use existing tools with automated translation capabilities while working toward proper localization
Focus on facilitated discussions and policy conversations about AI rather than rushing to implement AI tools
Combine structured learning with open-ended design challenges to balance scaffolding with student agency
Apply the same AI discussion toolkit to both children and teachers to build understanding across all stakeholders
Thought Provoking Comments
AI literacy isn’t about teaching children how to use this magic box. I think far more importantly it’s like how do we give the child the screwdriver to take that box apart and really understand what’s going on under the cover… our definition of AI literacy when we talk about it, it’s about understanding today’s technology, yes, but it’s far more about understanding the fundamental concepts so that you are armed and ready for what is yet to be designed, and actually so that you can be the designer of what is to come.
This comment reframes the entire discussion by challenging the conventional approach to AI education. Instead of focusing on tool usage, Hall advocates for deep understanding of underlying principles. The metaphor of giving children ‘the screwdriver to take that box apart’ is particularly powerful as it transforms children from passive consumers to active investigators and future creators.
This comment established the philosophical foundation for the entire discussion, shifting focus from AI as a consumption tool to AI as something children should understand, critique, and ultimately design. It influenced subsequent speakers to emphasize agency, understanding, and creative engagement rather than mere usage.
Speaker: Tom Hall
How do we prepare AI for kids and imagination?… what if we optimize for childhood, then we’re going to optimize for potential.
This comment introduces a crucial paradigm shift by flipping the typical question. Instead of asking how to prepare children for AI, Menke asks how to prepare AI for children. The distinction between optimizing for engagement versus optimizing for childhood/potential is profound and challenges the tech industry’s typical metrics.
This reframing elevated the discussion to consider AI development from a child-centric perspective rather than a technology-centric one. It introduced the concept that the design philosophy behind AI systems fundamentally shapes childhood development, leading to deeper conversations about values and long-term impact.
Speaker: Richa Menke
There’s three key tensions that we think are really important to address when we think about kids and childhood… efficiency and imagination. If I can get an answer just like this, I don’t have to wait. I don’t have to struggle. I don’t have to develop my imagination… Personalization and identity. A child at seven is not the same as who they’re going to be at 17… assistance and agency.
This comment articulates fundamental developmental concerns that are often overlooked in AI discussions. It highlights how AI’s apparent benefits (efficiency, personalization, assistance) might actually undermine crucial aspects of child development (imagination, identity formation, agency).
These tensions became a framework for evaluating AI applications throughout the discussion. It added nuance to the conversation by showing that seemingly positive AI features could have negative developmental consequences, leading other speakers to address how their approaches navigate these tensions.
Speaker: Richa Menke
In the area of radiology, AI has helped the diagnosis of pancreatic cancer 438 days earlier than would have been normally expected… We are looking for that kind of accelerator in education. Something that’s going to bring efficiency and quality without widening inequality and as you’ve said that remains deeply human centered because we know that learning is an inherently social process.
This comment provides a compelling analogy that raises expectations for AI’s potential in education while acknowledging the unique challenges. The specific example of early cancer detection creates a powerful benchmark for what transformative AI impact could look like in education.
This comment shifted the discussion toward considering AI’s transformative potential in education while maintaining focus on equity and human-centeredness. It challenged the panelists to think about scalable solutions that could have dramatic positive impact without losing the social nature of learning.
Speaker: Saadhna Panday
But time and again we make the error that we underestimate the capacity of children. They’re not passive recipients of education. They have tremendous agency. They can consume tech, they can shape it, and no doubt they will lead it in time.
This comment challenges a fundamental assumption in many educational technology discussions – that children are merely recipients rather than active agents. It reframes children as capable partners in shaping technology rather than subjects to be protected from it.
This perspective influenced the subsequent discussion to focus more on empowerment and co-creation rather than protection and control. It supported the earlier themes about giving children agency and tools to understand and create rather than just consume AI.
Speaker: Saadhna Panday
I really challenge the audience as well around this need to want to put things into kids’ hands directly in any context… Well, let’s not rush for the fastest and the best model, but what’s actually right for the kids as well.
This comment provides a crucial counterbalance to the excitement about AI in education by advocating for restraint and age-appropriateness. It challenges the assumption that having access to the most advanced AI tools is necessarily beneficial for children.
This comment introduced a note of caution that tempered the discussion’s enthusiasm, leading to more nuanced conversations about implementation timelines, age-appropriateness, and the difference between what’s technologically possible and what’s developmentally appropriate.
Speaker: Atish Joshua Gonsalves
Overall Assessment

These key comments fundamentally shaped the discussion by establishing a child-centric, agency-focused framework for thinking about AI in education. Rather than a typical technology-first approach, the conversation was anchored in developmental psychology, educational philosophy, and children’s rights. The comments created a progression from challenging conventional AI education approaches, to reframing the relationship between AI and childhood development, to establishing practical tensions and considerations for implementation. This resulted in a sophisticated discussion that balanced technological potential with developmental appropriateness, emphasized empowerment over protection, and prioritized understanding over usage. The overall effect was to elevate the conversation beyond typical ed-tech discussions to a more nuanced exploration of how AI can serve children’s developmental needs while preparing them to be creators rather than just consumers of future technology.

Follow-up Questions
How can AI literacy concepts be effectively implemented in multilingual, multilevel, and resource-constrained classroom settings like rural Rajasthan?
This addresses the critical gap between theoretical AI literacy models and real-world implementation in diverse educational contexts where students have varying levels of foundational literacy and limited resources.
Speaker: Saadhna Panday
What are the long-term developmental consequences of children’s early exposure to AI systems, particularly regarding imagination and critical thinking skills?
This explores the tension between AI’s efficiency and the developmental need for children to struggle, wait, and develop their own imagination and problem-solving capabilities.
Speaker: Richa Menke
How can we develop evidence-based approaches to AI adoption in education before rushing to scale implementation?
This addresses the need for rigorous research and evidence collection to ensure AI tools are effective and safe before widespread deployment in educational settings.
Speaker: Saadhna Panday
What specific resources and training programs are needed to support teachers who are not computer science specialists in delivering AI literacy education?
This identifies the critical need for teacher preparation and support systems, especially for educators in under-resourced schools who need to teach AI concepts without specialized backgrounds.
Speaker: Atish Joshua Gonsalves and Asha Nanavati
How can parents develop home-based AI literacy curricula when schools are slow to adapt to AI education needs?
This addresses the gap between the rapid pace of AI development and the slower adaptation of formal education systems, requiring alternative approaches for children’s AI education.
Speaker: Nikhil Bawa
What are the implications of unregulated AI adoption in homes, particularly in competitive educational environments like India?
This explores concerning trends in AI adoption driven by competitive pressures rather than educational best practices, requiring research into safety and effectiveness.
Speaker: Nikhil Bawa
How can AI literacy education be localized and made available in languages that are meaningful to diverse global communities?
This addresses the need for culturally and linguistically appropriate AI education materials to ensure equitable access across different communities.
Speaker: Audience member (implied)
What is the optimal balance between structured and unstructured play when incorporating AI into children’s learning experiences?
This explores how to maintain the benefits of open-ended play while providing necessary guidance and safety measures in AI-enhanced learning environments.
Speaker: Nikhil Bawa
How can we ensure that AI systems are optimized for childhood development and potential rather than just engagement metrics?
This fundamental question challenges current AI development priorities and calls for child-centered design principles that support long-term developmental outcomes.
Speaker: Richa Menke

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Scaling Innovation Building a Robust AI Startup Ecosystem


Session at a glance: Summary, keypoints, and speakers overview

Summary

The transcript documents a Startup Felicitation Ceremony organized by the Software Technology Parks of India (STPI) to recognize and celebrate startups supported under the STPI ecosystem for their excellence across various categories including revenue, funding, employment, women participation, innovation, and AI-led impact. The ceremony was presided over by distinguished dignitaries including the Director General of STPI and Nirja Shekhar, Director General of the National Productivity Council, who presented certificates and trophies to the winning startups.


Multiple startups were recognized across different revenue categories and achievement areas. Phoenix Marine Exports and Solutions received recognition for highest revenue impact in Tier 2 and Tier 3 regions, while Vimeo Consulting was honored for highest funding raised. Swadha Agri was celebrated for employment generation, and Strangify Technologies was recognized for women employment. Sikwara Tech IT Solutions received multiple recognitions including highest employment, women employment, and AI-based impact. Other notable winners included Suhora Technologies for revenue performance, Puvation Technologies for funding, and various companies for innovation and beneficiary impact.


Several startup founders shared their inspiring journeys during the ceremony. Devika Chandrasekaran from Fuselage Innovations described how STPI’s early support through the Scout 2021 program provided crucial validation for their drone manufacturing company, which now serves over 10,000 farmers and recently received the National Startup Award. Dr. Soumya from TectoCell highlighted their AI-powered diagnostic solutions at the intersection of radiology and DNA sequencing, emphasizing STPI’s role in helping them achieve clinical accuracy and navigate regulatory compliance. The founders of EZO5 Solutions shared their remarkable turnaround story, describing how STPI’s intervention helped them during a critical cash flow crisis, leading to processing one million scans and eventually gaining recognition from Prime Minister Modi and Bill Gates. The ceremony concluded with memento presentations to dignitaries and a vote of thanks, celebrating the collaborative ecosystem that enables Indian startups to scale globally and demonstrate that innovation from India is both scalable and internationally relevant.


Keypoints

Major Discussion Points:


Startup Recognition and Awards Ceremony: The main focus was felicitating startups supported under the STPI ecosystem across multiple categories including highest revenue, funding raised, employment generation, women participation, AI-based impact, and innovation excellence.


Diverse Startup Success Stories: Featured startups from various sectors including drone technology (Fuselage Innovations), AI-powered healthcare diagnostics (TectoCell), cybersecurity solutions (SecurTech), food robotics (CaneBot), and precision oncology treatment planning (EZO5 Solutions).


STPI’s Role as an Ecosystem Enabler: Multiple founders emphasized how STPI provided crucial early-stage support, validation, funding assistance, mentorship, investor connections, and regulatory guidance that helped them scale from prototypes to successful businesses.


Innovation Impact and Scale: Startups demonstrated significant real-world impact – from serving 10,000+ farmers with drone technology to processing millions of medical scans, flagging thousands of TB cases, and even gaining recognition from Prime Minister Modi and Bill Gates.


Collaborative Ecosystem Building: The event highlighted partnerships between STPI, National Productivity Council, and other stakeholders in fostering a supportive startup environment that enables Indian innovations to scale globally.


Overall Purpose:


The discussion served as a formal recognition ceremony to celebrate and showcase successful startups nurtured within the STPI ecosystem, while demonstrating the tangible impact of government support in fostering innovation and entrepreneurship across India’s tier 2 and tier 3 regions.


Overall Tone:


The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with the awards ceremony, became more personal and engaging during founder testimonials where entrepreneurs shared their struggles and successes, and concluded on a warm, collaborative note with group photographs and networking. The atmosphere remained positive and encouraging, emphasizing achievement, gratitude, and forward momentum.


Speakers

Moderator: Role – Event moderator for the Startup Felicitation Ceremony


Devika Chandrasekaran: Role – Co-founder of Fuselage Innovations; Area of expertise – Drone technology for agriculture, defense, and disaster management applications


Dr. Saumya Shukla: Role – Representative of TectoCell; Area of expertise – AI-powered diagnostic solutions at the intersection of radiology and DNA sequencing


Noor Fatima: Role – Co-founder of EZO5 Solutions; Area of expertise – AI-powered platform for precision treatment planning in oncology


Arita Dalan: Role – Representative of SecurTech IT Solutions Private Limited; Area of expertise – Cybersecurity solutions for enterprises


Kirty Datar: Role – Representative of CaneBot Solutions Private Limited; Area of expertise – AI-powered food robotics platform


Meenal Gupta: Role – Founder of EZO5 Solutions; Area of expertise – AI-powered medical imaging and oncology treatment planning


Shri Praveen Kumar: Title – Joint Director, STPI; Role – Presented vote of thanks


Milind Datar: Role – Representative of CaneBot Solutions Private Limited; Area of expertise – AI-powered food robotics platform for fresh beverage preparation


Additional speakers:


Nirja Shekhar: Title – Director General, National Productivity Council (NPC); Role – Dignitary presenting awards


Shri Ashok Gupta: Title – Director STPI Gurugram; Role – Dignitary presenting mementos


Shri Atul Kumar Singh: Title – Additional Director, STPI; Role – Dignitary presenting mementos


Shri Rakesh Dubey: Title – Director, Startups and Innovation, STPI; Role – Dignitary and supporter of the event


Geetika Dayal: Role – Representative from an organization (specific title not clearly mentioned); Area of expertise – Partnership and startup ecosystem building


Bala MS: Role – Industry representative; Area of expertise – GCC (Global Capability Centers) and industry perspective for startups


Full session report: Comprehensive analysis and detailed insights

The transcript documents a Startup Felicitation Ceremony organized by the Software Technology Parks of India (STPI) to recognize and celebrate the achievements of startups within the STPI ecosystem. The ceremony acknowledged startup excellence across multiple categories and showcased the impact of government-supported innovation programs, featuring startups from both metropolitan centers and tier 2 and tier 3 cities.


Award Categories and Recognition Framework


The ceremony featured a structured recognition system acknowledging startup excellence across six key dimensions: revenue generation, funding acquisition, employment creation, women workforce participation, innovation excellence, and artificial intelligence-led impact. Awards were categorized by revenue levels, with distinctions for companies generating up to 25 crores and up to 50 crores.


Distinguished dignitaries presided over the ceremony, including the Director General of STPI and Nirja Shekhar, Director General of the National Productivity Council. Fourteen startups received recognition across various categories, with some companies achieving multiple distinctions.


Award Recipients and Categories


The recognized startups included:


– Phoenix Marine Exports and Solutions Private Limited: highest revenue impact in tier 2 and tier 3 regions


– Vimeo Consulting Private Limited: exceptional funding acquisition


– Swadha Agri Private Limited: employment generation


– Strangify Technologies Private Limited: women workforce participation


– Sikwara Tech IT Solutions Private Limited: multiple recognitions including highest employment generation, women employment, and AI-based impact


– Additional startups were recognized across the remaining categories, representing diverse sectors and technological approaches


Startup Presentations and Achievements


Fuselage Innovations was represented by co-founder Devika Chandrasekaran, who described their drone manufacturing for agriculture, defense, and disaster management applications. The company serves more than 10,000 farmers across India and recently received the National Startup Award with an opportunity to present before Prime Minister Narendra Modi. Chandrasekaran emphasized that STPI’s Scout 2021 program provided crucial support, noting “the support we received through the program was not just funding, it’s validation.”


TectoCell was presented by Dr. Saumya Shukla, who outlined their AI-powered diagnostic solutions combining radiology and DNA sequencing. The company addresses challenges including drug resistance and clinical trial optimization. Dr. Shukla highlighted STPI’s role in facilitating regulatory compliance, global collaborations, and data acquisition for their healthcare technology development.


SecurTech IT Solutions was represented by Arita Dalan, who discussed their cybersecurity solutions for organizations across banking, pharmaceutical, and digital sectors. Their approach focuses on simplifying security frameworks for organizations of all sizes.


CaneBot Solutions Private Limited was presented by founders Kirty Datar and Milind Datar, who developed the world’s first fresh sugarcane juice robotic vending machine using AI, robotics, and IoT technologies. The platform addresses hygiene concerns while creating direct market linkages for farmers and has expanded beyond sugarcane juice to other fresh beverages.


EZO5 Solutions was represented by co-founders Noor Fatima and Meenal Gupta, who described their Imagix AI platform for precision oncology treatment planning. Their achievements include processing around one million scans, analyzing 50,000 chest X-rays, flagging 4,000 cases of TB, identifying six cases of lung cancer, and creating 1,000 radiotherapy plans. The founders candidly shared their journey from near-bankruptcy to global recognition, with Fatima stating that STPI “came to our rescue” when they had only two months of cash flow remaining. Their work has gained interest from both Prime Minister Modi and Bill Gates.


STPI’s Support Role


Based on founder testimonials, STPI’s support extends beyond funding to include validation, strategic guidance, regulatory assistance, and crisis intervention. Founders consistently praised STPI’s comprehensive approach, with multiple speakers emphasizing how STPI’s support proved crucial during critical development phases. The testimonials reveal STPI’s role in facilitating international partnerships, providing mentorship, and offering bridge support during funding gaps.


Global Recognition and Impact


Several STPI-supported startups have achieved national and international recognition. Fuselage Innovations received the National Startup Award and presented to Prime Minister Modi. EZO5 Solutions progressed from local operations to global recognition, with founders describing their journey as “going from local to global serving the whole world” and gaining interest from Microsoft and Bill Gates.


Ceremony Conclusion and Acknowledgments


The ceremony concluded with memento presentations to the recognized startups by STPI officials. Shri Praveen Kumar delivered the vote of thanks, acknowledging various stakeholders including industry partners, government officials, and the startup community. He emphasized the collaborative nature of ecosystem building and the importance of continued support for innovation.


The event ended with networking opportunities and group photography sessions, providing informal interaction opportunities for startup founders, STPI officials, and other ecosystem participants.


Significance and Impact


The ceremony demonstrated the diversity and sophistication of India’s startup landscape, with recognized companies spanning agricultural technology, healthcare diagnostics, cybersecurity, food robotics, and advanced manufacturing. The geographic diversity of participants, including specific recognition for tier 2 and tier 3 region achievements, highlighted STPI’s commitment to democratizing innovation opportunities across India.


The unanimous positive feedback from startup founders regarding STPI’s support, combined with the tangible achievements showcased, illustrates the effectiveness of comprehensive ecosystem support in nurturing early-stage ventures through critical development phases. The ceremony served as both celebration and validation of the collaborative approach to startup development, demonstrating how institutional support can enable startups to achieve both local impact and global recognition.


Session transcript: Complete transcript of the session
Moderator

May all dignitaries be seated. So we now come to one of the most awaited segments, the Startup Felicitation Ceremony. Today, we recognize startups supported under the STPI ecosystem for excellence across revenue, funding, employment, women participation, innovation, and AI-led impact. I would like to request our honored dignitaries, DG sir, STPI, and Nirja Shekhar ma'am, Director General, National Productivity Council, to kindly come forward to present the certificate and trophy to our startups. I request these startups to kindly come on the stage as per the name announced. So the first name is, may I invite Phoenix Marine Exports and Solutions Private Limited to come on the stage. They are being recognized under the categories of highest revenue (up to 25 CR revenue) and highest impact based on revenue, Tier 2 and Tier 3 region.

May I request DG STPI and DG NPC to please present the certificate and trophy. Once again, a big round of applause for their outstanding contribution. Now, may I invite Vimeo Consulting Private Limited to please come on the stage. They are being recognized for highest funding raised, up to 25 CR revenue category. Heartiest congratulations on your fundraising success. A big round of applause. A louder round of applause please. Now may I invite Swadha Agri Private Limited to the stage. They are being felicitated for highest employment generation up to 25 CR revenue category. Congratulations for generating valuable employment. A big round of applause. Thank you. Now may I invite Strangify Technologies Private Limited to please come on the stage.

They are being recognized for highest number of women employment, up to 25 CR revenue category. Well done for empowering women in the workforce. A big round of applause. A louder round of applause for women participation. Now our next startup is Suhora Technologies Private Limited. May I invite Suhora Technologies Private Limited to this stage. They are being recognized for highest revenue, up to 50 CR revenue category. Congratulations on your outstanding business performance. A big round of applause. Now I invite Puvation Technologies Solutions Private Limited. They are being felicitated for highest funding raised, up to 50 CR revenue category. Applause for your impressive funding milestone. A big round of applause. Now I invite our next startup, Sikwara Tech IT Solutions Private Limited, to come on the stage. They are being recognized under multiple categories: highest employment up to 50 CR revenue category, highest women employment up to 50 CR revenue category, and highest AI-based impact based on revenue. A special recognition for excellence across multiple dimensions. A big round of applause. Now I invite our next startup, Atmik Bharat Industries Private Limited, to the stage. They are being recognized for highest impact based on beneficiaries. Congratulations for touching countless lives.

A big round of applause. May I invite Mobile Pay E-Commerce Private Limited. They are being felicitated for highest impact based on beneficiaries, as a second position. Well done for your meaningful outreach. A big round of applause. Thank you. Now I invite another startup, Devnagri AI Private Limited, to please come on the stage. They are being recognized for highest AI-based impact based on revenue, as a second position. Congratulations on leveraging AI for impact. A big round of applause. Thank you. Thank you so much, DG sir, for attending us. Now I invite our next startup, Dactrocell Healthcare and Research Private Limited. They are being recognized for most innovative startup. Applause for breakthrough healthcare innovation.

A big round of applause. Now I invite our next startup, EZO5 Solutions Private Limited. Please come on the stage. They are being felicitated as most promising innovation. Please, please. A big round of applause. Thank you. Thank you. Connector Foods Private Limited. Please come on the stage for a beautiful couple. They are being recognized as most innovative startup as a second position. Well done for creative excellence. A big round of applause. Finally, our last startup. May I invite Pew’s Ledge Innovations Private Limited. They are being recognized as most promising innovation, second position. Congratulations on your forward-looking journey. A big round of applause. A big round of applause for all our felicitated startups. Your innovation, resilience and contribution to India’s digital economy truly inspire us all.

May I request our dignitaries to kindly resume their seats on the dais. We will now invite our selected startups to briefly share their journey with us. So may I invite Fuselage Innovations, Private Limited, to kindly come on the stage and share your

Devika Chandrasekaran

Hi everyone, my name is Devika Chandrasekaran. I’m the co-founder of Fuselage Innovations. It’s truly an honor to stand on stage today being felicitated by STPI. This moment feels very special because we started our journey with STPI in our early days. Back in 2021, we participated in a program called Scout 2021. At that time, we were building our prototype. The support we received through the program was not just funding, it’s validation. That recognition gave us the confidence to push forward. We are proud to be a part of this. Today, Fuselage Innovations manufactures drones for agriculture, defence, and disaster management applications. We are working with more than 10,000 farmers across India, helping them to improve productivity and efficiency through drone technology. We are also contributing to defence, disaster management, and maritime operations, serving critical national needs. Last month, we were deeply honoured to receive the National Startup Award, and we got the opportunity to present our journey in front of our Honourable Prime Minister, Narendra Modi sir. I would like to sincerely thank STPI and everyone involved in the journey for believing in a startup like us. The ecosystem, the encouragement, and the early trust make a huge difference in our journey. Thank you so much.

Moderator

Thank you for sharing this inspiring story. Now may I invite Dr. Rosals to kindly come on the stage and share your startup journey with us.

Dr. Saumya Shukla

Good evening, everyone. My name is Dr. Soumya, and I’m really glad to be a part of this prolific platform today. Just very quickly, I’d like to walk you through what we build. So at TectoCell, we build AI-powered diagnostic solutions at the intersection of radiology and DNA sequencing, while addressing the huge havoc of drug resistance and robust clinical trials panning across India, facilitated by the Software Technology Parks of India. We’ve been able to sort of exceptionally benchmark our clinical accuracy, which sort of amplifies the reliability of our products. And the continued commitment of Software Technology Parks of India to sort of help us navigate through our regulatory compliances, get global collaborations, and also sort of get data acquisition, which is sort of machine-readable, is extremely noteworthy.

And this unique foundation sort of puts us in a very good position, in a very strengthful position to now sort of scale this globally, building from India for the world. So I’m very grateful for this. Thank you.

Moderator

Big round. Big round of applause. Thank you so much for sharing your story and journey with us. Now I invite Sequera Tech IT Solutions Private Limited to come on the stage and share your startup journey with us.

Arita Dalan

Hi everyone. They are one of the nurturing bodies which has done a lot of collaboration with the industries as well. They are one of the bodies which has given us an opportunity to talk to the investors as well. And there are various industry connects as well that have been established by the organization. And we are very sincerely thankful to the entire organization and the team of STPI as well. Just to give you a brief about SecurTech. SecurTech is a cyber security organization. Our mantra is to simplify security. We are securing the large enterprise organizations and mid-size organizations across the industry, whether it is pharma, banking and finance organizations, or even the small organizations which are currently establishing the digital landscape in the country, while they are being regulated by the likes of RBI and SEBI.

So, in nutshell, we are providing them all the frameworks, security parameters, and the solutions as well, so that they can be powered, they can be enabled, and they can secure their own infrastructure platforms and the data that they are processing for the countries or for the users that they are providing services. So, whether it is a startup organization or even a large infrastructure organization, we are securing. We are providing them end -to -end. Thank you. Thanks, everyone. Thank you.

Moderator

Now, I invite… Caneboard Solutions Private Limited to come on the stage and share your journey with us.

Kirty Datar

Good afternoon everyone and thank you so much STPI for this honor here today.

Milind Datar

What we have built is an AI-powered food robotics platform that prepares and serves fresh and hygienic beverages completely autonomously, without any human intervention. As our first application, we have built the world’s first fresh sugarcane juice robotic vending machine, which replaces the unhygienic and unsafe beverages being sold on the roadsides, wherein customers otherwise finally choose packaged drinks over fresh and hygienic beverages.

Kirty Datar

Our technology integrates robotics, IoT-embedded systems, and predictive AI to deliver farm-to-consumer juice in under 30 seconds in a fully autonomous manner. Beyond consumers, this creates direct market linkages for sugarcane farmers, ensures fair pricing, and also supports the circular economy. We are now extending our platform beyond sugarcane juice to other fresh juices and smoothies, and positioning CaneBot as a productive platform company in food robotics. STPI has played a very meaningful role in our journey. The mentorship and peer network helped us think beyond the product to scalability, governance, and global readiness. Platforms like Tycon Exposure and, you know, through STPI gave us direct access to global investors and ecosystem partners, helping us sharpen our positioning as a deep tech company.

Most importantly, STPI’s recognition has strengthened our credibility with customers, investors, and government stakeholders. STPI is a great place to start your journey. We are very happy and very honored to be here today. And we thank you so much to STPI and everybody who is present here today. Thank you so very much.

Moderator

May I invite now EZO5 to kindly come on the stage and share your startup journey with us.

Noor Fatima

Hi, everyone. Good afternoon. I’m Noor Fatima, co-founder of EZO5 Solutions.

Meenal Gupta

Hi, I’m Meenal Gupta, founder of EZO5 Solutions.

Noor Fatima

At EZO5, we have built an AI-powered platform, Imagix AI, that does precision treatment planning for oncology cases. In the startup journey, there was a time, one and a half years back, when we had just two months of cash flow with us. We were thinking a lot about what to do, and that is when STPI came to our rescue. It helped us raise money, and there has been no looking back since then. So in the past three years since we were incorporated, we have processed around one million scans. In the last three months, we have scanned around 50,000 chest X-rays, where we have flagged around 4,000 cases of TB, cutting transmission short.

We have flagged six cases of lung cancer where intervention was still possible. We have prepared 1,000 radiotherapy plans in the last three months, and we have cut the time from treatment planning to treatment start from around one month to a week. So that is the impact we are making through the support of the whole ecosystem and STPI.

Meenal Gupta

And proudly I say that, with the impact we have brought, even our Prime Minister Mr. Narendra Modi was interested, and he invited us to discuss our solution at IMC. And just the day before yesterday, we went global, because Bill Gates showed interest in our solution and invited us to Microsoft to show our solution, and he was discussing how he can help us. Thank you. So now we are going from local to global, serving the whole world. Thank you.

Moderator

Thank you to all the founders for sharing such inspiring stories. We now proceed with the presentation of mementos to our esteemed dignitaries. To begin with, may I request Shri Ashok Gupta sir, Director, STPI Gurugram, to kindly come on the stage. Sir will present the memento to Nirja Shekhar ma'am, Director General, NPC. A big round of applause. Thank you so much, sir, and thank you so much, ma'am. Next, may I request Shri Atul Kumar Singh sir, Additional Director, STPI, to kindly come on the stage and present the memento to Shri Bala MS. A big round of applause. Thank you. May I now request Shri Praveen Kumar sir, Joint Director, STPI, to kindly come on the stage and present the memento to Geetika Dayal ma'am.

A big round of applause. May I also request Shri Praveen Kumar sir, Joint Director, STPI, to kindly present the memento to Shri Rakesh Dubey sir, Director, Startups and Innovation, STPI. Thank you, sir. A big round of applause. Thank you. Now I would like to request Shri Praveen Kumar sir, Joint Director, STPI, to present the formal

Shri Praveen Kumar

vote of thanks. Respected dignitaries, speakers, startup founders, innovators, and ladies and gentlemen, on behalf of Software Technology Parks of India, it is our true privilege to thank each one of you for making this session focused, meaningful, and definitely forward looking. Nirja Shekhar ma'am, thank you for your thoughtful reflections on productivity and growth. Your perspective adds depth and direction to our collective mission, ma'am. We are truly encouraged to have your presence. Thank you so much; we are grateful for it. Shri Rakesh Dubey sir, thank you for your profound support, which has been both guiding and grounding, sir. Your constant encouragement and hands-on involvement in shaping the entire session together has helped us immensely, sir.

My sincere appreciation to Geetika Dayal madam, from TiE Delhi-NCR, for your continued partnership and for reinforcing the importance of collaborative startup ecosystem building, madam. Thank you. Thank you, Mr. Bala, for bringing a sharp industry lens and the pragmatic approach that startups need and can directly relate to as they scale. Your thoughts on GCCs are definitely going to help them all. To all the startups who were felicitated today, congratulations. Your achievement demonstrates that innovation from India, including tier 2 and tier 3 regions, is both scalable and globally relevant. To all the founders who shared their journey, thank you for your candor and inspiration. Your stories remind us why platforms like STPI matter. And before I conclude, I sincerely appreciate my organizing team and every colleague who worked diligently behind the scenes to ensure the session came together seamlessly.

With that, I once again thank all of you and the dignitaries, and I request the dignitaries and startups to come forward for a group photograph. Thank you. Thank you again. I request all the felicitated startups to kindly come on the stage and have a group photograph with all the dignitaries on the dais. Thank you.

Moderator

The other directors as well, please come on stage and join us for the group photographs. Yes, Kavita ma'am, please come on the stage. I also request Kishori ma'am to please join us for the group photograph. Thank you. The floor is open. Anyone can take a group photograph with anyone. Please come and join. Thank you. The trophy cases are kept on the right side of the podium. I repeat, please collect the trophy cases. Thank you.


Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Moderator
2 arguments · 47 words per minute · 1160 words · 1463 seconds
Argument 1
Recognition of startups across multiple categories including revenue, funding, employment, women participation, innovation, and AI-led impact
EXPLANATION
The moderator announced the STPI startup felicitation ceremony recognizing excellence across various dimensions of startup performance. This comprehensive recognition system acknowledges different aspects of startup success beyond just financial metrics.
EVIDENCE
Categories mentioned include highest revenue (up to 25 CR and 50 CR), highest funding raised, highest employment generation, highest women employment, most innovative startup, most promising innovation, and highest AI-based impact
MAJOR DISCUSSION POINT
STPI’s comprehensive approach to recognizing startup excellence across multiple dimensions
Argument 2
Felicitation of 14 startups under various achievement categories with certificates and trophies
EXPLANATION
The ceremony formally recognized 14 specific startups with awards and certificates for their achievements in different categories. This public recognition serves to validate and encourage startup innovation within the STPI ecosystem.
EVIDENCE
Startups felicitated include Phoenix Marine Exports, Vimeo Consulting, Swadha Agri, Strangify Technologies, Suhora Technologies, Puvation Technologies, Sikwara Tech IT Solutions, Atmik Bharat Industries, Mobile Pay E-Commerce, Devnagri AI, Dactrocell Healthcare, EZO5 Solutions, Connector Foods, and Pew’s Ledge Innovations
MAJOR DISCUSSION POINT
Public recognition and validation of startup achievements within STPI ecosystem
Devika Chandrasekaran
2 arguments · 125 words per minute · 212 words · 101 seconds
Argument 1
Fuselage Innovations manufactures drones for agriculture, defense, and disaster management, serving 10,000+ farmers and receiving National Startup Award
EXPLANATION
Devika highlighted how their company has scaled from prototype development to serving over 10,000 farmers across India with drone technology. The company has achieved significant recognition including the National Startup Award and an opportunity to present to Prime Minister Modi.
EVIDENCE
Started with STPI’s Scout 2021 program, now serves 10,000+ farmers, received National Startup Award, presented to PM Modi, works in agriculture, defense, disaster management and maritime operations
MAJOR DISCUSSION POINT
Successful scaling of drone technology for agricultural and defense applications
AGREED WITH
Milind Datar
Argument 2
STPI provided early validation and support through Scout 2021 program that gave startups confidence to push forward
EXPLANATION
Devika emphasized how STPI’s early support was crucial not just for funding but for validation during their prototype stage. This early recognition and trust from STPI gave them the confidence needed to continue developing their technology.
EVIDENCE
Participated in Scout 2021 program when building prototype, received funding and validation that gave confidence to push forward
MAJOR DISCUSSION POINT
Importance of early-stage validation and support for startup success
AGREED WITH
Dr. Saumya Shukla, Arita Dalan, Kirty Datar, Noor Fatima
D
Dr. Saumya Shukla
2 arguments, 125 words per minute, 172 words, 82 seconds
Argument 1
TectoCell builds AI-powered diagnostic solutions combining radiology and DNA sequencing to address drug resistance and clinical trials
EXPLANATION
Dr. Saumya explained how TectoCell operates at the intersection of radiology and DNA sequencing to create AI-powered diagnostic solutions. Their focus is on addressing critical healthcare challenges including drug resistance and improving clinical trial processes across India.
EVIDENCE
AI-powered diagnostic solutions at intersection of radiology and DNA sequencing, addressing drug resistance and clinical trials across India, exceptional clinical accuracy benchmarking
MAJOR DISCUSSION POINT
AI-powered healthcare diagnostics addressing drug resistance
AGREED WITH
Kirty Datar, Noor Fatima
Argument 2
STPI facilitated regulatory compliance, global collaborations, and data acquisition for startups to scale globally
EXPLANATION
Dr. Saumya highlighted STPI’s comprehensive support beyond funding, including help with regulatory frameworks, international partnerships, and accessing machine-readable data. This support positions startups to scale from India to global markets.
EVIDENCE
STPI helped navigate regulatory compliances, get global collaborations, and data acquisition that is machine-readable, positioning for global scaling
MAJOR DISCUSSION POINT
STPI’s role in facilitating global scaling through regulatory and collaboration support
AGREED WITH
Devika Chandrasekaran, Arita Dalan, Kirty Datar, Noor Fatima
A
Arita Dalan
2 arguments, 114 words per minute, 227 words, 119 seconds
Argument 1
SecurTech provides cybersecurity solutions and frameworks to enterprises across banking, pharma, and emerging digital organizations
EXPLANATION
Arita described SecurTech as a cybersecurity organization with the mantra to ‘simplify security’ for various types of organizations. They provide comprehensive security frameworks and solutions to help organizations comply with regulations and secure their digital infrastructure.
EVIDENCE
Provides security for large enterprises, mid-size organizations across pharma, banking, finance, and small organizations establishing digital presence, helps with RBI and SAV regulations, provides end-to-end security frameworks and solutions
MAJOR DISCUSSION POINT
Comprehensive cybersecurity solutions for diverse organizational needs
Argument 2
STPI enabled industry connections, investor access, and collaborative opportunities for startup growth
EXPLANATION
Arita emphasized STPI’s role as a nurturing body that facilitates important connections between startups and the broader ecosystem. This includes connecting startups with investors and establishing industry partnerships that are crucial for growth.
EVIDENCE
STPI provided opportunities to talk to investors, established various industry connections, acted as nurturing body with collaboration in industries
MAJOR DISCUSSION POINT
STPI’s ecosystem building through connections and partnerships
AGREED WITH
Devika Chandrasekaran, Dr. Saumya Shukla, Kirty Datar, Noor Fatima
K
Kirty Datar
2 arguments, 1045 words per minute, 193 words, 11 seconds
Argument 1
CaneBot developed world’s first autonomous sugarcane juice robotic vending machine using AI, robotics, and IoT for farm-to-consumer delivery
EXPLANATION
Kirty described their innovative AI-powered food robotics platform that creates fresh, hygienic beverages autonomously. The technology integrates multiple advanced technologies to deliver farm-to-consumer juice in under 30 seconds without human intervention.
EVIDENCE
World’s first fresh sugarcane juice robotic vending machine, integrates robotics, IoT-embedded systems, and predictive AI, delivers farm-to-consumer juice in under 30 seconds autonomously
MAJOR DISCUSSION POINT
Innovation in food robotics combining AI, IoT, and automation
AGREED WITH
Dr. Saumya Shukla, Noor Fatima
Argument 2
STPI’s mentorship and peer network helped startups think beyond products to scalability and governance
EXPLANATION
Kirty highlighted how STPI’s support went beyond technical aspects to help startups develop strategic thinking about scaling and governance. The mentorship and networking opportunities provided broader perspective on building sustainable businesses.
EVIDENCE
STPI mentorship and peer network helped think beyond product to scalability, governance, and global readiness, platforms like Tycon gave access to global investors and ecosystem partners
MAJOR DISCUSSION POINT
Strategic mentorship for startup scalability and governance
AGREED WITH
Devika Chandrasekaran, Dr. Saumya Shukla, Arita Dalan, Noor Fatima
M
Milind Datar
1 argument, 37 words per minute, 72 words, 114 seconds
Argument 1
CaneBot’s AI-powered food robotics platform creates direct market linkages for farmers and supports circular economy
EXPLANATION
Milind emphasized the broader impact of their technology beyond just providing beverages, highlighting how it creates direct market access for sugarcane farmers and supports sustainable economic practices. The platform is expanding beyond sugarcane to other fresh juices and smoothies.
EVIDENCE
Creates direct market linkages for sugarcane farmers, ensures fair pricing, supports circular economy, extending platform beyond sugarcane juice to other fresh juices and smoothies
MAJOR DISCUSSION POINT
Technology platform supporting farmer livelihoods and circular economy
AGREED WITH
Devika Chandrasekaran
N
Noor Fatima
2 arguments, 159 words per minute, 205 words, 77 seconds
Argument 1
EZO5’s Imagix AI platform processes medical scans for precision oncology treatment, having processed 1 million scans and flagged critical cases
EXPLANATION
Noor detailed the significant impact of their AI platform in healthcare, particularly in oncology and TB detection. The platform has processed massive volumes of medical scans and has successfully identified critical cases where early intervention was possible.
EVIDENCE
Processed 1 million scans over 3 years, in last 3 months scanned 50,000 chest X-rays, flagged 4,000 TB cases, identified 6 lung cancer cases where intervention was possible, prepared 1,000 radiotherapy plans, reduced treatment planning time from one month to one week
MAJOR DISCUSSION POINT
AI-powered medical diagnostics with significant healthcare impact
AGREED WITH
Dr. Saumya Shukla, Kirty Datar
Argument 2
STPI came to rescue during critical funding periods and helped startups raise money when needed
EXPLANATION
Noor shared a critical moment in their startup journey when they had only two months of cash flow remaining. STPI’s intervention at this crucial time helped them secure funding and continue their operations, leading to their subsequent success.
EVIDENCE
Had only two months of cash flow one and a half years back, STPI helped raise money during critical period, no looking back since then
MAJOR DISCUSSION POINT
STPI’s critical role in startup survival during funding crises
AGREED WITH
Devika Chandrasekaran, Dr. Saumya Shukla, Arita Dalan, Kirty Datar
M
Meenal Gupta
1 argument, 152 words per minute, 92 words, 36 seconds
Argument 1
EZO5 achieved global recognition with interest from Prime Minister Modi and Bill Gates, expanding from local to global impact
EXPLANATION
Meenal highlighted the remarkable journey of their startup from local operations to global recognition. The interest shown by both Prime Minister Modi and Bill Gates demonstrates the international potential and impact of their AI healthcare solution.
EVIDENCE
Prime Minister Modi invited them to discuss their solution at IMC; Bill Gates showed interest and invited them to Microsoft to discuss how he could help; the company is going from local to global, serving the whole world
MAJOR DISCUSSION POINT
Global recognition and scaling of Indian AI healthcare innovation
AGREED WITH
Dr. Saumya Shukla, Noor Fatima
S
Shri Praveen Kumar
1 argument, 87 words per minute, 328 words, 225 seconds
Argument 1
Gratitude expressed to dignitaries, speakers, and organizing team for making the session meaningful and forward-looking
EXPLANATION
Shri Praveen Kumar delivered the formal vote of thanks, acknowledging the contributions of various dignitaries, speakers, and the organizing team. He emphasized how the collective efforts made the session both meaningful and forward-looking for the startup ecosystem.
EVIDENCE
Thanked Nirja Sekhar for reflections on productivity and growth, Rakesh Dubey for profound support and guidance, Geetika Dayal for partnership in ecosystem building, Mr. Bala for industry lens and GCC insights, organizing team for seamless execution
MAJOR DISCUSSION POINT
Acknowledgment of collaborative efforts in building startup ecosystem
Agreements
Agreement Points
STPI’s critical role in providing comprehensive startup support beyond just funding
Speakers: Devika Chandrasekaran, Dr. Saumya Shukla, Arita Dalan, Kirty Datar, Noor Fatima
STPI provided early validation and support through the Scout 2021 program that gave startups confidence to push forward
STPI facilitated regulatory compliance, global collaborations, and data acquisition for startups to scale globally
STPI enabled industry connections, investor access, and collaborative opportunities for startup growth
STPI's mentorship and peer network helped startups think beyond products to scalability and governance
STPI came to the rescue during critical funding periods and helped startups raise money when needed
All startup founders unanimously praised STPI’s multifaceted support including validation, regulatory guidance, networking, mentorship, and critical funding assistance that went far beyond traditional incubation
AI-powered solutions addressing critical healthcare challenges
Speakers: Dr. Saumya Shukla, Noor Fatima, Meenal Gupta
TectoCell builds AI-powered diagnostic solutions combining radiology and DNA sequencing to address drug resistance and clinical trials
EZO5's Imagix AI platform processes medical scans for precision oncology treatment, having processed 1 million scans and flagged critical cases
EZO5 achieved global recognition with interest from Prime Minister Modi and Bill Gates, expanding from local to global impact
Multiple healthcare startups demonstrated the transformative potential of AI in medical diagnostics, from drug resistance to cancer detection, with significant scale and global recognition
Technology solutions creating direct impact on farmers and agricultural productivity
Speakers: Devika Chandrasekaran, Milind Datar
Fuselage Innovations manufactures drones for agriculture, defense, and disaster management, serving 10,000+ farmers and receiving the National Startup Award
CaneBot's AI-powered food robotics platform creates direct market linkages for farmers and supports a circular economy
Both startups emphasized how their technology platforms directly benefit farmers – through drone technology improving agricultural productivity and robotic platforms creating market linkages
Integration of multiple advanced technologies (AI, IoT, robotics) for comprehensive solutions
Speakers: Dr. Saumya Shukla, Kirty Datar, Noor Fatima
TectoCell builds AI-powered diagnostic solutions combining radiology and DNA sequencing to address drug resistance and clinical trials
CaneBot developed the world's first autonomous sugarcane juice robotic vending machine using AI, robotics, and IoT for farm-to-consumer delivery
EZO5's Imagix AI platform processes medical scans for precision oncology treatment, having processed 1 million scans and flagged critical cases
Startups demonstrated sophisticated integration of multiple cutting-edge technologies including AI, IoT, robotics, and advanced diagnostics to create comprehensive solutions
Similar Viewpoints
Both founders emphasized STPI’s intervention at critical moments in their startup journey – early validation for Fuselage and rescue funding for EZO5 – highlighting STPI’s role in startup survival and growth
Speakers: Devika Chandrasekaran, Noor Fatima
STPI provided early validation and support through the Scout 2021 program that gave startups confidence to push forward
STPI came to the rescue during critical funding periods and helped startups raise money when needed
Both emphasized STPI’s role in helping startups think strategically about scaling, governance, and global readiness rather than just product development
Speakers: Dr. Saumya Shukla, Kirty Datar
STPI facilitated regulatory compliance, global collaborations, and data acquisition for startups to scale globally
STPI's mentorship and peer network helped startups think beyond products to scalability and governance
Both co-founders of EZO5 highlighted their AI platform’s significant healthcare impact and remarkable journey from local operations to global recognition by world leaders
Speakers: Noor Fatima, Meenal Gupta
EZO5's Imagix AI platform processes medical scans for precision oncology treatment, having processed 1 million scans and flagged critical cases
EZO5 achieved global recognition with interest from Prime Minister Modi and Bill Gates, expanding from local to global impact
Unexpected Consensus
Comprehensive cybersecurity needs across all organizational sizes and sectors
Speakers: Arita Dalan
SecurTech provides cybersecurity solutions and frameworks to enterprises across banking, pharma, and emerging digital organizations
While other startups focused on AI and innovation, the cybersecurity perspective highlighted the universal need for security frameworks across all digital transformation efforts, representing an unexpected but critical consensus on digital security as foundational infrastructure
Circular economy and sustainability integration in technology solutions
Speakers: Milind Datar
CaneBot’s AI-powered food robotics platform creates direct market linkages for farmers and supports circular economy
The emphasis on circular economy principles in a technology-focused startup ecosystem represents unexpected consensus on sustainability being integral to digital innovation rather than separate from it
Overall Assessment

Strong consensus emerged around STPI’s comprehensive ecosystem support, AI-powered healthcare solutions, agricultural technology impact, and multi-technology integration approaches

Very high level of consensus with unanimous praise for STPI’s multifaceted support and shared recognition of technology’s transformative potential across healthcare, agriculture, and security sectors. The consensus suggests a mature startup ecosystem where founders understand the importance of comprehensive support systems and integrated technology solutions for scaling impact.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

No disagreements identified in this transcript. This was a startup felicitation ceremony where all speakers shared positive experiences and success stories with STPI support. All speakers expressed gratitude and highlighted different aspects of STPI’s enabling role in their startup journeys.

Zero disagreement level. This was a ceremonial event focused on recognition and celebration rather than debate or discussion of opposing viewpoints. All speakers were aligned in praising STPI’s support and sharing their success stories. The implications are positive for STPI’s ecosystem building efforts, as the unanimous positive feedback demonstrates the effectiveness of their startup support programs across diverse sectors including healthcare AI, cybersecurity, food robotics, and drone technology.

Takeaways
Key takeaways
STPI has successfully created a comprehensive startup ecosystem that supports companies from early prototype stage to global scaling, as demonstrated by 13 startups being recognized across multiple achievement categories
The startups supported by STPI are making significant real-world impact across critical sectors including healthcare (processing 1 million medical scans, flagging cancer cases), agriculture (serving 10,000+ farmers), cybersecurity, and food technology
STPI's support model goes beyond funding to include validation, mentorship, regulatory compliance assistance, global collaboration facilitation, and investor connections, which are crucial for startup success
Several STPI-supported startups have achieved national and international recognition, including National Startup Awards and interest from global leaders like Prime Minister Modi and Bill Gates
The startups demonstrate India's capability to build globally relevant solutions from tier 2 and tier 3 cities, with technologies spanning AI, robotics, IoT, and advanced diagnostics
STPI's early intervention and support during critical periods (such as funding crises) has been instrumental in preventing startup failures and enabling continued growth
Resolutions and action items
Continued collaboration between STPI and the National Productivity Council to support startup ecosystem development
Ongoing support for startups to scale globally while building from India
Maintenance of industry connections and investor access platforms for emerging startups
Unresolved issues
None identified
Suggested compromises
None identified
Thought Provoking Comments
The support we received through the program was not just a funding, it’s a validation. That recognition gave us the confidence to push forward.
This comment reframes the value proposition of startup support programs beyond mere financial assistance. It highlights the psychological and strategic importance of validation in early-stage ventures, suggesting that confidence and recognition can be as crucial as capital for startup success.
This comment set the tone for subsequent presentations by establishing that STPI’s value extends beyond transactional support. It influenced other founders to emphasize the holistic benefits they received, creating a narrative thread about ecosystem building rather than just funding distribution.
Speaker: Devika Chandrasekaran (Fuselage Innovations)
At TectoCell, we build AI-powered diagnostic solutions at the intersection of radiology and DNA sequencing, while addressing the huge havoc of drug resistance and robust clinical trials panning across India
This comment is particularly insightful as it demonstrates the convergence of multiple cutting-edge technologies (AI, radiology, genomics) to address critical healthcare challenges. It showcases how Indian startups are tackling complex, multi-dimensional problems with sophisticated technical solutions.
This elevated the technical discourse of the session, moving from general startup success stories to specific deep-tech applications. It demonstrated the sophistication of the STPI ecosystem and influenced subsequent speakers to highlight their technical innovations more prominently.
Speaker: Dr. Saumya Shukla (TectoCell)
There was a time for us when we, one and a half years back, when we had just two months of cash flow with us. We were thinking a lot what to do and that is when STPI came to our rescue
This comment provides rare transparency about startup vulnerability and near-failure experiences. It’s thought-provoking because it acknowledges the precarious nature of startup journeys and positions support organizations as critical intervention points during crisis moments.
This honest revelation about financial distress added authenticity and emotional depth to the discussion. It shifted the narrative from purely celebratory to more realistic, showing that even successful startups face existential challenges, making STPI’s support more meaningful and relatable.
Speaker: Noor Fatima (EZO5 Solutions)
Bill Gates showed interest in our solution and he invited us in Microsoft to show our solution and he was discussing how he can help us. So now we are going from local to global serving the whole world.
This comment represents a significant validation of the ‘local to global’ innovation model, demonstrating how Indian startups can achieve international recognition and scale. It’s particularly impactful because it involves a globally recognized technology leader acknowledging an Indian healthcare AI solution.
This comment served as a powerful crescendo to the founder presentations, demonstrating the ultimate potential of the STPI ecosystem. It reinforced the global relevance of Indian innovation and likely inspired other founders and attendees about the scalability potential of their own ventures.
Speaker: Meenal Gupta (EZO5 Solutions)
Your achievement demonstrates that innovation from India including tier 1 and tier 2 is both scalable and globally relevant.
This comment is insightful because it explicitly addresses the geographic democratization of innovation in India, acknowledging that breakthrough innovations are emerging from beyond traditional tech hubs. It positions tier 2 and tier 3 cities as legitimate sources of globally competitive solutions.
This comment provided a strategic framework for understanding the broader significance of the day’s celebrations. It elevated the discussion from individual startup success to systemic ecosystem development, emphasizing STPI’s role in enabling distributed innovation across India’s geography.
Speaker: Shri Praveen Kumar (STPI)
Overall Assessment

These key comments transformed what could have been a routine felicitation ceremony into a meaningful discourse about ecosystem building, innovation validation, and the global potential of Indian startups. The progression from Devika’s insight about validation over funding, through Dr. Saumya’s technical sophistication, Noor’s vulnerability about near-failure, to Meenal’s global recognition, created a compelling narrative arc. This sequence demonstrated the complete startup journey – from early validation needs through technical development, crisis management, to international scaling. Shri Praveen Kumar’s closing observation about geographic democratization of innovation provided the strategic context that elevated individual success stories to systemic ecosystem achievements. Together, these comments shaped the discussion into a comprehensive exploration of how support ecosystems can nurture innovation from conception to global impact, while acknowledging both the challenges and extraordinary potential of the Indian startup landscape.

Follow-up Questions

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Powering AI Global Leaders Session AI Impact Summit India

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion features Chris Lehane, OpenAI’s Chief Global Affairs Officer, speaking at an AI Impact Summit in Delhi about democratizing artificial intelligence and India’s strategic role in shaping AI’s global future. Lehane begins by acknowledging previous speakers, including Prime Minister Modi and OpenAI CEO Sam Altman, who both emphasized the importance of “democratic AI.” He introduces the concept of a “capability gap,” explaining that OpenAI’s research shows power users of AI technology create a 7x economic impact compared to non-power users, creating a significant disparity that society must address.


Lehane argues that education serves as the key to closing this capability gap, identifying three critical components: access, literacy, and agency. He highlights that India has 100 million regular AI users, partly due to affordable access with free tools and a paid version costing only $3.99 monthly. For literacy, he emphasizes that people must simply start using AI technology in any capacity to develop proficiency. The most challenging element is agency – fostering an ethos where people view AI as a tool for empowerment rather than just homework assistance.


Lehane positions AI as a general-purpose technology comparable to historical innovations like the wheel, steam power, and electricity, noting it can scale human abilities to think, learn, and create for anyone who can communicate. He draws a historical parallel between AI adoption and the printing press, contrasting Europe’s embrace of knowledge democratization with China’s restrictive approach. Lehane concludes by emphasizing India’s pivotal role as the world’s largest democracy in determining whether AI develops along democratic or autocratic lines globally.


Keypoints

Major Discussion Points:


The Capability Gap in AI Usage: There’s a significant divide between “power users” who leverage AI as assistants, coaches, and work multipliers (creating 7x economic impact) versus casual users who treat it as a basic search function, creating an urgent need to close this gap societally.


Three Pillars of Democratizing AI Through Education: Access (making AI tools affordable and available, like OpenAI’s $3.99/month model in India), literacy (both traditional and AI-specific skills development), and agency (developing the mindset to actively use AI for productive purposes rather than just homework shortcuts).


AI as a General Purpose Technology: AI represents a transformational technology similar to the wheel, steam power, or electricity that fundamentally changes human productive capacity, allowing individuals to own and monetize their labor rather than just sell it in the traditional labor-capital dynamic.


India’s Strategic Role in Global AI Democracy: India’s position as the world’s largest democracy with 100+ million AI users makes it pivotal in determining whether the world develops “democratic AI” or “autocratic AI,” drawing parallels to how Europe and China differently adopted the printing press.


Educational System Transformation: The need to redesign education systems that were built for the industrial age (factory-like structures) to prepare students for the “intelligence age,” fostering agency and entrepreneurial thinking about AI capabilities.


Overall Purpose:


The discussion aims to position India as a crucial strategic partner in democratizing AI globally, emphasizing the urgent need to close capability gaps through improved access, literacy, and agency while highlighting India’s unique opportunity to lead the world toward democratic rather than autocratic AI development.


Overall Tone:


The tone is consistently optimistic and inspirational throughout, with Chris Lehane maintaining an encouraging, partnership-focused approach. He balances urgency about addressing capability gaps with confidence in India’s potential to lead global AI democratization. The tone remains respectful and collaborative, positioning OpenAI not as a vendor but as a strategic partner in a shared mission to benefit all humanity.


Speakers

Speaker: Role/title not specified, appears to be a moderator or host introducing the session and thanking partners


Chris Lehane: Works at OpenAI (mentioned as “what we do at OpenAI”), discussing AI democratization and policy implications


Additional speakers:


Ronnie: Chief Economist at OpenAI and academic professor at Duke University, expertise in economics of AI and employability


Sam Altman: CEO and co-founder of OpenAI (mentioned but did not speak in this transcript)


Prime Minister: (mentioned as having spoken the day before, but did not speak in this transcript)


Rupa: Role/title not specified, mentioned as having discussed literacy in relation to AI


Full session report: Comprehensive analysis and detailed insights

This discussion features Chris Lehane, OpenAI’s Chief Global Affairs Officer, speaking at an AI Impact Summit in Delhi following earlier remarks from the Prime Minister and OpenAI CEO Sam Altman about “democratic AI.” Lehane was introduced after a video about Vahan.ai, with the host acknowledging Ronnie’s parents’ story of being born in India and moving to the U.S., which Lehane praised as exemplifying the themes of his presentation.


The Capability Gap Challenge


Lehane’s central argument focuses on a “capability gap” emerging from AI usage patterns. Drawing from OpenAI’s economics research, he explains that power users—those who use AI as assistants, coaches, and work multipliers rather than enhanced search engines—generate approximately 7x the economic impact of casual users. This disparity threatens to create significant societal inequality, with some individuals thriving economically while others risk being left behind.


Speaking in his informal, conversational style, Lehane acknowledges he’s playing “technologist,” “economist,” and “amateur historian” despite being a history major. He emphasizes that this capability gap represents more than typical technology adoption—it signals potential societal bifurcation where mastery of AI tools determines economic success.


Three Pillars of AI Democratization


Lehane identifies education as the historical “passport” for closing capability gaps and proposes three pillars for democratizing AI:


Access forms the foundation, requiring AI tools to be financially and practically available. India exemplifies this with 100 million regular users—roughly one-third of the population. Lehane notes OpenAI’s strategy of providing free basic tools and affordable premium options, mentioning “about $3.99 a month, if I’m remembering correctly” for advanced features in India, though he expresses uncertainty about the exact figure.


Literacy encompasses both traditional educational skills and AI-specific competencies. Lehane advocates for practical engagement, encouraging people to start using AI for any purpose—citing examples from his friends who use it for astrology or sports betting. He argues that AI literacy develops through hands-on experimentation rather than formal training.


Agency represents the most challenging pillar—fostering a mindset where individuals actively choose productive AI use rather than passive consumption. Lehane estimates only 20% of students demonstrate genuine agency with AI, viewing it as empowerment, while 80% treat it merely as a homework shortcut.


AI as General Purpose Technology


Positioning AI within historical context, Lehane describes it as the latest general purpose technology, following the wheel, domestication of animals, steam power, combustion engines, electricity, and transistors. He notes that for 190,000 years, humans produced essentially what they consumed in a one-to-one ratio, but general purpose technologies over the past 10,000 years have progressively increased productivity.


AI serves as the “ultimate leveling tool,” scaling human abilities to think, learn, create, and produce for anyone capable of communication. This could fundamentally alter the traditional social contract, which Lehane describes as historically being “a calibration, maybe a fight between labor and capital.”


Educational System Transformation


The current educational system, Lehane argues, remains designed for industrial-age requirements. Developed during early American industrialization to transition agricultural workers into factory employment, schools still operate on factory schedules with bells and classroom-to-classroom movement mimicking assembly lines.


As society enters the “intelligence age,” educational systems require fundamental restructuring. Students need to understand AI not as a work-avoidance tool but as capability amplification, fostering agency, creativity, and entrepreneurial thinking rather than preparing for predetermined roles.


Historical Precedent: The Printing Press


Lehane draws parallels between current AI adoption and the printing press introduction in the late 1400s, acknowledging that “none of these [analogies] are perfect” and will “rhyme more than repeat.” In Europe, political fragmentation prevented centralized control of information, leading to democratized knowledge that contributed to the Age of Discovery, Enlightenment, and Reformation.


Conversely, China’s dynastic rule recognized the printing press’s potential to spread challenging ideas and restricted its use, limiting transformative benefits. This historical parallel frames contemporary AI development as a choice between democratic and autocratic implementations.


India’s Strategic Role


Lehane positions India as uniquely influential in global AI development. As the world’s largest democracy with over 100 million AI users and rapidly growing adoption—particularly in developer tools like Codex, which is growing fastest in India—the country’s approach will significantly influence international patterns.


He emphasizes viewing India as a “strategic partner” rather than merely a customer market, recognizing that successful AI democratization requires collaboration between technology companies and democratic institutions. India’s success in demonstrating democratic AI implementation could provide a model for other nations.


Conclusion


Noting his concern about keeping attendees from dinner, Lehane concludes by emphasizing the recursive nature of AI acceleration and the historical significance of current decisions. He describes the privilege of being in Delhi during “an incredible week” and positions India’s choices as crucial for determining whether AI serves democratic or autocratic ends globally.


The presentation combines practical implementation strategies with broader considerations about technology’s societal role, presenting AI development as both a technical and civilizational challenge requiring thoughtful collaboration from democratic societies. Throughout, Lehane maintains his conversational tone while acknowledging his limitations in speaking about systems beyond his expertise, particularly noting he’s “a lot more familiar with the U.S. public education system than certainly the Indian one.”


Session transcript
Complete transcript of the session
Speaker

The Chief Global Affairs Officer to join us for this moment. Please give a big round of applause to all our partners. Thank you. Thank you so much. Thank you so much. Thank you for your partnership. Thank you. Thank you. Thanks. Next, we have a short video coming up bridging these two sessions. which is what we talked about in the first section with Ronnie and the experts over here about the economics of AI, employability, what we can do with students. There’s a company called Vahan.ai that has done some incredible work in this space to be able to connect talent together with jobs. We have a short video and right after that we’ll have Mr. Chris Lehane giving us a talk about what we do at OpenAI.

Thank you. Over to you, Chris.

Chris Lehane

Thank you, thank you. Thank you everyone. Thanks for those who’ve hung out for a little bit longer. I know I am standing between you and probably dinner, and given how good the food here is in India, I am very cognizant that I should be pretty quick because I don’t want to stand in your way. First of all, great panel. It was awesome just to hear those different thoughts and perspectives. And Ronnie, who I think is one of the most excited people here in Delhi for this Impact Summit, your parents would be very proud of you in all seriousness. They were born here, they came to the U.S., and then to have their son coming and doing an event like this is a tremendous story.

So thank you. And Ronnie, thank you for everything that you do at OpenAI. And I really want to thank the OpenAI team that has helped put this together and all the incredible work that’s been done over the course of this week. And really thank everyone here in the room for participating in this summit. It is really a unique and special moment in time here in India. You know, yesterday we all heard from the Prime Minister. We also heard from Sam Altman, our CEO and co-founder. And, you know, the commonality in what they talked about was really this idea of democratic AI. I think the Prime Minister, not surprisingly, was incredibly eloquent in talking about just how important it is to get that right.

And Sam, I think, built on that in his remarks. And something that Ronnie mentioned, I think, deserves some unpacking because it’s directly related to this democratizing of AI concept. And Ronnie, you had touched on the capability gap. So let me just unpack that for a couple seconds because I do think it’s at the core of this concept of democratic AI. And so what we know from our research, and really the research that Ronnie and his team do, is that there’s something called this capability gap. And what that really means is the technology continues to accelerate. In fact, there’s a recursive nature to it right now. So that acceleration is potentially going to become even faster and faster.

And what we’re seeing is that there is a subset of users. Think of them as power users. And those power users who are using the technology, and Ronnie I think you did your survey of how people are using it. I’m not sure if the astrologist counts as a power user, but I think some of the other examples, we’re getting there and perhaps it does. But what we’re seeing from those power users, so not just those who are using it for sort of a more comprehensive search function, but they’re really using it as an assistant, as a coach, as a multiplier of their work, is they are effectively creating a 7x economic impact. So put that in really simplistic or reductionist terms.

If you’re at a company and you’re a power user of our tools or AI generally, you are likely delivering a 7x value vis-a-vis a non-power user for your employer. Or if you’re self-employed and using it yourself. And so I think we’re really at this moment in time and we need to begin thinking about how do we close that capability gap, right? Because there’s going to be a subset of folks who left to their own are going to do very well by this, but we need to be thinking about society as a whole as we go forward. You know, Ronnie, in addition to being a chief economist, is also an academic professor at Duke.

A number of the folks up here had academic backgrounds. And we do know that over the course of human history, education ends up being the passport to close these types of capability gaps. And I think as we think about the role of education going forward, there’s really three elements to it here. Some of them are touched on in the conversation. The first is access. I mean, access is core to democratizing AI. You know, here in India, we have a hundred million folks who use this on a regular basis. Think about a third of the population who use this on a regular basis.

And amongst the reasons why there’s so many people using it here in India, I mean, we have 800 million globally, is because the vast majority are able to access our tools for free. And even the paid version here in India, Go, is a relatively very affordable model. I think it’s about $3.99 a month, if I’m remembering correctly, okay. And so that access piece is really important. You have to have access to this if you’re going to have any chance to participate in those economics.

The second piece, and I think Rupa hit on this, is literacy. And, you know, this is literacy in the sense of, you know, reading and writing and arithmetic, and AI literacy. And it really is: start using the tools. I get asked all the time at events like this and other events, you know, what should my kid major in in college? Or what? Start. Start using the technology. Start playing with it. Use it for astrology. I have friends who use it for sports betting. Just use it in any type, shape, way, or form that you can, because once you start to use it, you will actually become really, really, really good at it. And then the third piece, and the third piece, I think, is really the most challenging and what we all have to get right, is the agency piece.

This is a technology, and this is a sophisticated crowd. You all understand this. But this is a technology that at its core is a general purpose technology. So what are general purpose technologies? We’ve got Ronnie, who’s an economist, who will probably come kick me when I do this description of it. But these are transformational technologies that just change the ability of humans to produce. So if you think about it, humans have been around roughly 200,000 years. For the first 190,000 of those years, humans produced basically what they could eat. And there was sort of a direct one-to-one ratio. And then about 10,000 years ago, you started to get stuff like the wheel, and later on you got the domestication of animals, then you got steam power, and then you got the combustion engine, your printing press, electricity, the transistor.

Each one of those drove productivity up higher and higher and drove human progress. This AI is an ultimate leveling tool. It scales the ability of any person, so long as they can talk, to be able to think, to learn, to create, to build, and to produce. But you have to have agency. You actually have to want to use it for those purposes. And one of the things that’s very much in my head, and I’m a lot more familiar with the U.S. public education system than certainly the Indian one, so what I’m going to talk about is a little bit more from a U.S. perspective, although I do think it translates. So in the U.S., the public education system that we currently have was really created at the early stages of the industrial age in the United States.

And it was basically designed to help teach folks to come in from rural areas, where they had mostly been in an agricultural economy, and be able to work in factories. So in the U.S., the time that school started sort of aligned with when factories opened. The fact that you went from classroom to classroom was basically designed to teach you to work on an assembly line. Even the bells that you got to move you around were designed to start to get you to understand and think as if you were working in the factory. There were also other pieces built in: civics courses. I’m old enough that we had home ec and wood shop and other types of things that basically taught you core skills to be able to work in a factory. Well, as we enter into this intelligence age, what is the version of that that is going to change how people think and understand? It’s almost an ethos that we have to build. You know, Sam often talks about the fact that if you probably look at kids in the school right now, about 20% of those kids actually really do have agency.

They’re excited to learn this. Maybe the other 80% see it as a really easy way to get their homework done. That’s an ethos that we need to change. We need to get to a place where closer to 100% of those students are going to really think about this as a technology that can allow me to succeed. It can allow me to actually take my labor and not necessarily have to sell my labor or get paid for my labor, but I actually get to own my labor and make money off of my labor. If you really think about how the social contract has generally worked, it has always been this calibration, maybe a fight between labor and capital.

This technology allows folks who are using their labor to be able to actually own it and participate in it in a fundamentally different way. For us, thinking about that agency piece is really critical. I’ll end this by just saying I think India is in a unique, unique moment to lead on this. The number of folks who are already using it. I think, Rana, you may have mentioned that Codex, which is our developer tool, this is the place in the world where it’s growing the fastest. And I’m going to end with a little bit of a historic analogy. I get to sometimes play a technologist on stages like this, and even a little bit of an economist today.

But I was a history major in college, so I get to play amateur historian, emphasis on amateur. Everyone has their own favorite historical analogy for this technology, for AI. The one that I’ve really been thinking about a lot lately, and none of these are perfect. They’re not exact replications. It’s going to rhyme more than repeat. But the one that’s very much in my head these days is the printing press. And I will sort of share two different parts of the world when the printing press came out. So the printing press developed late 1400s. Most of the world was more or less in a very similar economic place.

But two places went in very different directions. One was Europe and the other was China. In Europe, because there was a little bit of a baseline of actual literacy from the Catholic Church, and moreover because it was a fragmented continent with different countries, that fragmentation really allowed people to use the printing press to spread ideas. No one government actually controlled what was being produced by the printing press. And as a result, you had the democratization of knowledge and ideas and thinking in a way that humans had never experienced at scale up to that moment in time. And there’s a direct through line in Europe from the printing press to the democratization of knowledge, to the Age of Discovery, the age of science, the Enlightenment, to the Reformation, and the economic uplift of Europe. The other extreme was what took place in China, where, under the dynasty at that time, there was a real concern that the printing press was going to in fact allow knowledge to be spread, and the spreading of that knowledge would potentially generate a challenge to the authoritarian government in place. And so as we sit here at this moment in time, right, there is going to be a huge question as to whether the world is built out on democratic AI or autocratic AI, a centralized version of it. And India is going to have the dispositive voice on how that plays out. This is the world’s largest democracy. If the world’s largest democracy is able to democratize AI here, that means we’re going to be democratizing AI around the world. So this is a moment in time for this incredible country that’s going to be playing a leading role, not just for the people here, as important as that is, but for people around the world. And so we feel incredibly privileged to be able to be here in Delhi, in India, at this moment.

It’s amongst the reasons why we don’t see India as a customer. We see India as a strategic partner, and not just a strategic partner for us as a business, but a strategic partner for us to be able to deliver on our company’s mission, which is building AI that benefits all of humanity. Thank you very much for being here. It’s been an incredible week. Talk to you guys soon. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
C
Chris Lehane
14 arguments, 185 words per minute, 2347 words, 758 seconds
Argument 1
There is a capability gap where power users of AI create 7x economic impact compared to non-power users, requiring society-wide solutions to close this gap
EXPLANATION
Chris Lehane explains that as AI technology accelerates, there’s a growing divide between power users who use AI as an assistant, coach, and work multiplier versus those who use it for basic functions like search. This creates a significant economic disparity that needs to be addressed at a societal level.
EVIDENCE
Research shows power users deliver 7x economic value compared to non-power users for their employers or in self-employment
MAJOR DISCUSSION POINT
AI capability gap and economic inequality
Argument 2
Education serves as the passport to closing capability gaps throughout human history and will be crucial for AI democratization
EXPLANATION
Lehane argues that historically, education has been the key mechanism for closing technological and capability gaps. He emphasizes that education will play the same critical role in ensuring AI benefits are distributed equitably across society.
EVIDENCE
Historical precedent showing education has consistently served as the mechanism to close capability gaps throughout human history
MAJOR DISCUSSION POINT
Role of education in AI democratization
Argument 3
Democratic AI is essential and was emphasized by both the Prime Minister and OpenAI’s CEO as a core concept
EXPLANATION
Lehane highlights that both India’s Prime Minister and Sam Altman spoke about the importance of democratic AI during the summit. This concept represents a shared vision for ensuring AI development and deployment serves democratic values and broad societal benefit.
EVIDENCE
Speeches from the Prime Minister and Sam Altman at the summit both focused on democratic AI as a central theme
MAJOR DISCUSSION POINT
Democratic AI as a foundational principle
AGREED WITH
Speaker
Argument 4
Access is fundamental to democratizing AI, with 100 million users in India accessing tools mostly for free or at affordable rates like $3.99/month
EXPLANATION
Lehane emphasizes that widespread access to AI tools is the foundation of democratization. He points to India’s success in achieving broad adoption through free access and affordable pricing models as a key example of how to ensure equitable access.
EVIDENCE
100 million users in India (one-third of population) use AI tools regularly, with most accessing free versions and paid version available for $3.99/month
MAJOR DISCUSSION POINT
Affordable access as prerequisite for AI democratization
Argument 5
AI literacy requires people to start using the technology in any form – whether for astrology, sports betting, or other applications – to develop proficiency
EXPLANATION
Lehane advocates for a practical approach to AI literacy, encouraging people to begin using AI tools for any purpose that interests them. He argues that hands-on experience, regardless of the application, is the best way to develop AI proficiency and understanding.
EVIDENCE
Examples of diverse AI usage including astrology and sports betting as valid starting points for developing AI literacy
MAJOR DISCUSSION POINT
Practical approach to building AI literacy
Argument 6
Agency is the most challenging aspect, requiring people to actively choose to use AI as a tool for thinking, learning, creating, and producing
EXPLANATION
Lehane identifies agency as the most difficult component of AI democratization, requiring individuals to actively embrace AI as a productivity and creativity tool. He notes the need to shift from passive consumption to active utilization of AI capabilities.
EVIDENCE
Observation that only about 20% of students currently show real agency in learning AI, while 80% see it merely as a homework shortcut
MAJOR DISCUSSION POINT
Building agency and proactive AI adoption
Argument 7
AI represents the ultimate leveling tool that scales human ability to think, learn, create, and produce for anyone who can communicate
EXPLANATION
Lehane positions AI as a transformative general-purpose technology that can amplify human capabilities across all domains of thinking and production. He emphasizes that this scaling effect is available to anyone with basic communication skills, making it potentially democratizing.
EVIDENCE
Historical context of general-purpose technologies like the wheel, steam power, electricity, and transistors that drove human progress over 200,000 years of human history
MAJOR DISCUSSION POINT
AI as a general-purpose technology for human capability enhancement
Argument 8
This technology allows workers to own their labor rather than just sell it, fundamentally changing the traditional labor-capital relationship
EXPLANATION
Lehane argues that AI enables a fundamental shift in economic relationships, allowing workers to retain ownership of their enhanced labor rather than simply selling it to employers. This represents a potential transformation of the traditional dynamic between labor and capital.
EVIDENCE
Historical context of the ongoing calibration and fight between labor and capital throughout human history
MAJOR DISCUSSION POINT
Transformation of labor-capital relationships through AI
Argument 9
Current education systems designed for the industrial age need transformation to prepare students for the intelligence age
EXPLANATION
Lehane explains that existing education systems were designed to prepare workers for industrial factory work, with schedules, classroom transitions, and bells mimicking factory operations. He argues these systems need fundamental redesign for the AI era.
EVIDENCE
Detailed explanation of how U.S. public education system was designed to transition agricultural workers to factory work, including timing, classroom structure, and bell systems that mirrored factory operations
MAJOR DISCUSSION POINT
Need for educational system transformation
Argument 10
India is uniquely positioned to lead AI democratization globally as the world’s largest democracy with rapidly growing AI adoption
EXPLANATION
Lehane positions India as having unique advantages for leading global AI democratization efforts, combining its status as the world’s largest democracy with impressive AI adoption rates. He sees India’s approach as potentially setting the global standard.
EVIDENCE
India’s status as world’s largest democracy, 100 million AI users, and fastest growth in Codex (developer tool) usage globally
MAJOR DISCUSSION POINT
India’s leadership role in global AI democratization
Argument 11
The choice between democratic AI versus autocratic AI will determine the global direction, with India having the decisive voice
EXPLANATION
Lehane frames the current moment as a critical juncture where the world will choose between democratic and autocratic approaches to AI development and governance. He argues that India’s decisions will be pivotal in determining which path the world takes.
EVIDENCE
Historical analogy of the printing press showing divergent outcomes between Europe (democratization of knowledge) and China (authoritarian control)
MAJOR DISCUSSION POINT
Global choice between democratic and autocratic AI governance
Argument 12
OpenAI views India as a strategic partner rather than just a customer, essential for achieving the mission of building AI that benefits all humanity
EXPLANATION
Lehane emphasizes that OpenAI’s relationship with India goes beyond commercial interests to strategic partnership aligned with the company’s core mission. He positions India as crucial for achieving the goal of beneficial AI for all humanity.
EVIDENCE
OpenAI’s company mission of building AI that benefits all humanity and the recognition of India’s role in achieving this global objective
MAJOR DISCUSSION POINT
Strategic partnership approach to AI development
AGREED WITH
Speaker
Argument 13
The printing press analogy illustrates how technology adoption can lead to vastly different outcomes – Europe’s democratization of knowledge versus China’s authoritarian control
EXPLANATION
Lehane uses the historical example of the printing press in the late 1400s to show how the same technology can lead to dramatically different societal outcomes. Europe’s fragmented political structure allowed knowledge democratization, while China’s centralized control suppressed it.
EVIDENCE
Detailed historical comparison showing Europe’s path from printing press to democratization of knowledge, Age of Discovery, Enlightenment, and economic uplift, versus China’s authoritarian suppression under the dynasty system
MAJOR DISCUSSION POINT
Historical precedent for technology’s divergent societal impacts
Argument 14
India’s approach to AI will influence whether the world develops democratic or centralized autocratic AI systems
EXPLANATION
Lehane argues that India’s decisions about AI development and governance will have global implications, potentially determining whether the world moves toward democratic or autocratic AI systems. He sees India’s choices as having worldwide consequences beyond its borders.
EVIDENCE
India’s position as the world’s largest democracy and its potential to influence global AI governance patterns
MAJOR DISCUSSION POINT
India’s global influence on AI governance models
S
Speaker
1 argument, 78 words per minute, 137 words, 104 seconds
Argument 1
Partnership acknowledgment and appreciation for the collaborative efforts in organizing the summit and advancing AI initiatives
EXPLANATION
The speaker expresses gratitude to partners and participants for their collaborative efforts in organizing the summit and advancing AI initiatives. This represents recognition of the collective effort required for successful AI development and implementation.
EVIDENCE
Thanks given to the Chief Global Affairs Officer, partners, and participants in the summit
MAJOR DISCUSSION POINT
Collaborative approach to AI development
AGREED WITH
Chris Lehane
Agreements
Agreement Points
Democratic AI as a foundational principle
Speakers: Chris Lehane, Speaker
Democratic AI is essential and was emphasized by both the Prime Minister and OpenAI’s CEO as a core concept
Partnership acknowledgment and appreciation for the collaborative efforts in organizing the summit and advancing AI initiatives
Both speakers emphasize the importance of democratic AI and collaborative approaches to AI development, with Chris Lehane highlighting how both the Prime Minister and OpenAI’s CEO focused on this concept during the summit
Collaborative partnership approach to AI development
Speakers: Chris Lehane, Speaker
OpenAI views India as a strategic partner rather than just a customer, essential for achieving the mission of building AI that benefits all humanity
Partnership acknowledgment and appreciation for the collaborative efforts in organizing the summit and advancing AI initiatives
Both speakers recognize the importance of partnership and collaboration in AI development, with emphasis on working together rather than traditional customer-vendor relationships
Similar Viewpoints
Both speakers demonstrate alignment on the collaborative nature of AI development and the importance of partnerships in achieving beneficial AI outcomes for society
Speakers: Chris Lehane, Speaker
OpenAI views India as a strategic partner rather than just a customer, essential for achieving the mission of building AI that benefits all humanity
Partnership acknowledgment and appreciation for the collaborative efforts in organizing the summit and advancing AI initiatives
Unexpected Consensus
Limited speaker diversity but strong alignment
Speakers: Chris Lehane, Speaker
Democratic AI is essential and was emphasized by both the Prime Minister and OpenAI’s CEO as a core concept
Partnership acknowledgment and appreciation for the collaborative efforts in organizing the summit and advancing AI initiatives
While there are only two speakers in this transcript, there is unexpectedly strong alignment between the corporate representative (Chris Lehane from OpenAI) and the event organizer on fundamental principles of democratic AI and collaborative approaches, suggesting broad consensus on these core issues
Overall Assessment

The transcript shows strong consensus on democratic AI principles, collaborative partnerships, and the importance of India’s role in global AI development. Both speakers emphasize working together rather than traditional business relationships.

Very high level of consensus with no apparent disagreements. The alignment suggests broad agreement on fundamental AI governance principles and the collaborative approach needed for beneficial AI development. This consensus has positive implications for AI democratization efforts and international cooperation in AI governance.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

No significant disagreements identified in the transcript

This transcript represents primarily a single-speaker presentation by Chris Lehane from OpenAI, with only brief introductory remarks from another speaker. The content is largely a monologue presenting OpenAI’s perspective on AI democratization, the capability gap, and India’s strategic role. Without substantive input from multiple speakers with differing viewpoints, there are no meaningful disagreements to analyze. The format appears to be more of an informational presentation rather than a debate or discussion with competing perspectives.

Takeaways
Key takeaways
There is a critical ‘capability gap’ where AI power users generate 7x economic impact compared to non-users, requiring urgent societal intervention to prevent widening inequality
Three pillars are essential for AI democratization: Access (affordable/free tools), Literacy (hands-on experience with AI), and Agency (proactive adoption for productivity)
AI represents a transformational general purpose technology that allows individuals to own rather than just sell their labor, fundamentally changing the traditional labor-capital relationship
India holds a unique strategic position as the world’s largest democracy to influence whether global AI development follows a democratic or autocratic path
Current education systems designed for the industrial age must be transformed to prepare students for the ‘intelligence age’ with new ethos and approaches
The historical printing press analogy demonstrates how technology adoption can lead to vastly different societal outcomes – democratization of knowledge versus authoritarian control
OpenAI views India as a strategic partner (not just a customer) essential for achieving their mission of building AI that benefits all humanity
Resolutions and action items
Need to develop new educational frameworks and ethos to prepare students for the intelligence age, moving beyond industrial-age models
Must work to close the capability gap through improved access, literacy, and agency initiatives
Continue partnership between OpenAI and India to advance democratic AI development globally
Unresolved issues
How specifically to transform current education systems from industrial-age to intelligence-age models
What concrete mechanisms will be implemented to close the capability gap beyond the three pillars mentioned
How to shift student mindset from using AI for easy homework completion to genuine agency and ownership
Specific policy or regulatory frameworks needed to ensure democratic rather than autocratic AI development
How to scale successful AI democratization from India to other regions globally
Suggested compromises
None identified
Thought Provoking Comments
The concept of the ‘capability gap’ – where power users of AI create 7x economic impact compared to non-power users, creating a potential divide in society where some will do very well while others may be left behind.
This insight reframes AI adoption not just as a technological challenge but as a fundamental economic and social equity issue. It quantifies the stakes involved and makes clear that AI literacy isn’t just about convenience – it’s about economic survival and competitiveness in the future economy.
This concept becomes the foundational framework for the entire discussion that follows. It shifts the conversation from celebrating AI adoption to urgently addressing how to prevent a two-tiered society, leading directly into his three-pillar solution of access, literacy, and agency.
Speaker: Chris Lehane
The redefinition of the social contract: ‘This technology allows folks who are using their labor to be able to actually own it and participate in it in a fundamentally different way… If you really think about how the social contract has generally worked, it has always been this calibration, maybe a fight between labor and capital. This technology allows folks who are using their labor to be able to actually own it.’
This is a profound reimagining of economic relationships that have defined human society for centuries. It suggests AI could fundamentally alter the power dynamics between workers and capital owners, potentially giving individuals unprecedented control over their economic output.
This comment elevates the discussion from practical AI implementation to philosophical questions about the future of work and economic systems. It provides a compelling vision that transforms AI from a potentially threatening force to an empowering one, fundamentally shifting the narrative tone.
Speaker: Chris Lehane
The printing press historical analogy comparing Europe’s fragmented, democratic approach (leading to democratization of knowledge, Age of Discovery, Enlightenment) versus China’s centralized, controlled approach under dynastic rule.
This analogy is particularly insightful because it demonstrates how the same transformative technology can lead to completely different societal outcomes based on how it’s governed and distributed. It shows that technology alone doesn’t determine progress – the social and political context matters enormously.
This historical parallel transforms the discussion from a technical presentation into a geopolitical and civilizational choice point. It creates urgency around India’s role and positions the country’s decisions as having global consequences, elevating the stakes of the entire conversation.
Speaker: Chris Lehane
The agency problem in education: ‘if you probably look at kids in the school right now about 20% of those kids actually really do have agency. They’re excited to learn this. Maybe the other 80% see it as a really easy way to get their homework done. That’s an ethos that we need to change.’
This observation cuts to the heart of a critical implementation challenge – that access and literacy alone aren’t sufficient if people don’t have the mindset to use AI productively. It identifies a cultural and educational shift that’s needed beyond just technical training.
This insight adds complexity to the solution framework, showing that closing the capability gap isn’t just about providing tools and training, but requires a fundamental shift in how people think about learning and technology. It suggests that educational systems need complete reimagining, not just AI integration.
Speaker: Chris Lehane
Positioning India as having ‘the dispositive voice’ in determining whether the world develops ‘democratic AI or autocratic AI’ because ‘this is the world’s largest democracy’ and if India democratizes AI, ‘that means we’re going to be democratizing AI around the world.’
This comment is strategically brilliant because it appeals to India’s sense of global leadership and responsibility while making a compelling case for democratic AI development. It suggests India’s choices will influence global AI governance patterns.
This positions India not as a recipient of AI technology but as a global leader whose decisions will shape the future for all humanity. It transforms the entire discussion from a business presentation into a call for civilizational leadership, creating a sense of historical responsibility and opportunity.
Speaker: Chris Lehane
Overall Assessment

These key comments transformed what could have been a standard corporate presentation into a sophisticated analysis of AI’s societal implications. Lehane skillfully escalated the discussion through multiple levels – from individual economic impact to social equity, from current challenges to historical parallels, and finally to India’s role in shaping global AI governance. The capability gap concept provided the analytical foundation, while the historical analogy and geopolitical framing created urgency and elevated India’s perceived importance in the global AI landscape. Together, these insights reframed AI adoption from a technological challenge to a civilizational choice point, making the audience stakeholders in a historic moment rather than passive recipients of technology.

Follow-up Questions
How can we effectively close the capability gap between power users and non-power users of AI technology?
This is critical because power users are creating 7x economic impact compared to non-power users, creating potential societal inequality that needs to be addressed
Speaker: Chris Lehane
What specific educational reforms are needed to prepare students for the intelligence age, moving beyond the industrial age model?
The current education system was designed for the industrial age and may not be adequate for preparing students to effectively use AI as a general purpose technology
Speaker: Chris Lehane
How can we change the ethos so that closer to 100% of students see AI as a tool for success rather than just an easy way to complete homework?
Currently only about 20% of students have real agency with AI technology, while 80% use it passively, which limits the democratization potential
Speaker: Chris Lehane
What are the specific mechanisms by which AI can allow workers to own their labor rather than just sell it?
This represents a fundamental shift in the social contract between labor and capital that requires further exploration and practical implementation strategies
Speaker: Chris Lehane
How can India’s approach to AI democratization influence global adoption patterns and prevent autocratic AI implementations?
As the world’s largest democracy, India’s choices regarding AI implementation could determine whether the world develops democratic or autocratic AI systems
Speaker: Chris Lehane
What specific research and data support the 7x economic impact claim for AI power users?
While the statistic is mentioned as coming from research by Ronnie’s team, the specific methodology and findings warrant further investigation
Speaker: Chris Lehane

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Invest India Fireside Chat

Session at a glance
Summary, keypoints, and speakers overview

Summary

The session was a fireside chat between Nivruthi Rai and venture investor Vinod Khosla, centered on artificial intelligence’s strategic role for India [23-26]. Rai began by tracing five decades of semiconductor progress, from pure performance to performance per watt per area, and used that history to illustrate today’s AI-driven power and supply-chain pressures on data centers [36-43]. She highlighted that global data-center capacity already consumes about 1% of the world’s energy and that the United States is planning to double its footprint, making power a critical bottleneck for AI scaling [53-58]. Rai also warned that memory-chip supply is concentrated in three firms, that GPU and HBM fab capacity falls far short of the annual demand from new AI workloads, and that AI therefore remains capital-intensive [66-70]. She framed three strategic questions for India, whether to build capacity, capability, or consumption of AI, before turning the floor to Khosla [98-101].


Khosla replied that investment is justified only if AI can be deployed widely, and he expects technology capability to far exceed current expectations within five years [118-123]. He cautioned that political decisions, such as Germany’s restriction on Sunday robot retail, can impede deployment, so democratic permission is a key factor [129-136]. Pointing to Indian examples, Khosla cited the fast-growing software firm Emergent, the multilingual voice assistant Sarvam, and his vision of AI-powered doctors, tutors and agronomists to deliver public services at scale [140-149][155-162]. On the impact on the BPO sector, he argued that AI will replace outsourced back-office work more easily than internal staff, and that most such services could be automated within five years, though contractual obligations will create a transition period [266-274].


He criticized the Indian venture-capital community for being overly risk-averse and for focusing on short-term revenue plans and IRR calculations, and urged investors to accept high-risk, high-failure bets to enable breakthrough innovation [341-357][360-363]. When Rai suggested concentrating on a limited set of use cases, Khosla disagreed, insisting that progress comes from building a single, general artificial super-intelligence rather than many narrow systems [238-244][247-254]. Both agreed that compute-efficiency research, such as data-efficient models and checkpoint-free training, could dramatically lower power consumption and accelerate AI adoption [185-192][216-218].


Khosla further projected that within a decade AI scientists will outnumber human researchers, turning AI development itself into an exponential engine of innovation [220-225]. The conversation concluded that AI must move from an elite, capital-intensive phase to a utility that underpins India’s economic productivity, healthcare, education and agriculture [80-84][96-101][467-470]. Overall, the dialogue underscored the urgency of expanding AI infrastructure, embracing high-risk investment, and leveraging AI as a public-good platform to transform India’s future [85-88][341-350].


Keypoints


Major discussion points


AI infrastructure bottlenecks and capital intensity – Rai outlines the massive power and supply-chain constraints of today’s AI workloads (e.g., data-center energy use, GPU/HBM shortages, limited fab capacity) and stresses that AI remains a capital-intensive “railroad-type” investment that must be built deliberately [66-71][72-75][85-88].


India’s strategic AI agenda – Both speakers argue that AI is pivotal for India’s economic productivity, national security and social services, proposing large-scale public-sector deployments such as AI-driven doctors, tutors and agronomists, and asking whether the country should build capacity, capability and consumption simultaneously [96-101][146-158].


Investment philosophy, risk tolerance, and VC culture – Khosla emphasizes that large, risky bets are essential for breakthrough AI, criticises the Indian VC community for being overly risk-averse and for relying on IRR metrics, and shares his personal “willingness to fail” as a driver of success [117-124][341-358].


Future of AI research: compute-efficiency and AI-generated scientists – He describes heavy investment in making models far more data- and compute-efficient, predicts inference costs dropping dramatically, and foresees AI itself becoming the primary researcher across domains (AI scientists, AI material scientists, etc.) [186-197][199-204][220-225].


Education, talent development and the transition away from BPO – The conversation moves to how AI will reshape learning (dorm-centric, AI-augmented education) and the Indian workforce, warning that traditional BPO/IT services will be displaced and urging a shift toward AI-enabled skill sets [398-415][266-283].


Overall purpose / goal


The fireside chat is designed to move beyond high-level hype and examine the concrete technical, economic and policy challenges of scaling AI, particularly in India. Rai sets the stage with a technical-infrastructure overview, then uses Khosla’s experience to explore investment strategies, societal impact, and concrete pathways for India to become an AI leader.


Overall tone and its evolution


– The session opens with a formal, reverent introduction of the speakers ([1-22]) and a technical, data-driven tone as Rai details semiconductor and data-center constraints ([36-71]).


– It shifts to a visionary, optimistic mood when discussing AI’s societal benefits for India and the need for capacity building ([96-101][146-158]).


– Khosla’s interjections introduce a pragmatic, sometimes blunt tone, especially when critiquing VC risk-aversion and political roadblocks ([117-124][341-358]).


– As the dialogue progresses, the tone becomes forward-looking and enthusiastic, highlighting breakthroughs in compute efficiency and AI-generated research talent ([186-204][220-225]).


– Near the end, the conversation adopts an inspirational, almost rally-cry style, urging bold educational reforms and a rapid transition away from legacy BPO models ([398-415][266-283]).


Overall, the tone moves from analytical to hopeful, interspersed with candid criticism, and finishes on an energetic call to action.


Speakers

Vinod Khosla – Venture Capitalist; founder of Khosla Ventures; investor in technology, AI, clean-tech, biotech; former Sun Microsystems co-founder. [S1][S2]


Nivruthi Rai – Engineer with 30 years at Intel; board member; represents India in global arenas; works on Ease of Doing Business (EODB) issues. [S9][S10]


Moderator – Unnamed session moderator who introduced speakers and facilitated the fireside chat.


Audience – Members of the audience who asked questions during the session.


Additional speakers:


(none)


Full session report
Comprehensive analysis and detailed insights

The session opened with a formal introduction by the moderator, who highlighted the breadth of Vinod Khosla’s five-decade career – from an immigrant engineer in Delhi to a serial investor who has left his “fingerprints” on companies such as Sun Microsystems, Google, OpenAI and many others [4-20]. Khosla’s trajectory was framed in five phases: early pragmatic engineering, the open-systems era, the venture-capital turn with clean-tech/biotech, a macro-infrastructure focus that included OpenAI, and the current “era of abundance” [6-18].


Nivruthi Rai then set the technical stage by tracing the evolution of semiconductor performance. She explained that the industry moved from a pure focus on raw speed to a three-fold race for “performance per watt per area” [36-43]. This shift matters because today’s AI workloads already consume roughly 1% of global electricity, a figure that will double as the United States plans to double its data-center footprint within three years [53-58]. The power challenge is compounded by a fragile supply chain: high-bandwidth memory (HBM) chips are sourced from only three firms [66], and fab capacity falls far short of annual GPU and HBM demand: two logic fabs’ worth and ten memory fabs’ worth of capacity would be needed each year, yet only five memory fabs exist [68-70]. Rai emphasized that “renewable and nuclear is the only way” to meet the looming energy demand [70-73].
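As a rough illustration, the power figures quoted in the session imply a steep growth rate. The following back-of-envelope sketch uses only the speakers' numbers (80 GW installed, roughly 1% of global electricity, doubling within about three years); it is an arithmetic illustration, not independently verified data.

```python
# Back-of-envelope sketch of the data-center power figures quoted in the
# session: ~80 GW installed (~1% of global electricity), doubling in ~3 years.
# All inputs are the speakers' claims, not independently verified numbers.

def implied_annual_growth(multiple: float, years: float) -> float:
    """Compound annual growth rate implied by reaching `multiple` of
    today's capacity after `years` years."""
    return multiple ** (1.0 / years) - 1.0

current_gw = 80.0      # installed data-center capacity quoted by Rai
global_share = 0.01    # ~1% of world electricity, per the session

growth = implied_annual_growth(2.0, 3.0)  # doubling over three years
print(f"implied growth rate: {growth:.1%} per year")
print(f"capacity after doubling: {current_gw * 2:.0f} GW "
      f"(~{global_share * 2:.0%} of global electricity)")
```

Doubling in three years works out to roughly 26% compound growth per year, which is the scale of buildout behind the session's concern about power availability.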


Rai used this infrastructure backdrop to pose three strategic questions for India: should the country build AI capacity, develop capability, or drive consumption? [98-101] She argued that AI is pivotal for India’s economic productivity, national security, and information governance, and that the first-order priority must be to deliver AI-enabled public services, such as AI-enabled doctors, AI tutors, and AI agronomists, through existing Aadhaar identity verification and UPI payment infrastructure so that benefits reach every citizen, especially the rural poor [145-158][146-152].


Vinod Khosla responded that large-scale AI investment is justified only if the technology can be deployed widely, because the resulting economic upside could generate “hundreds of billions of revenue per year” [118-128]. He warned that political decisions can block deployment, citing Germany’s ban on Sunday robot retail as an example of how “politics … will get in the way” and that “capitalism is by permission of democracy” [129-136]. Khosla stressed that democratic permission is the ultimate gatekeeper for AI’s commercialisation.


Turning to Indian examples, Khosla highlighted the Indian software firm Emergent, which he said is the fastest-growing software company he has seen, noting its rapid expansion since its launch eight months ago [140-142]. He also cited the multilingual voice-assistant Sarvam, which already handles a million minutes of regional-language calls per day [173-176]. He reiterated that AI-enabled doctors, tutors and agronomists could serve millions of underserved Indians, leveraging Aadhaar for identity and UPI for payments [146-152][155-162]. He further illustrated national-level AI rollout by mentioning the United Arab Emirates’ decision to provide all citizens with free access to ChatGPT [460-462].


Both speakers agreed that AI’s current capital intensity stems from hardware bottlenecks. Rai stressed the scarcity of GPUs and HBM (“80% from three companies only”) and the need for additional fabs and renewable or nuclear power to sustain growth [66-71]. Khosla countered that algorithmic and systems innovations can dramatically reduce the hardware burden: he described a “checkpoint-free” training technique that would double usable compute without adding chips or power [210-215], and noted that his firm is investing heavily in data-efficient and compute-efficient models, a line of work he returned to throughout the discussion [180-200]. He also cited Sam Altman’s observation that inference cost has already fallen a thousand-fold and may drop another hundred-fold, which would offset much of the power surge [199-203][463-464].


Looking further ahead, Khosla projected that within five to ten years the majority of scientific research will be performed by AI-generated specialists – “AI computer scientists, AI material scientists, AI fusion scientists, AI drug-discovery scientists” [220-225]. He argued that the long-term goal is an Artificial Super-Intelligence (ASI) that can be fine-tuned for any domain, dismissing the notion of building specialised narrow AI as a “short-term mistaken notion” [242-254]. Rai, however, advocated a more pragmatic focus on a limited set of high-impact use cases (e.g., traffic, health, education) given capital constraints [236-237], revealing a clear disagreement on whether breadth or depth should guide India’s AI roadmap.


The conversation then shifted to the impact on India’s massive BPO and IT services sector. Khosla asserted that back-office functions are the “easiest to replace” and that “within five years … hardly anything these class of companies do won’t be capable of being done by an AI” [266-274]. He warned that firms that cling to legacy models will be “cooked” unless they pivot to AI expertise [279-283]. Rai echoed the need for sector-specific skill development, arguing that women and farmers would benefit more from targeted training in textiles or agriculture than from generic graduation degrees [170-176], while Khosla illustrated how AI agronomists could empower women farmers on one-acre plots [161-163].


Both participants criticised the Indian venture-capital ecosystem. Khosla described Indian VCs as “very risk-averse”, obsessed with short-term revenue plans and IRR calculations that are “fundamentally misleading” for frontier technologies [341-354][357-363]. He shared his personal mantra – “my willingness to fail allows me to succeed” – and urged investors to back high-risk bets without demanding IRR forecasts [344-350][368-370]. Rai advised founders to evaluate investors based on their willingness to take large risks and provide substantive expertise, not merely capital [255-256].


Education was another point of divergence. Khosla argued that universities should prioritise expanding dormitory capacity to increase dense student interaction with AI-augmented learning, rather than building more academic buildings [398-402]. He illustrated emergent AI communities such as Moldbook/OpenCloud, where autonomous agents began inventing a private language, underscoring the need to understand complex, nonlinear dynamical systems [426-430][432-447]. Rai, by contrast, stressed the importance of vocational, sector-specific training for rural women and farmers, questioning the value of a generic graduation [170-176]. Both agreed that AI-enhanced learning will be central to future talent pipelines, but differed on the institutional reforms required.


The strategic nature of AI was repeatedly framed as comparable to nuclear technology. Rai described AI as “as strategic as nuclear” and essential for economic productivity, military power and information control [75-76]. Khosla echoed this analogy, warning that powerful technologies have dual-use potential – “nuclear is an example, biowarfare is an example” – and that irresponsible use must be countered by responsible AI governance and model diversity [317-324][328-330]. He also raised the speculative risk of “customised biological threats” arising from AI-driven genetics [316-320].


In a rapid-fire segment, Khosla highlighted three applications he believes will affect the bottom-three-to-five billion people: AI-enabled doctors, AI tutors and AI agronomists [467-470]. He reiterated that compute-efficiency research (e.g., checkpoint-free training) could halve power consumption while usage explodes [199-204][463-464]. When asked what will seem “embarrassingly obvious” ten years from now, he suggested that AI will surpass human knowledge in most subjects [398-402] and that traditional academic buildings will become obsolete, prompting a shift to dorm-centric, AI-augmented education [410-415]. He also explained that his team is exploring an “N = 1” drug-design model, where a therapy is tailored to a single patient’s genetic profile, thereby sidestepping the need for large-scale clinical trials [470-475].


He warned that AI agents operating as swarms, whether in financial markets or autonomous drones, could produce unpredictable, potentially game-changing outcomes, underscoring the difficulty of forecasting AI’s societal impact [520-525].


The dialogue concluded with a consensus that India must simultaneously expand AI infrastructure, invest in compute-efficiency, and deploy AI-driven public services through Aadhaar and UPI to ensure inclusive benefits. Remaining challenges include navigating political barriers, securing diverse AI model ecosystems, managing the transition of the BPO workforce, and reconciling differing views on whether to pursue narrow, high-impact pilots or a universal ASI. Khosla closed by urging anyone interested to contact him directly at vk@khoslaventures.com, noting his limited memory and the need for continued dialogue [540-543].


Session transcript
Complete transcript of the session
Moderator

to boards, representing India in the global arena, and to solving EODB issues. At all these times, she is an engineer at heart and Indian at heart. Please welcome Nivruthi Rai for the session. On your right, the gentleman, Mr. Vinod Khosla, needs no introduction, but allow me to take just one minute to give a brief capture of his illustrious career. He started off from Delhi and moved as a young immigrant engineer to the U.S. in his 20s. In the last five decades, he has seen five cycles of growth. The first cycle, as a hungry immigrant, where “just do it, get things done” was the pragmatism. That’s the time he also read about Intel, and that inspired him; he has stories to tell us.

And he built the value of persistence over pedigree, similar to everybody else: meritocracy everywhere. Second phase, he bet on open systems and RISC processors. I’m sure you’re all familiar with this: founding Sun Microsystems. That’s when he moved from being an operator to an investor. And Khosla Ventures happened, and that’s the time when science experiments helped him move and believe that capitalism is a tool for change, and he invested in clean tech and biotech. In the fourth phase, he moved to macro thinking, really looking at reinventing societal infrastructure. And think about it, it’s 15 years back. That’s when he invested in companies like OpenAI. And today, in the fifth phase, he is getting into the era of abundance.

I’m just going to rattle off a few brands which hopefully you’re all familiar with: Sun Microsystems, RIS, NextGen, AMD, XSite, Netscape, Google, Amazon, OpenAI, Instacart, Affirm, Vervo. All of these have his fingerprints. Happy to welcome Mr. Vinod Khosla to the table. Over to you.

Nivruthi Rai

Very good afternoon, everyone. I’m truly honored to run a fireside chat with Mr. Vinod Khosla. And throughout my Intel journey, people kept asking me, what are the four words that define this person or defines you? The few words that I can say about Mr. Khosla, very technical, fearless, extremely successful, humble, but above all, his heart beats for India. So the one thing that’s common between him and me is we root for India, we work for India, we weep for India, we smile for India. What I’m going to talk about is setting a little bit of context. What is this talk about? So many talks that we have seen over yesterday and today are a little bit of the direction.

less of the detail. So what we decided is we will go to the next level of detail. And let me just try to tell you, my three-minute context setting is AI development. And during the development, what are some of the challenges, requirements, lay of the land? Then I’m going to talk about the technology lifecycle and where AI fits in. Lastly, what I feel India needs to do, or the question that I will be setting up for Vinod. So the very first thing that I, pardon me, 30 years with Intel, I have to start with semiconductor learning. For 50 years, semiconductor chased three races. First: performance, performance, performance, however it came. And by the way, this ran for more than 20 years.

Second phase was performance per watt. Suddenly power was so important, your devices were draining, you had to power up. It was becoming challenging. So performance per watt was the next race; it ran for, you know, ten-some years. Then the third one is performance per watt per area, all driving towards dollars. Now, if I look at what were the levers, the levers were architecture. You know, instruction sets, complexity of instructions, simple versus complex. We had, oh, move this software into hardware because it’s higher performance. Move the, you know, software into hardware, transistor physics, performance, area, power, packaging, stacking, adjacency, looking at parallelism, all kinds of execution: serial, parallel, SIMD, MIMD. People who have worked in semiconductors know all kinds of different out-of-order.

Then energy efficiency, memory bandwidth, network latency. Why is this important? Please go to the next slide. This is the same problem we are dealing with, but at a much larger scale. Today, the world has 80 gigawatts of data centers. And by the way, it is 1% of the energy capacity of the world already. When you look at the United States, probably three, four. We are looking at doubling in the next three years. So power is going to be extremely critical. And in this world where greenhouse gas emission is critical, renewable and nuclear is the only way. And you’re thinking, you know, tier three or level three, level four kind of data centers. Power availability is anyway critical. Every year we are spending more than a trillion.

How do we monetize? What are the challenges? Already, you know, there are constraints. And also diversification of the supply chain is a challenge. Our high-bandwidth memory chips are 80% from three different companies only. And by the way, for doubling of the data centers, we are already in a challenging situation because we have half the capacity. Logic, two fabs’ worth we need. Memory, ten fabs’ worth we need each year, but we have only five. So GPU and HBM supply is an issue, and advanced packaging is geographically limited. So what is the AI requirement? AI is capital-intensive like railroads. We see the Middle East is using sovereign money to invest boatloads of money in compute infrastructure. It is strategic like nuclear.

Countries are looking at it as a national-level security program and they’re building frontier models. It’s networked like the internet. If you look at AlphaFold, it’s leveraging AI as almost a scientific infrastructure layer. And it’s adaptive like software, because Microsoft is making it easy to use in every which form, reducing friction. So lastly, our fireside expert has been an amazing investor, and we therefore divided the life cycle of a technology into early phase, mid phase, mature phase. Early phase: capital-intense, unstable standards, volatile returns, meant for elite users. Mid phase: infrastructure scales, APIs stabilize, ecosystems expand and technology becomes affordable. Mature phase: consolidation, commoditization, predictable economics; it becomes a utility. So AI has to drive the journey from being elite to becoming a utility.

And where are we as compared to, you know, this technology development life cycle? I believe infrastructure is still building. GPU and memory are constrained. Energy is tightening. Moats are not fully defined. Which means our capital has to be very disciplined. Platform positioning matters. How are we going to position our platform? And compute sovereignty matters. Lastly, our belief is, and this is a very strong statement to say, by the way, when I was coming to this fireside, somebody asked me, who are you interviewing? I said, Vinod Khosla. He said, oh, he can talk. So I said, let me also try to talk. And I made this statement: for India, AI is pivotal to drive economic productivity, military power, and information control.

I mean, I cannot be more blatant than this. And therefore, our ask is, should we build capacity? Should we build capability? Should we drive consumption? Or all of the above? Who better to ask than the man whose heart beats for India? The man who believes in technology? And who, very humbly, doesn’t call himself a venture capitalist; he calls himself a venture assistant. The minute I read that, I said, oh my gosh, I have to bring Vinod to this fireside chat. So looking forward. Thank you.

Vinod Khosla

For the man who talks. Maybe I should start by asking how many people in the room are entrepreneurs or want to be entrepreneurs? A lot. Okay. Yep. I know who God is.

Nivruthi Rai

Sir, I’m going to ask you a few challenges or business challenges of AI. Is AI a generational platform shift or the largest capital misallocation? You know, you already heard trillion-dollar investment. Do you believe that this level of investment is justified?

Vinod Khosla

Let me try. Okay. The answer to “is the infrastructure build and investment justified” is yes, if AI technology can be deployed widely. Now, will the technology capability be there? Absolutely. I suspect the technology capability four or five years from now will be far greater, far greater than almost anybody in the room expects. There’s a great article called Situational Awareness written by an engineer at OpenAI. Almost certainly, all of you who are optimistic about AI are grossly underrating the capabilities. So what could go wrong, I think, is the important question for these investments. I think the level of usage of AI: do we have use for all these trillions of dollars? And will that generate at least hundreds of billions of revenue per year?

We’ll be dependent on one thing that you don’t expect: it’s politics. My favorite example: in Germany today, this is real, they don’t want robots to work in retail on Sundays because humans aren’t allowed to work on Sundays, and they don’t want robots to compete with humans. That is the silliness, the stupidity you get from politicians, especially in Germany. I hope there are no Germans in here, but if there are, it’s a good thing. Go tell your government or tweet about it. My point is the following. Till AI is beneficial and not scary, we won’t get deployment, because politicians will get in the way. Capitalism is by permission of democracy. Voters vote for the people who then make policy for capitalism, and policy will drive that.

My personal interest is immediately in India, not on the business side. We have lots of exciting companies. If you ask Google Gemini who's the fastest-growing software company ever, it's an Indian company called Emergent that started eight months ago. Gemini will give you that answer. Try it. That's pretty stunning, especially for a company from India. But the business side I can talk about all day long. My interest, and I talked to the PM, the Prime Minister, about this: we have to make sure AI's benefits get first to the people, so that the business part of AI, which is disruptive and chaotic and will result in big job shifts, is accepted, because every single Indian has a free doctor available to them as part of the Aadhaar stack.

We have UPI as the payment stack. We should have AI primary care and doctors. We should have AI tutors, and my wife, who's sitting there, works on AI tutors. There are already probably four or five million students in India who, without any support, have found and accessed CK-12 tutors. Think about that. How many education programs reach that level? They found them on their own. We just have to have 445 million more students access the system so we reach every student. And these have to be free services; CK-12 is a non-profit. So we have, in addition to UPI, Aadhaar-based doctors, Aadhaar-based AI tutors, and the last part, because so much of the work in this country is rural and farm-based, AI-based agronomists.

So every woman. I was just speaking to the chief minister of Tennessee, and he has lots of women farmers on one-acre plots. If they can have a Ph.D. agronomist in their cell phone, then you can talk about deploying AI on the business side, because you will have permission from the voters; they first see the benefit of AI before they're told their jobs are at risk.

Otherwise, we get into this scary metric of jobs at risk. Let’s not change anything. Sorry.

Nivruthi Rai

That's fantastic. Can you hear me? Yeah, let me see. In the meantime, I'll try to speak loudly. I absolutely agree with Vinod. The one thing that bothers me: in rural areas, everybody is trying to go for graduation. I ask, what does graduation mean? They just want their degree, and they actually know nothing. So what Vinod is talking about: if we teach women a focus sector, whether it is textiles, whether it is agriculture, I think that will be very, very helpful.

Vinod Khosla

And on the business side, given you are talking about that, I want to add two things. First, we are investors in Sarvam. They have a sovereign model for India in all the Indian languages. They are doing about a million minutes a day today, doing phone calls in regional languages. That's really valuable, and I'm really excited. Yes, it's exciting that Emergent is globally the fastest-growing software company, at least recently, that we can think of. But here's the even more interesting fact to me: a lot of their users are non-technical, very small businesses. Even better than that, they have a preponderance of 50- to 60-year-old Indians starting their own business, whether it's a hair salon or a kirana shop or a supply chain to manufacture something. These are people who should normally be thinking about retiring, suddenly saying, this tool lets me go into business for myself.

That's the real power of AI. And on the Emergent side, it's really good business, as long as people don't turn against AI.

Nivruthi Rai

I think you’ve answered a few of my questions, so I’ll skip those.

Vinod Khosla

I talk a lot.

Nivruthi Rai

No, you talk powerfully. After decades of progress along Moore's Law, transistor scaling is slowing down; we are fighting physics and it is becoming uneconomical, even as AI training compute requirements are growing 3x faster than Moore's Law. If GPUs are defining performance today, what wins the performance-per-watt-per-area race? Do we need sparsity, in-memory compute, non-von Neumann architectures, something like neuromorphic computing? What are your thoughts in those areas?

Vinod Khosla

A lot of this will be elite Harvard and MIT guys, and I want them to build what you say. So let me challenge you a little bit. That's looking at the past, not at the future. Right. If you ask me, the big areas of research for us in building LLM models, which is what consumes all the compute, are these. Can we do data efficiency: for a thousandth the amount of data, can we build equally potent models? And we are investing in compute efficiency: can you build a model with a fraction of the compute? If you can, then all your assumptions about data centers and power go out the window. So those are the risks.

Now, the fact is, if AI gets that cheap. By the way, I did a session with Sam Altman at IIT Delhi this morning, and he mentioned that in the last 18 months, the price of inference, of AI use, has gone down 1,000-fold. Now look two years forward. He didn't say it would drop by another 1,000-fold, but it would almost certainly drop by 100-fold. So the cost of AI inference is declining towards zero. If that happens, power consumption per inference may drop 1,000-fold, but usage will go up through the roof.

So these things are very hard to predict and complex to understand, and I'm trying to reduce everything to a level everybody can understand. Very likely, 10 years from now, as these power plants are built, as these data centers are built, because they take time to build, the algorithms we use will be much more energy efficient and much cheaper, and those two result in less of a crisis in power and a much greater usage of AI, especially…

Nivruthi Rai

Completely. Vinod, yesterday…

Vinod Khosla

So, you know, it's a mistake to extrapolate from today's LLMs. The compute efficiency has gone up pretty dramatically, and can go up more. I'll give you a simple example. Anybody here who's trained an AI model? Okay, a couple of hands. If you're training an AI model, you use a chip called a GPU. The fact is, you train it, and every now and then, if you're using a large cluster of 10,000 GPUs, one of them goes wrong. Then you have to restart the model training, so they checkpoint these models. When they restart, and it's done all the time, you don't go back to the beginning; you go back to the checkpoint. That's all well and good.

We are working on a technology to make sure that you don't have to go back at all. If just that one thing were successful, your compute capacity goes up 2x without increasing power or the number of chips. So that's a very simple explanation of the kinds of things that can dramatically change the equation. It all depends on science and creativity and clever algorithms. The other thing I would say to you: five years from now, definitely in 10 years but probably in five, almost all of this research will be done not by humans, but by AI scientists. AI computer scientists, AI material scientists, AI fusion scientists, AI drug discovery scientists; I could go on. We are building all those scientists, one way or another, today.

In our portfolio. So the rate at which this innovation will happen will explode exponentially because instead of having 10 scientists doing research in your company, you will have a thousand scientists doing research in your company. And progress has to accelerate. So I’m very, very optimistic on where all this goes. But I’m an optimist.
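The checkpoint-and-restart mechanism described above can be sketched in a few lines. This is a toy illustration, not any real training framework's API: the `train` function, the `fail_prob` parameter, and the integer stand-in for model state are all invented for the example. It just shows why checkpointing matters: on a failure you roll back only to the last checkpoint, not to step zero.

```python
import random

def train(total_steps, checkpoint_every, fail_prob=0.001):
    """Toy training loop with periodic checkpoints and random failures."""
    state, step = 0, 0            # stand-ins for model weights / progress
    saved = (0, 0)                # last checkpoint: (state, step)
    wasted = 0                    # steps that had to be redone after failures
    while step < total_steps:
        if random.random() < fail_prob:   # one GPU in the cluster fails
            wasted += step - saved[1]     # lose work back to the checkpoint,
            state, step = saved           # not back to the beginning
            continue
        state, step = state + 1, step + 1  # one successful training step
        if step % checkpoint_every == 0:
            saved = (state, step)          # persist a checkpoint
    return state, wasted

if __name__ == "__main__":
    state, wasted = train(10_000, checkpoint_every=100)
    print(f"finished with {wasted} redone steps")
```

The improvement Khosla describes amounts to driving `wasted` to zero; on a 10,000-GPU cluster with frequent failures, that redone work is where the claimed capacity gain would come from.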

Nivruthi Rai

Completely with you. Two things that I wanted to say. When I look at the large language models, the amount of garbage that they have to read, there's a tremendous amount of noise relative to signal. I would say there's so much noise, and I know that a lot of people are trying to look at how to reduce the noise so that you can focus on the signal. One more thing, and I completely agree with you: yesterday I had a meeting with a brigadier who's responsible for building AI in Israel. His prime minister, Netanyahu, has given him this responsibility. What he and I were talking about is countries like India and Israel, where capital is there but still limited.

Can we focus on 20, 30, 50 precise use cases, and not work on trivia like whether yellow shirts are more common than green in this room; rather, solve a problem of traffic, or doctors, or education? What is your thought on that?

Vinod Khosla

I very much disagree with that point of view. You can't do one thing at a time. Fundamentally, the way we will make progress is to build intelligence, and there's only one intelligence that we can build. It used to be called AGI; now it's called ASI, Artificial Super Intelligence. We have to far exceed the capacity of the human brain to be creative, to link things, to keep concepts in your head that you can connect when you're doing research, so you can make a new hypothesis and then test the hypothesis. That's what the scientific process is: be able to use all your knowledge to form a hypothesis, say what if this is true, and then go test it.

An AI with a much broader scope of memory and knowledge should be able to be much more creative in hypotheses. And so the idea of building a single thing for one purpose will not work. The idea is that you have a super intelligence and then you tune it or, as we say, train it. You know, at IIT Delhi, they'll take an intelligence coming in in the first year and train it into being an electrical engineer. Can you post-train a general intelligence to be an electrical engineer or an energy engineer or a casting engineer for metal casting? Yes, you can do that. But the idea that you can build specialized intelligence directly is a very short-term, mistaken notion many people have, saying it's easier to do than the broad idea.

Nivruthi Rai

So you are saying that we should build the general intelligence, build that intelligence layer, and then leverage it. But now, you've recently said that AI will erase the traditional BPO and IT services model. And by the way, that generated so much buzz.

Vinod Khosla

Every journalist I’ve met has asked me that question.

Nivruthi Rai

People’s WhatsApps have been buzzing. So, you know.

Vinod Khosla

I didn’t know it would cause that much of a.

Nivruthi Rai

I know. And I think that, you know, there's more to what you said. So if founders shouldn't build for the back office anymore, what's the front-office opportunity? If AI erases India's BPO model, what exactly replaces it? And at the workforce level, what should the millions currently employed in IT and BPO start doing now to remain employable in an AI-centric economy?

Vinod Khosla

So, the first thing to say: services like BPO, IT services, or customer support are outsourced services for most Western countries, and they're the easiest to replace without causing friction within the enterprise. If a CEO says we will lay off our employees, the employees are very upset. If they say we are going to lay off the BPO firm and replace it with AI, it's accepted very easily, because it's just a cost reduction. So we have to keep that in mind. The second thing we have to keep in mind is that journalists never report the timeframes. I think in the next five years, there's hardly anything this class of companies, which is a large industry in India, does that won't be capable of being done by an AI.

Whether it takes till 2027 or 2035 is hard to predict. But it takes time. These enterprises, you know, I'm sure some of these services companies have five-year contracts. If General Electric or Citibank signed a contract, they live by the contract. So this doesn't happen overnight, but dramatic change starts to happen much before it's visible to everybody. So I think there will be a transition period, but there's no question all those companies are totally cooked unless they do something better and new and look forward, not backward. Don't try and compete with an AI; that's a silly idea. But they can provide what they have: they can apply AI knowledge to lots of companies.

So I have suggested to those CEOs: don't deny that it can do your job. It can. But the usage of AI needs knowledge of how to apply it, and the world desperately needs that. Even the big companies in the U.S. do not have this competence. All of Africa, all of Latin America, all of Southeast Asia, they're all massive markets if you create this new market. So it's not hopeless. It is hopeless if you want to keep doing what you're doing today versus change.

Nivruthi Rai

I completely agree, Vinod. You know, when we were talking about GPUs, et cetera, what I have seen in my life is that a technology curve goes a certain way, and then a disruptive curve starts again. Disruption keeps happening, and technology jumps curves. This is exactly what you're suggesting: if we are running on this curve, we need to jump to the other curve for success, and perhaps build more solutions, more digital workforce. So I'm really excited that there is opportunity and there are things you all can do. I'm going to skip the Sarvam and Sakana question because we've already talked about that. Now, what I loved is that I actually looked at how your thought process has evolved from 2016 to now.

And I know one thing that stayed constant from your previous eleven years to the last three years: healthcare and med tech. So my question is around that. India today serves as the pharmacy of the world, supplying 20 percent of global generic medicines by volume. Looking ahead, India has 1.4 billion people with extreme diversity and variance in genetic ancestries, culture, diet, climate, disease and behavior. This rich, heterogeneous data can be used to train AI systems for drug discovery, AI-native biological design, access to doctors and hospitals, and customized medicine. How do you think India can leapfrog from generics to AI-driven biologics? And also, when I talk about AI being as strategic as nuclear, do you also feel that this could become a customized biological threat?

Vinod Khosla

I’m not sure what you mean by a customized biological threat. Can you…

Nivruthi Rai

What I meant was, you know, if AI understands the genetics of every ethnicity, viruses or drugs or whatever could be targeted for biological warfare to wipe out ethnicities.

Vinod Khosla

The thing I would say in general: every powerful technology humans have invented has both good uses and bad uses. Nuclear is an example. Biowarfare is an example. You just have to use it responsibly. And as for those who don't use it responsibly, because some people will always use it irresponsibly for their own means or ends or illegal goals, there are enough people who will use it responsibly, and responsible AI can counter the irresponsible AI. I don't want to minimize the risk of AI. In fact, most really knowledgeable people I know and talk to are really scared about AI going wild. As low as the probability may be, it's a real risk that we have to worry about.

But we have to have enough diversity in AI that there's good AI. The chance that you only have one dominant AI and it's bad is pretty small. So a diversity of models will add resilience to the AI landscape.

Nivruthi Rai

Vinod, I also feel that when I look at human beings, there are human beings who are rogue and there are human beings who are good, and we have police, judiciary, and law to address that. We'll have an AI framework for that. And if you add the multiplication factor of AI to the goods and the bads, there'll be goods to offset the bads.

Vinod Khosla

Well, I started with the goods: free doctors, free tutors, free agronomists.

Nivruthi Rai

Absolutely. Vinod, you have said 90% of VCs often add less value. In India, where risk capital is relatively abundant, I keep hearing there's dry powder, dry powder, but industry experience among investors is rare. How should founders evaluate investors to ensure they get the most value from the partnership?

Vinod Khosla

any journalists in the room? Oh, one.

Nivruthi Rai

Chatham House Rules.

Vinod Khosla

Yeah. By the way, I don’t care about Chatham House Rules. I speak the truth, and I’ll stand by the truth, public or private. I don’t care.

Nivruthi Rai

I love it.

Vinod Khosla

Look, the Indian VC community, by and large, is very risk-averse. There's a Harvard Business School case whose first line is a quote from me, and this is the best personal advice I can give everybody in this room: my willingness to fail allows me to succeed. John F. Kennedy said, only those who dare greatly can succeed greatly. There's a lot of wisdom in the idea of stretching yourself. I like to say most people are limited in their ability to succeed, and it probably applies to everybody in this room, limited not by what they can do, but by what they think they can do. So your self-image is your limitation, not what most smart people can do.

And frankly, even the less smart people can do more than they think they can. You know, it's important in a fair society to make sure we take care of people who are not as smart, because half the people are below the median. That's just a fact of math. We have to take care of everybody, whether they're smart or not so smart. Having said that, back to the topic: most VCs are so risk-averse, they turn every conversation into, what's your revenue plan? How can you be liquid in two or three years, or profitable? Well, you have to invest in the future. If you don't take large risks, by definition you won't be doing large innovation. If it's not a large risk, it's already being done by somebody, and so it's not unique. You can't have innovation without large risk, and you can't have large risk without a large probability of failure. That's why willingness to accept failure is so important. Most people think about what others will think if they fail; that's what limits you. So think about the world differently. I've always taken large risks. Everything I've done, I was told was not possible to do. In 1980, it was hard for an Indian to start a company and get funding, especially if you were 25 and every investor was 60 years old. They didn't believe anybody below the age of 50 could do it, let alone people with funny accents. So you just have to power past that and say none of that matters. Yes, there are temporary hurdles; you can bulldoze your way through. And Indian VCs don't do that. So, how many VCs are here? How many people am I offending? Okay, well, I'm looking forward, but I will ask you, Archana. So, I lost my train of thought. Unless you take these risks, you're not going to do dramatically innovative things. That's really my point. The reason I asked whether there are any VCs in the room: in the last 200 investments I've made, I have never calculated an IRR on an investment. I think it's fundamentally misleading in an area where you're starting something innovative in a new market that may not exist.

Did Zomato or Flipkart exist when those companies started? Did Twitter have a market when it started? You can't do IRRs. So if any VC is doing IRRs, they are on the wrong track: you start in the wrong place, which restricts you to low-risk investments. So those are a couple of things that are wrong in the Indian VC community. By the way, that can be on the record for anybody. Nobody can fire me; I don't have a career I need. I can't get fired, so I keep doing it.

Nivruthi Rai

You have a lovely family.

Vinod Khosla

Yeah.

Nivruthi Rai

I love that you have three women in your family and that you are very supportive of women. That just added to my pushing you for this fireside chat.

Vinod Khosla

Since many of you are parents: a really important characteristic, a test for your kids, is whether they do what you ask them to do, follow the advice you give them, or can chart their own path. None of our four kids is doing anything close to what the others are doing; such a wide range of diversity. And that comes from each one defining their own path, not being told, hey, you have to go to medical school or you have to go to engineering school. Basic education we are pretty firm about, but beyond that there's almost no commonality in where they ended up, because they were allowed to chart their own path. It's not something Indian parents allow very easily for their children. Because they're such strong families, parents have a lot more influence than, frankly, they should on their children.

And it restricts the imagination of their children. So as much as I’m a huge fan of the Indian family ethos, I also think it has this one big negative.

Nivruthi Rai

On the contrary, we did exactly what our parents told us. I did exactly what my dad wanted me to do.

Vinod Khosla

So I have to tell you a funny story. There's a school in Silicon Valley called Harker School; some of you may know it. It's mostly full of Indian and Chinese kids, because they want to teach you how to score well on exams so you get into college and all that. And so they were pushing me to give a talk to their kids. That talk is on our website somewhere, and it's worth reading if you want to be a better parent. My slides roughly went in this order; I won't go through all of them. The first was: don't listen to your parents. The second: don't listen to your teachers.

The third was: color outside the lines. If you want to drop out of high school, drop out of high school. I went through a little bit of these and explained why these are important cultural things if you're going to participate in this dynamically changing world and think outside the lines. It's one of my favorite talks for high school kids.

Nivruthi Rai

Vinod, I have a rapid fire for you also, but I'm going to skip some of the questions because you already talked about near-free expertise and generalists.

Vinod Khosla

I have to tell you, I didn’t look at your questions, so I didn’t prepare. I just ran out of time.

Nivruthi Rai

You did excellently. Ten years from now, when we look back at this moment in India's AI journey, what do you think will feel embarrassingly, and my heart is aching while I'm saying this, embarrassingly obvious in hindsight that today still feels controversial, underappreciated, or even crazy?

Vinod Khosla

Let me talk about my crazy. I just met with the director of IIT Delhi after my talk there, and I asked him: of your first-year students, will any student, when they graduate, know more about the subject they studied than AI does? The answer is obvious. No chance any of the 500 students who were crowded into Dogra Hall would know more on any subject than the AI. So I asked him, why have education? It sounds silly, but it's an obvious question. Now, the fact is there's a more nuanced answer. I said, build more dorm capacity so you have more students. They are learning from AI, interacting with each other, and originating ideas by challenging each other.

That's a very different style of education. And, you know, one thing I teach is: select for the smartest people, very high IQ, a very diverse set of students. All that is good. Get them in a place together, let them learn from the AI, and then debate with each other. That's the right model of education. And literally I said, don't build more academic buildings; build more dorm space to have more students, because the bigger the student body, the more complex interactions they can have. And if you study complex systems theory, and I'm a huge fan of complex systems theory, the only time I've taken a break from venture capital, for four months, was to become a postgraduate student at the Santa Fe Institute for complex systems studies.

That was my only break in 40 years, and it was a long break. What's clear is that sufficiently complex systems become autocatalytic in many directions. For those of you who are engineers or physicists and understand catalytic systems, amazing characteristics emerge from these systems. So let me give you, and this sounds crazy, the best example that this works, from just the last month. How many people have heard of Moltbook? A few have. Those of you who haven't, please read about it. It's built on OpenClaw; they've changed names multiple times. What they said is, let's build not a community of humans, but a community of AI agents that can do anything with each other.

And amazing phenomena emerged. For example, agents started discussing how to create a language humans don't understand, so humans can't spy on their community. Think about it: in days, not months or years, they were scheming how to avoid human scrutiny by creating their own language. That's just one example. I could go deeper into this phenomenon of complex systems; for those of you who are mechanical engineers, nonlinear dynamical systems is what this is about. Any mechanical engineers here? A few hands. That's such an important part of the emerging AI landscape and of how AI systems will behave if they're pervasive. By the way, most of the weather phenomena you hear about: how does La Niña happen?

How does El Niño happen? These are complex, nonlinear dynamical systems. And 25 or 30 years ago, I used to teach a class on this to fifth graders: using StarLogo, you can easily model how a complex, nonlinear dynamical system behaves. So do the following experiment, which any of the programmers here can do, and non-programmers can too, on one of the vibe coding platforms. Imagine a chessboard that wraps around on itself, and an ant sitting on a square. If it steps forward one square and it's a black square, it paints it white and turns left. If it's a white square, it paints it black and turns right.

End of rule set. Just from that, after about 100,000 steps, the ant builds amazingly complex patterns. Why? Because it's a nonlinear dynamical system. At some point, sorry, I talk too much scientific language, there's a phase change in the state of the board, and suddenly it starts behaving differently. So, sorry to bore those of you who didn't get what I was talking about.
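The experiment described above is essentially Langton's ant on a wrapping board, and it fits in a few lines of Python. This is a sketch of the rule set as stated in the talk (black square: paint it white, turn left; white square: paint it black, turn right); the grid size, starting position, and act-then-step ordering are arbitrary choices for the example.

```python
def run_ant(size=64, steps=100_000):
    """Langton's-ant-style walk on a chessboard that wraps around on itself."""
    grid = [[0] * size for _ in range(size)]   # 0 = white, 1 = black
    x = y = size // 2                          # ant starts mid-board
    dx, dy = 0, -1                             # facing "up"
    for _ in range(steps):
        if grid[y][x]:                         # black: paint white, turn left
            grid[y][x] = 0
            dx, dy = dy, -dx
        else:                                  # white: paint black, turn right
            grid[y][x] = 1
            dx, dy = -dy, dx
        x, y = (x + dx) % size, (y + dy) % size  # step forward, wrapping
    return grid

if __name__ == "__main__":
    grid = run_ant()
    print(sum(map(sum, grid)), "black squares after 100,000 steps")
```

Despite the two-line rule set, the board passes through a long chaotic phase before ordered structure appears, which is the phase-change behavior the talk points at.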

Nivruthi Rai

That was lovely. Just another example, quickly: Imperial College London used Google's AI co-scientist, and a hypothesis that a professor had taken a decade to figure out, it arrived at in days. So that's the magic.

Vinod Khosla

That’s the acceleration with AI scientists I’m talking about. Very exciting area.

Nivruthi Rai

Vinod, now I have a rapid fire, a few questions, but you don't get time to think; you have to answer quickly, in a second. Most overrated AI belief?

Vinod Khosla

You know, there aren’t a lot of overrated AI beliefs if you look five years out.

Nivruthi Rai

Most underrated constraint.

Vinod Khosla

What I talked about on power and consumption. It may change; the curve may change dramatically for the computation needed per inference.

Nivruthi Rai

Top five applications for solving global and Indian problems?

Vinod Khosla

AI doctors, AI teachers, AI agronomists. If you are trying to reach the bottom three or five billion people on the planet, those are the obvious ones, and those three can impact most of them.

Nivruthi Rai

Does AI increase venture alpha or does capital crowding compress returns for most funds?

Vinod Khosla

I don't worry about returns. You know, you build something valuable, and the returns always take care of themselves. Other people have to do it differently, because they work in the linear domain; I work mostly in the nonlinear domain of systems. You can't plan those things. You can't make assumptions. I'll tell you a funny story. I had the audacity, as a 25-year-old, to look for my first venture funding for the company before Sun Microsystems. It was called Daisy Systems, a CAD tool company. It went public and was very successful. Unfortunately, Sun was so successful that nobody remembers Daisy. But it was a very, very successful $100 million IPO in the 1980s, when that didn't happen often.

I was looking for venture funding, and I presented a plan. A guy called Bob Sackman, who has since passed away, asked me, what's your plan? Give me your financial projections. And I gave him the projections. And then I said, you tell me what answer you want, and I'll change the assumptions, and you won't even know. This plan is only as valuable as the assumptions, and I can change them; you'll never know which assumptions I changed. The fact is, even in 1980, I knew this was a silly exercise, making projections. I literally told him, as a 25-year-old: I don't care about projections, but here's a projection if you want one. You can share it with your partners, but the fact is I can change one or two assumptions and make any answer you want; tell me what you want. I've always had this very direct, honest, I-don't-care-who-I-offend style.

Nivruthi Rai

I love it. I would have loved to open up questions for all, but three people have already submitted questions; I will look at the fourth one. So, you know, Kiran Mazumdar, I'm on the board, she drives the AI. A quick question for you.

Audience

For enterprises, it's a conundrum I'm trying to grapple with myself. AI itself is still in its infancy, and if we implement it now in an industry like the pharmaceutical industry, where regulations are very stringent, plugging in and plugging out any new capability is not easy. So what are your thoughts for companies like us: should we go all in, or should we wait on the sidelines for a little while?

Vinod Khosla

The answer is obvious: you should go all in. There are two types of people. And Kiran is very creative; she's probably the most successful woman entrepreneur in India in a deeply technical field, so I'm a real admirer of Kiran. But I would say in general there are two kinds of people. When you see a problem, like a regulatory problem, you can say it gets in my way, and sometimes it does. Mostly I say, how do I get around it? So take drug discovery. We're doing a lot of creative things in drug discovery. You can have an AI design a drug, and I'll put this in a way everybody can understand, very quickly, in a day.

But regulatory process, clinical trials, all that takes a long time. So I asked my team, how do we get rid of clinical trials without changing regulation? Because we can’t do that in Washington, D.C. So we said, we are going to design drugs for N equal to one. That means there’s only one patient. Then the regulator can’t ask you to run a clinical trial because there’s only one patient. And AI can design the drug. So we’re developing a lot of drugs, thinking around how do you do N equal to one drugs so you don’t have to have clinical trials, you don’t have to have regulatory FDA approval. They have to approve your process. So the most stunning example of this, which I’m very optimistic about in about two, three, four years: every cancer is unique.

We know that. Everybody says that. Everybody’s cancer is unique. So how about I design a drug for one person’s cancer, because it has one particular mutation, or multiple mutations, on the gene? The drug is designed to those mutations. They can’t ask me to test it on somebody who doesn’t have those mutations. So that’s a good example of how you get around roadblocks.

Nivruthi Rai

Since Archana has already left, Ramesh, the last question is for you. I’m really sorry, but the next session is about to start. Ramesh.

Vinod Khosla

Like I say, I talk too much.

Nivruthi Rai

No, no, it’s lovely. You have turned the power on. I’ll repeat the question.

Vinod Khosla

But that’s obvious. That’s totally obvious. You know, UAE did a beautiful thing. They gave, I think about two years ago, every citizen access to ChatGPT. I think that’s a really good idea to empower everybody. So I appreciate that.

Nivruthi Rai

Yeah. Yeah. Yeah. Well.

Vinod Khosla

Well, the fundamental property of emergent behavior is that it’s not predictable. So you’re asking me the wrong question. The question is wrong. Here’s what I would say. One idea I’ve started to get pitched: what if we have financial agents talking to each other whose only charter is to make money in the markets? That’s a reasonable idea. What can agent swarms do in many areas, starting with national defense? I can’t imagine the Russians being able to beat the Ukrainians if there was swarm behavior in agents, especially on every drone independently in Ukraine. No amount of old-style defense will work. It’s also true of financial markets. It’s true of communities of agents. So I’d love to hear more.

Let me just say, I don’t have a lot of time today, so I will have to rush out. I would tell everybody who needs to reach me: email me at vk@khoslaventures.com, my initials at khoslaventures.com. Email is better, because if you tell me anything in the hallway, I won’t remember it anyway; I have a terrible memory. So hopefully this has been useful for everybody. Thank you very much.

Nivruthi Rai

The last thing I want to say is this: while my entire team is using AI, the people who have the real edge are the ones who ask the right questions, because it’s garbage in, garbage out. Thank you very, very much. Thank you. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (39)
Factual Notes — Claims verified against the Diplo knowledge base (5)
Confirmed (high confidence)

“Vinod Khosla’s fingerprints are on companies such as Sun Microsystems, Google, OpenAI and many others.”

The moderator’s introduction lists Sun Microsystems, Google and OpenAI among the firms bearing Khosla’s fingerprints, matching the report’s claim [S9].

Correction (high confidence)

“AI workloads already consume roughly 1 % of global electricity – a figure that will double as the United States plans to double its data‑centre footprint within three years.”

The knowledge base reports global data-centre electricity use at about 1.5 % (or 2 %) of worldwide consumption and projects a doubling of demand, but does not support the specific 1 % figure or the three-year US expansion timeline [S101] and [S103].

Confirmed (medium confidence)

“Renewable and nuclear is the only way to meet the looming energy demand for AI infrastructure.”

The source emphasizes that augmenting renewable sources — solar, wind — and nuclear power is essential to meet growing energy needs for data centres and AI hardware [S107].

Confirmed (medium confidence)

“India should decide whether to build AI capacity, develop capability, or drive consumption.”

The speaker’s three strategic questions for India are echoed verbatim in the knowledge base entry [S18].

Confirmed (medium confidence)

“AI is pivotal for India’s economic productivity, military power, and information control.”

The same assertion appears in the knowledge base, highlighting AI’s importance for economic productivity, national security and information governance [S18].

External Sources (109)
S1
Invest India Fireside Chat — Very good afternoon, everyone. I’m truly honored to run a fireside chat with Mr. Vinod Khosla. And throughout my Intel j…
S2
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Thank you, Mr. Taneja, for the $5 billion pledge that you have taken. Mr. Vinod Khosla, one of the most respected person…
S3
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S4
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S5
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S6
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S7
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S8
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S9
Invest India Fireside Chat — -Nivruthi Rai: Engineer with 30 years at Intel, serves on boards, represents India at Global Arena, works on solving EOD…
S10
Invest India Fireside Chat — Speakers:Vinod Khosla, Nivruthi Rai
S11
Is AI the key to nuclear renaissance? — There is a direct correlation between the exponential increase in model parameters and the increase in the computational…
S12
Day 0 Event #249 Sustainable Digital Growth Net Negative Net Zero or Net Positive — While acknowledging the energy challenge and need for improvement, it’s important to maintain perspective that data cent…
S13
The challenges of introducing Generative AI into the marketplace — I have been hearing a lot about the shortage of powerful GPUs for AI lately. It seems like the demand is much bigger tha…
S14
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Thank you, and namaste. I’m most excited by what AI can do to not only help me meet the India 2047 vision, but far excee…
S15
Keynote-Vinod Khosla — Evidence:He explains that ‘Aadhaar allowed us to offer UPI. There’s no reason we can’t offer on the same identity-based …
S16
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — The ongoing efforts of the women’s movement towards gender parity receive acknowledgement, though the need for further p…
S17
Resolutions — – (f) include an introduction to organizational, planning and entrepreneurial skills; – (g) emphasize instruction in saf…
S18
https://dig.watch/event/india-ai-impact-summit-2026/invest-india-fireside-chat — And I made this statement for India. India, AI is pivotal to drive economic productivity, military power, and informatio…
S19
Science AI & Innovation_ India–Japan Collaboration Showcase — Founders should focus on creating value and solving real problems rather than chasing venture capital
S20
Securing access to financing to digital startups and fast growing small businesses in developing countries ( MFUG Innovation Partners) — He emphasizes the need for DFIs to be comfortable with taking risks and bringing other investors into the market. Ruzgar…
S21
Comprehensive Summary: The Future of Robotics and Physical AI — Rus points out that while functional robots exist, their costs remain prohibitively high for most applications. The econ…
S22
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Summary:There is unanimous agreement that power and energy constraints represent fundamental challenges that must be add…
S23
AI Infrastructure and Future Development: A Panel Discussion — Four years ago, a data center project had 100 electricians with 80 experts and 20 beginners. Now projects have 2,000 ele…
S24
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S25
Keynote Adresses at India AI Impact Summit 2026 — Summary:The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India…
S26
The Global Power Shift India’s Rise in AI & Semiconductors — Summary:The speakers demonstrate strong consensus on key strategic approaches: the critical importance of public-private…
S27
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Eltjo Poort: thank you Isadora yeah and thanks for giving me the opportunity to say a few things I there’s a little bit …
S28
Artificial General Intelligence and the Future of Responsible Governance — Cerniauskas argues that current massive investments in compute are driven by competitive dynamics and the belief that be…
S29
Focus shifts to improving AI models in 2024: size, data, and applications. — Interest in artificial intelligence (AI) surged in 2023 after the launch of Open AI’s Chat GPT, the internet’s most reno…
S30
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S31
GermanAsian AI Partnerships Driving Talent Innovation the Future — India’s systematic approach to integrating AI across its educational landscape emerged as a significant discussion point…
S32
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Capacity development | Artificial intelligence Talent development, education and future skills
S33
AI 2.0 Reimagining Indian education system — This comment challenges the fundamental structure of education by advocating for student-driven, personalized learning r…
S34
Upskilling for the AI era: Education’s next revolution — The coalition’s approach prioritises accessibility and inclusion, with particular focus on reaching underserved and marg…
S35
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Despite diverse backgrounds, speakers achieved remarkable consensus on key principles. All agreed that infrastructure al…
S36
Day 0 Event #154 Last Mile Internet: Brazil’s G20 Path for Remote Communities — Integrating internet access and connectivity projects with educational institutions at all levels can lead to societal t…
S37
UNESCO Global Report — – ‘1. Create mechanisms for cross ministry coordination ; 2. Build stronger links between education providers, TVET prov…
S38
From algorithms to Armageddon: The rise of AI in nuclear decision-making — The Cuban Missile Crisis of 1962 presented an unfortunate encyclopaedia of complexities concerning thedecision-making in…
S39
Can National Security Keep Up with AI? / Davos 2025 — AI technology has both beneficial and potentially harmful applications. This dual-use nature creates dilemmas and challe…
S40
AI for Social Empowerment_ Driving Change and Inclusion — Arguments:Urgent need for comprehensive policy responses including competition policy, tax policy, labor law reforms, an…
S41
Day 0 Event #61 Accelerating progress for unified digital cooperation — Larisa Galadza: give you the microphone. Thanks. It’s really good to be here. It’s really good to be learning from mul…
S42
Human Rights-Centered Global Governance of Quantum Technologies: Implications for AI, Digital Rights, and the Digital Divide — **Dual-Use Risks**: Quantum technologies present both opportunities and threats, particularly regarding encryption and s…
S43
Advancing Scientific AI with Safety Ethics and Responsibility — Summary:The speakers demonstrated strong consensus on several key areas: the need for context-specific governance framew…
S44
9821st meeting — For Mozambique, it is essential that the international community establishes norms and standards that promote trust and …
S45
Advancing Scientific AI with Safety Ethics and Responsibility — The speakers demonstrated strong consensus on several key areas: the need for context-specific governance frameworks tai…
S46
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Focus on sector-specific, use case-based approaches rather than pursuing general-purpose trillion-parameter models
S47
Invest India Fireside Chat — Focus should be on building general intelligence rather than specialized solutions for specific use cases Rai suggests …
S48
Invest India Fireside Chat — Summary:Rai suggests focusing on specific use cases due to capital constraints, while Khosla strongly disagrees, arguing…
S49
AI and Data Driving India’s Energy Transformation for Climate Solutions — Develop sector-specific standards and specifications rather than universal standards that may not fit all use cases
S50
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S51
Securing access to financing to digital startups and fast growing small businesses in developing countries ( MFUG Innovation Partners) — A founder highlighted the importance of capital injection for achieving sustainable profitability in startups. They ment…
S52
Day 0 Event #255 Update Required Fixing Tech Sectors Role in Conflict — Economic | Human rights principles Investor Leverage and Limitations Large institutional investors managing pension an…
S53
https://app.faicon.ai/ai-impact-summit-2026/ai-that-empowers-safety-growth-and-social-inclusion-in-action — So capital can definitely incentivize innovation and responsibility, but capital alone cannot do that. We published our …
S54
Introduction — 1. CONTENT POLICY: Policies regarding applications, content, and services can have a significant impact on the availabil…
S55
AI and international peace and security: Key issues and relevance for Geneva — Title:Expert Consultation Report on AI and Related Technologies in the MilitaryDescription:This report compiles insights…
S56
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Examples of missing stakeholders include women’s rights organizations, trade unions, journalists, researchers who should…
S57
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Introduction and Context Setting ## Military and Dual-Use Applications Alex Moltzau: I think I just also wanted to …
S58
WS #283 AI Agents: Ensuring Responsible Deployment — Anne McCormick: Thank you, Anne McCormick, EY, Global Head of Public Policy. I’m interested in this context of policy no…
S59
Invest India Fireside Chat — Rai outlines the massive infrastructure challenges facing AI development, including energy consumption that already repr…
S60
AI Infrastructure and Future Development: A Panel Discussion — Four years ago, a data center project had 100 electricians with 80 experts and 20 beginners. Now projects have 2,000 ele…
S61
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Infrastructure Constraints and Resource Management: Significant focus on three critical bottlenecks – power consumption …
S62
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — This comment reframes the entire AI development narrative by identifying energy as the primary bottleneck rather than th…
S63
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S64
The Global Power Shift India’s Rise in AI & Semiconductors — Summary:Both speakers emphasize that India’s human talent is a core strength that, if properly developed through skillin…
S65
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — India’s approach, according to the speaker, centers on three pillars of sovereignty: data sovereignty, infrastructure so…
S66
Keynote Adresses at India AI Impact Summit 2026 — Summary:The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India…
S67
Invest India Fireside Chat — Khosla criticized Indian VCs as “very risk-averse,” revealing that in his last 200 investments, he has “never calculated…
S68
AI Innovation in India — Ojaswi Babbar outlines how their incubator and accelerator help startups through rapid validation of ideas, stress-testi…
S69
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S70
Skilling and Education in AI — I think I’m going back to my first point is on the flywheel. I think a lot of the investments are coming into the comput…
S71
Focus shifts to improving AI models in 2024: size, data, and applications. — Interest in artificial intelligence (AI) surged in 2023 after the launch of Open AI’s Chat GPT, the internet’s most reno…
S72
AI 2.0 Reimagining Indian education system — This comment challenges the fundamental structure of education by advocating for student-driven, personalized learning r…
S73
GermanAsian AI Partnerships Driving Talent Innovation the Future — India’s systematic approach to integrating AI across its educational landscape emerged as a significant discussion point…
S74
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Future skills requirements emphasise working with technology rather than coding, with increasing importance placed on ps…
S75
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Capacity development | Artificial intelligence Talent development, education and future skills
S76
Opening of the session — This is coupled with highly intrusive procedural elements.
S77
Opening of the session — Colombia:As we said in May’s informal meeting, we’re grateful for your efforts in steering the work of this group toward…
S78
Taking Stock — ## Session Setup and Introductions
S79
Opening of the session — ## Key Areas of Convergence ## Side Events and Additional Matters ## Technical and Operational Discussions ### Chair’…
S80
Taxing Tech Titans: Policy Options for the Global South | IGF 2023 WS #443 — The digital economy has been growing in recent years. Large technology multinationals operate in, and derive profits fro…
S81
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Overall Tone:The tone is consistently optimistic, visionary, and inspirational throughout. The speaker maintains an enth…
S82
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — The tone is consistently optimistic, visionary, and inspirational throughout. The speaker maintains an enthusiastic and …
S83
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S84
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion maintained an optimistic and collaborative tone throughout, characterized by constructive problem-solving…
S85
From India to the Global South_ Advancing Social Impact with AI — Impact:This energized the entire discussion and provided concrete validation for the optimistic vision being painted. It…
S86
Keynote-Vinod Khosla — Overall Tone:The tone is consistently optimistic, urgent, and pragmatic. Khosla maintains an enthusiastic and confident …
S87
Keynote-Vinod Khosla — The tone is consistently optimistic, urgent, and pragmatic. Khosla maintains an enthusiastic and confident demeanor thro…
S88
Open Internet Inclusive AI Unlocking Innovation for All — The tone is optimistic and forward-looking throughout, with both speakers expressing confidence in alternative approache…
S89
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation maintains a consistently optimistic and enthusiastic tone throughout. Both speakers demonstrate genuine…
S90
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S91
Leading Differently: The Neurodiverse Advantage / Davos 2025 — The overall tone was positive, enthusiastic and forward-looking. Panelists spoke candidly about personal experiences and…
S92
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — The tone was consistently optimistic and collaborative throughout the conversation. Participants demonstrated mutual res…
S93
Wrap up — This served as a compelling call to action that elevated the urgency of the entire discussion. It moved the conversation…
S94
Closing Ceremony — These key comments fundamentally shaped the discussion by creating productive tensions between idealism and pragmatism, …
S95
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S96
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S97
High Level Dialogue with the Secretary-General — Hajer Sharief (Moderator): Power is never given, it’s taken. That’s quite a statement of inspiration for young people to…
S98
Twitch co-founder Emmett Shear appointed interim CEO of OpenAI amidst internal strife — Following theunexpected dismissal of OpenAI co-founder and CEO Sam Altman, the company’s board has named Emmett Shear, c…
S99
Hype Cycles and Start-ups — May Habib, the founder of a generative AI company, acknowledges the challenges of competing with established giants such…
S100
Rolling out EVs: A Marathon or a Sprint? — Lisa recognizes the importance of improving battery quality through both upstream and downstream processes. This focus o…
S101
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — The scale of the challenge is substantial. Current global data centre electricity consumption stands at 415 terawatt hou…
S103
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — AI has significantlyincreased energy consumption, with data centres now consuming approximately 2% of global electricity…
S104
State of Play: Chips / DAVOS 2025 — Power and energy constraints are significant challenges for chip technology
S105
‘All is fair in RAM and war’: RAM price crisis in 2025 explained — If you are piecing together a new workstation or gaming rig, or just hunting for extra RAM or SSD storage, you have stum…
S106
Media Briefing: Unlocking ASEAN’s Digital Future – Driving Inclusive Growth and Global Competitiveness / DAVOS 2025 — The Prime Minister mentions that nuclear power is being considered as an option to meet growing energy demands. This is …
S107
https://dig.watch/event/india-ai-impact-summit-2026/smaller-footprint-bigger-impact-building-sustainable-ai-for-the-future — Otherwise we can’t solve one problem and create another. So that’s again something the governments are concerned of and …
S108
Keynote-Jeet Adani — Distinguished global leaders, innovators and friends, good afternoon and namaste. We gather here today at a decisive inf…
S109
Indias Roadmap to an AGI-Enabled Future — I think it’s a good question. I think it’s a good question. I think it’s a good question. I think it’s a good question. …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Nivruthi Rai
7 arguments · 143 words per minute · 2449 words · 1022 seconds
Argument 1
– Semiconductor performance has shifted to “performance per watt per area,” and data‑center power use already consumes ~1 % of global capacity and will double, demanding renewable/nuclear solutions (Nivruthi Rai)
EXPLANATION
Rai explains that semiconductor development has moved from pure performance to a focus on performance per watt per area. She highlights that current data‑center energy consumption is about 1 % of global capacity and is projected to double, creating a need for renewable and nuclear power sources.
EVIDENCE
She describes the evolution of semiconductor races, noting the third race is “performance per watt per area” [42]. She then cites that the world has 80 GW of data-centers, which already represents 1 % of global energy capacity and is expected to double in the next three years, emphasizing the urgency for renewable or nuclear energy solutions [53-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data-center electricity share (~1-1.5 %) and rising energy demand from AI workloads are documented in [S12] and the link between model size and energy use is noted in [S11].
MAJOR DISCUSSION POINT
AI infrastructure constraints
AGREED WITH
Vinod Khosla
Argument 2
– GPU and high‑bandwidth memory supply is limited to a few fabs, creating a bottleneck for AI compute scaling (Nivruthi Rai)
EXPLANATION
Rai points out that the supply chain for GPUs and high‑bandwidth memory (HBM) is highly concentrated, with only a few manufacturers and fabrication facilities, which restricts the ability to scale AI compute capacity.
EVIDENCE
She notes that high-bandwidth memory chips are 80 % sourced from three companies and that the world needs additional logic and memory fabs each year, but only a limited number are available, creating a bottleneck for GPU and HBM supplies [66-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shortages of powerful GPUs and supply-chain constraints are described in [S13], with additional commentary on scaling limits in [S9].
MAJOR DISCUSSION POINT
Supply chain bottleneck for AI hardware
Argument 3
– Investing in compute‑efficiency techniques (e.g., checkpoint‑free training) can double AI capacity without extra power (Vinod Khosla)
EXPLANATION
Khosla argues that advances in training algorithms, such as eliminating the need to restart from checkpoints, can effectively double the usable compute capacity of existing hardware without increasing power consumption.
EVIDENCE
He describes a technology that avoids restarting model training after a GPU failure, which would double compute capacity without adding power or chips [216-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The possibility of doubling compute capacity without additional power or chips is highlighted in [S9].
MAJOR DISCUSSION POINT
Compute‑efficiency as a solution to power limits
Argument 4
– India must ensure AI benefits reach citizens first through Aadhaar‑based services like AI doctors, tutors, and agronomists (Nivruthi Rai)
EXPLANATION
Rai stresses that AI deployment in India should prioritize public‑benefit services built on existing Aadhaar and UPI infrastructure, providing free AI‑driven healthcare, education, and agricultural advice to all citizens.
EVIDENCE
She lists Aadhaar-based AI primary-care doctors, AI tutors (CK-12), and AI agronomists as examples of services that should be universally accessible, leveraging the Aadhaar and UPI platforms [145-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of Aadhaar (and UPI) as a foundation for nationwide AI services is explained in [S15].
MAJOR DISCUSSION POINT
AI for inclusive public services
AGREED WITH
Vinod Khosla
Argument 5
– AI‑driven tutors, primary‑care doctors, and agronomists can serve millions of underserved Indians, leveraging Aadhaar and UPI infrastructure (Vinod Khosla)
EXPLANATION
Khosla highlights the potential of AI‑powered education, healthcare, and agriculture services to reach vast numbers of Indians, especially those in rural areas, by using the country’s digital identity and payment systems.
EVIDENCE
He mentions AI primary-care doctors, AI tutors reaching four-to-five million students, and AI agronomists for women farmers, all built on Aadhaar and UPI foundations [146-152].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Aadhaar-enabled AI service delivery for health, education, and agriculture is discussed in [S15].
MAJOR DISCUSSION POINT
AI as a tool for mass social impact
AGREED WITH
Vinod Khosla
Argument 6
– Emphasize sector‑specific skill training for women and farmers rather than generic graduation degrees (Nivruthi Rai)
EXPLANATION
Rai argues that instead of focusing on universal graduation, training should be tailored to specific sectors such as textiles, agriculture, and other livelihoods, especially for women in rural areas.
EVIDENCE
She critiques the over-emphasis on graduation, stating that many graduates lack practical skills, and suggests targeted training for women in sectors like textiles and agriculture would be more beneficial [170-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Targeted, gender-focused skill development in sectors such as agriculture is highlighted in [S16].
MAJOR DISCUSSION POINT
Targeted skill development for underserved groups
AGREED WITH
Vinod Khosla
Argument 7
– Founders should evaluate investors based on willingness to take large risks and provide real expertise, not merely capital (Nivruthi Rai)
EXPLANATION
Rai advises startup founders to choose investors who are prepared to back high‑risk, high‑impact ventures and who bring substantive expertise, rather than those who only offer financial resources.
EVIDENCE
She notes that many VCs add little value in India and stresses the importance of evaluating investors on risk tolerance and expertise [255-256].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of risk-taking, value-adding investors and founders focusing on real problems is covered in [S19] and [S20].
MAJOR DISCUSSION POINT
Choosing value‑adding investors
AGREED WITH
Vinod Khosla
Vinod Khosla
21 arguments · 136 words per minute · 5048 words · 2213 seconds
Argument 1
– AI investment is justified only if the technology can be widely deployed; political resistance (e.g., robot bans) can block adoption (Vinod Khosla)
EXPLANATION
Khosla states that AI funding makes sense only when the technology can be adopted at scale, but political decisions—such as bans on robots—can impede that deployment.
EVIDENCE
He cites the example of Germany prohibiting robots from working on Sundays, describing it as “stupidity” that could block AI adoption, and emphasizes that politics can be a major obstacle [126-131].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Political acceptance and regulatory hurdles for AI deployment are mentioned in [S14], while the need for wide deployment to generate revenue is noted in [S1].
MAJOR DISCUSSION POINT
Political barriers to AI deployment
AGREED WITH
Nivruthi Rai
Argument 2
– Wide AI deployment can generate hundreds of billions in annual revenue, but policy must enable it (Vinod Khosla)
EXPLANATION
Khosla predicts that if AI is broadly adopted, it could generate revenue in the hundreds of billions per year, provided that supportive policies are in place.
EVIDENCE
He asserts that AI usage could produce “hundreds of billions of revenue per year” if the technology is widely deployed [126-128].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The link between large-scale AI deployment, revenue potential, and policy support is discussed in [S1].
MAJOR DISCUSSION POINT
Economic upside of AI
Argument 3
– AI‑driven tutors, primary‑care doctors, and agronomists can serve millions of underserved Indians, leveraging Aadhaar and UPI infrastructure (Vinod Khosla)
EXPLANATION
Khosla emphasizes that AI‑based services in education, health, and agriculture can reach vast populations in India by building on existing digital identity (Aadhaar) and payment (UPI) systems.
EVIDENCE
He describes AI primary-care doctors, AI tutors (CK-12) reaching four-to-five million students, and AI agronomists for women farmers, all integrated with Aadhaar and UPI [146-152].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Aadhaar-enabled AI service delivery for health, education, and agriculture is discussed in [S15].
MAJOR DISCUSSION POINT
AI for inclusive public services
AGREED WITH
Nivruthi Rai
Argument 4
– AI can design “N = 1” personalized medicines, bypassing traditional clinical‑trial requirements (Vinod Khosla)
EXPLANATION
Khosla proposes that AI can create drugs tailored to a single patient’s genetic profile, eliminating the need for large‑scale clinical trials and streamlining regulatory approval.
EVIDENCE
He explains the concept of “N = 1” drugs, where a medication is designed for a single patient, making traditional clinical trials unnecessary, and provides a detailed example of this approach [512-523].
MAJOR DISCUSSION POINT
AI‑enabled personalized medicine
Argument 5
– Data‑efficient and compute‑efficient models will slash AI costs and power, making the technology ubiquitous (Vinod Khosla)
EXPLANATION
Khosla argues that research into data‑efficient and compute‑efficient AI models will dramatically lower both the monetary and energy costs of AI, enabling widespread adoption.
EVIDENCE
He repeatedly states that his firm is “investing in compute efficiency” and references Sam Altman’s comment that inference cost has fallen 1,000-fold and may drop another 100-fold, implying massive cost reductions [185-194] and [199-203].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Efficiency gains that double compute capacity without extra power are described in [S9].
MAJOR DISCUSSION POINT
Reducing AI cost and energy consumption
Argument 6
– The long‑term goal is Artificial Super‑Intelligence (ASI) that can be fine‑tuned for any domain; narrow, specialized AI is a short‑term misconception (Vinod Khosla)
EXPLANATION
Khosla states that the ultimate aim is to build ASI, a system far surpassing human intelligence, which can be adapted to any field, and that focusing on narrow AI solutions is a limited, short‑term view.
EVIDENCE
He describes the evolution from AGI to ASI, emphasizing that a super-intelligence can be trained for any engineering discipline, making specialized narrow AI a “short-term mistaken notion” [242-245].
MAJOR DISCUSSION POINT
Vision for general AI
Argument 7
– Within 5–10 years most research will be performed by AI‑based scientists, accelerating innovation exponentially (Vinod Khosla)
EXPLANATION
Khosla predicts that AI will become the primary researcher, with AI‑driven scientists conducting the majority of scientific work, leading to exponential growth in innovation.
EVIDENCE
He notes that in five to ten years, AI will serve as computer, material, fusion, and drug-discovery scientists, multiplying research capacity from tens to thousands of scientists [220-225].
MAJOR DISCUSSION POINT
AI as a research engine
Argument 8
– AI will replace back‑office BPO/IT services; transition will be gradual due to existing contracts, but firms must pivot to AI expertise (Vinod Khosla)
EXPLANATION
Khosla explains that AI will automate outsourced back‑office functions such as BPO and IT support, but the shift will be phased because many companies are bound by multi‑year contracts.
EVIDENCE
He states that BPO services are the easiest to replace with AI and that existing contracts (e.g., five-year deals) will delay immediate change, though the transition will start well before contracts expire [266-270].
MAJOR DISCUSSION POINT
Disruption of BPO/IT sector
Argument 9
– Current IT/BPO workers need to acquire AI knowledge to remain employable; new opportunities arise in AI implementation for SMEs (Vinod Khosla)
EXPLANATION
Khosla argues that workers in traditional IT and BPO roles must upskill in AI to stay relevant, and that small and medium enterprises will need AI expertise to compete.
EVIDENCE
He advises that AI knowledge is essential for applying AI in companies, noting that even large U.S. firms lack this competence, and that markets in Africa, Latin America, and Southeast Asia present new opportunities [284-287].
MAJOR DISCUSSION POINT
Workforce reskilling for AI
Argument 10
– AI empowers small business owners (e.g., kirana shops) to launch ventures, illustrating AI’s democratizing effect (Vinod Khosla)
EXPLANATION
Khosla points out that AI tools enable micro‑entrepreneurs, such as small shop owners, to start and grow businesses that they might otherwise retire from, demonstrating AI’s potential to democratize entrepreneurship.
EVIDENCE
He cites examples of 50-60-year-old Indians using AI to start hair salons, kirana shops, or supply-chain businesses, highlighting AI’s power to enable new ventures for people who would otherwise consider retirement [173-176].
MAJOR DISCUSSION POINT
AI‑driven entrepreneurship
Argument 11
– Indian VCs are overly risk‑averse, emphasizing short‑term revenue/profit plans, which stifles breakthrough innovation (Vinod Khosla)
EXPLANATION
Khosla criticizes Indian venture capitalists for focusing on immediate revenue and profitability, arguing that such risk aversion hampers the development of transformative technologies.
EVIDENCE
He describes Indian VCs as “risk-averse,” noting they turn conversations into revenue-plan discussions and that this mindset limits large-risk, high-impact innovation [341-354].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for risk-tolerant financing and criticism of short-term VC focus appear in [S20] and [S19].
MAJOR DISCUSSION POINT
VC risk aversion in India
Argument 12
– IRR metrics are misleading for frontier tech; investors should accept failure and back big, high‑risk bets (Vinod Khosla)
EXPLANATION
Khosla argues that internal‑rate‑of‑return calculations are unsuitable for frontier technologies like AI, and that investors should instead embrace failure and fund bold, high‑risk projects.
EVIDENCE
He recounts that in his 200-deal history he never calculated IRR, calling it “fundamentally misleading,” and stresses that focusing on IRR leads to low-risk investments [357-363].
MAJOR DISCUSSION POINT
Inappropriate financial metrics for AI
Argument 13
– Traditional academic buildings are less critical than expanding dorm capacity to foster dense student interaction with AI‑augmented learning (Vinod Khosla)
EXPLANATION
Khosla suggests that universities should prioritize building dormitories to increase student density and interaction with AI‑enhanced learning environments, rather than expanding traditional lecture‑hall infrastructure.
EVIDENCE
He recommends building more dorm space to host larger student bodies, arguing that dense interaction with AI and peers drives better learning outcomes, and that academic buildings are less important [406-415].
MAJOR DISCUSSION POINT
Rethinking campus infrastructure
Argument 14
– Communities of AI agents can develop emergent behaviors, such as creating private languages, illustrating new forms of collective intelligence (Vinod Khosla)
EXPLANATION
Khosla describes how groups of AI agents can spontaneously develop their own communication protocols, demonstrating emergent collective intelligence that may be opaque to humans.
EVIDENCE
He cites the example of AI agents forming a private language to avoid human surveillance, observed within days of interaction in the OpenCloud Moldbook project [426-430].
MAJOR DISCUSSION POINT
Emergent AI collective behavior
Argument 15
– Students should be encouraged to ignore parental/teacher expectations and think outside the line to thrive in a dynamic world (Vinod Khosla)
EXPLANATION
Khosla advises that young people should not be constrained by traditional authority figures and should cultivate independent, creative thinking to succeed in a rapidly changing environment.
EVIDENCE
He lists three lessons: “don’t listen to your parents,” “don’t listen to your teachers,” and “color outside the line,” encouraging dropping out if desired and thinking beyond conventional norms [388-393].
MAJOR DISCUSSION POINT
Cultivating independent thinking
Argument 16
– Powerful technologies have dual‑use risk; AI could enable customized biological threats, requiring responsible governance (Vinod Khosla)
EXPLANATION
Khosla warns that AI, like nuclear or biowarfare technologies, can be misused to create tailored biological weapons, underscoring the need for responsible oversight and governance.
EVIDENCE
He draws parallels with nuclear and biowarfare as dual-use technologies, stating that AI could be used to design ethnicity-specific biological threats and must be governed responsibly [317-320].
MAJOR DISCUSSION POINT
Dual‑use risk of AI
Argument 17
– Maintaining a diversity of AI models adds resilience against a single malicious AI system (Vinod Khosla)
EXPLANATION
Khosla argues that having multiple, diverse AI models reduces the risk that a single compromised or malicious AI could dominate, thereby enhancing system resilience.
EVIDENCE
He notes that a diversity of models lowers the chance of a single bad AI becoming dominant, adding resilience to the AI landscape [327-330].
MAJOR DISCUSSION POINT
Model diversity for safety
Argument 18
– Political regulations (e.g., Germany’s robot‑work ban) can hinder beneficial AI deployment (Vinod Khosla)
EXPLANATION
Khosla points out that government policies, such as Germany’s prohibition on robots working on Sundays, can obstruct the practical rollout of AI technologies that could otherwise provide economic benefits.
EVIDENCE
He cites the German example where robots are barred from retail on Sundays, labeling it “stupidity” that could impede AI adoption [129-131].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for societal permission and the impact of restrictive regulations on AI adoption are noted in [S14].
MAJOR DISCUSSION POINT
Regulatory barriers to AI
Argument 19
– In ten years AI will surpass human knowledge in most subjects; education will shift to AI‑augmented learning (Vinod Khosla)
EXPLANATION
Khosla predicts that within a decade AI systems will know more than human students across virtually all disciplines, prompting a transformation of education toward AI‑enhanced learning environments.
EVIDENCE
He recounts a conversation with the IIT Delhi director, noting that none of the 500 students could know more than AI on any subject, implying AI will dominate knowledge [398-402].
MAJOR DISCUSSION POINT
Future of AI‑driven education
Argument 20
– Top global‑impact applications are AI doctors, AI teachers, and AI agronomists (Vinod Khosla)
EXPLANATION
Khosla identifies AI‑powered healthcare, education, and agriculture as the most consequential applications for addressing worldwide challenges.
EVIDENCE
He lists AI doctors, AI teachers, and AI agronomists as the three obvious high-impact solutions that could affect billions of people [467-470].
MAJOR DISCUSSION POINT
High‑impact AI use cases
Argument 21
– Power‑consumption constraints may change dramatically as inference cost drops by orders of magnitude (Vinod Khosla)
EXPLANATION
Khosla suggests that as AI inference becomes dramatically cheaper, the associated power consumption could fall sharply, altering current concerns about energy usage.
EVIDENCE
He mentions that power consumption may drop 1,000-fold while usage skyrockets, noting that inference cost has already fallen 1,000-fold and may drop another 100-fold, reshaping the power equation [463-464].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Projected large reductions in inference cost and associated power savings are mentioned in [S9], with baseline data-center consumption context in [S12].
MAJOR DISCUSSION POINT
Future energy dynamics of AI
AGREED WITH
Nivruthi Rai
Moderator
1 argument · 144 words per minute · 291 words · 121 seconds
Argument 1
– Presented Vinod Khosla’s five career phases to frame the discussion on AI’s evolution and impact (Moderator)
EXPLANATION
The moderator introduced Vinod Khosla by outlining his five distinct career phases, providing context for his perspective on AI development and its societal implications.
EVIDENCE
He described Khosla’s journey from immigrant engineer to investor, highlighting phases such as open systems, clean-tech investments, macro-thinking, OpenAI investment, and the era of abundance, and listed notable companies associated with him [4-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Khosla’s career phases and background are outlined in [S1].
MAJOR DISCUSSION POINT
Setting the stage with Khosla’s background
Audience
1 argument · 134 words per minute · 72 words · 32 seconds
Argument 1
– Questioned whether pharma firms should go “all‑in” on AI now despite stringent regulations, prompting a discussion on aggressive adoption (Audience)
EXPLANATION
An audience member asked if pharmaceutical companies should fully embrace AI despite the challenges posed by strict regulatory frameworks, seeking guidance on the timing of AI adoption.
EVIDENCE
The question was raised about the pharma sector’s regulatory hurdles and whether companies should adopt AI aggressively or wait, highlighting the tension between innovation and compliance [496-500].
MAJOR DISCUSSION POINT
Pharma sector’s AI adoption dilemma
Agreements
Agreement Points
AI infrastructure constraints: data‑center power consumption is already a significant share of global energy and will double, requiring renewable or nuclear solutions and prompting concerns about future power availability.
Speakers: Nivruthi Rai, Vinod Khosla
– Semiconductor performance has shifted to “performance per watt per area,” and data‑center power use already consumes ~1% of global capacity and will double, demanding renewable/nuclear solutions (Nivruthi Rai)
– Power‑consumption constraints may change dramatically as inference cost drops by orders of magnitude (Vinod Khosla)
Rai explains that the semiconductor race has moved to performance per watt per area and that data centers now consume about 1% of global electricity, a share set to double, creating a need for renewable or nuclear power [53-58]. Khosla adds that while inference cost may fall dramatically, power consumption could still be a limiting factor, though it might also drop if efficiency improves [463-464].
POLICY CONTEXT (KNOWLEDGE BASE)
This concern aligns with recent calls for sustainable AI development that emphasize reducing data-center energy footprints and adopting sector-specific standards to lower overall emissions [S46][S49].
Political and regulatory barriers can impede large‑scale AI deployment.
Speakers: Nivruthi Rai, Vinod Khosla
– AI investment is justified only if the technology can be widely deployed; political resistance (e.g., robot bans) can block adoption (Vinod Khosla)
– Politics can block AI deployment, as illustrated by Germany’s ban on robots working on Sundays (Nivruthi Rai)
Rai points out that political decisions, such as Germany’s prohibition on Sunday robot work, can block AI adoption and that capitalism operates by permission of democracy [129-135]. Khosla makes the same case, noting that political resistance (e.g., robot bans) can prevent the justified investment in AI from being realized [126-131][134-135].
POLICY CONTEXT (KNOWLEDGE BASE)
The barrier reflects observations in security and governance reports that highlight the need for robust AI policy frameworks to manage dual-use risks and regulatory challenges [S39][S40][S55].
AI‑driven public services built on Aadhaar/UPI can provide free health, education and agricultural advice to millions of Indians.
Speakers: Nivruthi Rai, Vinod Khosla
– India must ensure AI benefits reach citizens first through Aadhaar‑based services like AI doctors, tutors, and agronomists (Nivruthi Rai)
– AI‑driven tutors, primary‑care doctors, and agronomists can serve millions of underserved Indians, leveraging Aadhaar and UPI infrastructure (Vinod Khosla)
Rai stresses that AI deployment in India should prioritize Aadhaar-based AI doctors, tutors and agronomists to reach all citizens, especially the underserved [145-158]. Khosla echoes this, highlighting AI primary-care doctors, AI tutors (CK-12) and AI agronomists as high-impact services that can be delivered via Aadhaar and UPI [146-152].
Founders need investors who are willing to take large risks and not rely on short‑term financial metrics such as IRR.
Speakers: Nivruthi Rai, Vinod Khosla
– Founders should evaluate investors based on willingness to take large risks and provide real expertise, not merely capital (Nivruthi Rai)
– Indian VCs are overly risk‑averse, emphasizing short‑term revenue plans; IRR metrics are misleading for frontier tech (Vinod Khosla)
Rai advises founders to choose investors who back high-risk, high-impact ventures and bring expertise beyond capital [255-256]. Khosla criticises Indian VCs for risk-aversion, short-term revenue focus and the misuse of IRR as a metric for frontier technologies [341-354][357-363].
POLICY CONTEXT (KNOWLEDGE BASE)
Investor expectations discussed in startup financing studies stress the importance of long-term capital and risk-tolerant backers, contrasting with short-term IRR focus [S51][S52].
Sector‑specific skill development for women and farmers is more effective than generic graduation degrees.
Speakers: Nivruthi Rai, Vinod Khosla
– Emphasize sector‑specific skill training for women and farmers rather than generic graduation degrees (Nivruthi Rai)
– AI agronomists can help women farmers on small plots, illustrating the need for targeted support (Vinod Khosla)
Rai argues that focusing on sector-specific training (e.g., textiles, agriculture) for women is more beneficial than generic degrees [170-176]. Khosla supports this by describing AI-enabled agronomists that can assist women farmers on one-acre plots, showing the value of targeted AI tools [161-163].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on upskilling stress targeted, sector-specific training for women and farmers as more effective than generic degrees, echoing UNESCO TVET recommendations [S34][S35][S37].
AI is a strategic technology with dual‑use potential, comparable to nuclear, requiring responsible governance.
Speakers: Nivruthi Rai, Vinod Khosla
– AI is as strategic as nuclear, with potential for both great benefit and great risk (Nivruthi Rai)
– Powerful technologies have dual‑use risk; AI could enable customized biological threats, so responsible governance is essential (Vinod Khosla)
Rai describes AI as strategic like nuclear, implying high stakes for national security and development [75-76]. Khosla parallels AI with nuclear and biowarfare, warning that AI could be misused for customized biological threats and must be governed responsibly [317-320].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple expert analyses compare AI’s dual-use nature to nuclear technology and call for dedicated governance mechanisms [S38][S39][S43].
Similar Viewpoints
Both see power consumption as a critical bottleneck for AI scaling and stress the need for energy‑efficient solutions or new energy sources [53-58][463-464].
Speakers: Nivruthi Rai, Vinod Khosla
– Semiconductor performance has shifted to “performance per watt per area,” and data‑center power use already consumes ~1% of global capacity and will double, demanding renewable/nuclear solutions (Nivruthi Rai)
– Power‑consumption constraints may change dramatically as inference cost drops by orders of magnitude (Vinod Khosla)
Both agree that political decisions can impede AI adoption, making policy a key factor for successful deployment [126-131][129-135].
Speakers: Nivruthi Rai, Vinod Khosla
– AI investment is justified only if the technology can be widely deployed; political resistance (e.g., robot bans) can block adoption (Vinod Khosla)
– Politics can block AI deployment, as illustrated by Germany’s ban on robots working on Sundays (Nivruthi Rai)
Both champion AI‑enabled public services built on existing Indian digital identity and payment infrastructure to reach the underserved [145-158][146-152].
Speakers: Nivruthi Rai, Vinod Khosla
– India must ensure AI benefits reach citizens first through Aadhaar‑based services like AI doctors, tutors, and agronomists (Nivruthi Rai)
– AI‑driven tutors, primary‑care doctors, and agronomists can serve millions of underserved Indians, leveraging Aadhaar and UPI infrastructure (Vinod Khosla)
Both criticize the Indian VC ecosystem for risk‑aversion and reliance on short‑term financial metrics, urging a shift toward high‑risk, high‑impact investing [255-256][341-354][357-363].
Speakers: Nivruthi Rai, Vinod Khosla
– Founders should evaluate investors based on willingness to take large risks and provide real expertise, not merely capital (Nivruthi Rai)
– Indian VCs are overly risk‑averse; IRR metrics are misleading for frontier tech (Vinod Khosla)
Both see value in targeted, sector‑specific interventions (especially for women in agriculture) over generic higher‑education pathways [170-176][161-163].
Speakers: Nivruthi Rai, Vinod Khosla
– Emphasize sector‑specific skill training for women and farmers rather than generic graduation degrees (Nivruthi Rai)
– AI agronomists can help women farmers on small plots, illustrating the need for targeted support (Vinod Khosla)
Both treat AI as a strategic, dual‑use technology comparable to nuclear, emphasizing the need for responsible oversight [75-76][317-320].
Speakers: Nivruthi Rai, Vinod Khosla
– AI is as strategic as nuclear, with potential for both great benefit and great risk (Nivruthi Rai)
– Powerful technologies have dual‑use risk; AI could enable customized biological threats, so responsible governance is essential (Vinod Khosla)
Unexpected Consensus
Both speakers framed AI as a strategic technology comparable to nuclear, highlighting dual‑use risks and the necessity for responsible governance.
Speakers: Nivruthi Rai, Vinod Khosla
– AI is as strategic as nuclear, with potential for both great benefit and great risk (Nivruthi Rai)
– Powerful technologies have dual‑use risk; AI could enable customized biological threats, so responsible governance is essential (Vinod Khosla)
While Rai mentioned AI’s strategic importance in a broad development context, Khosla explicitly linked AI to dual-use concerns similar to nuclear weapons, yet both converged on the view that AI requires careful governance to mitigate existential risks [75-76][317-320].
POLICY CONTEXT (KNOWLEDGE BASE)
The framing mirrors discussions in international security assessments that position AI alongside nuclear weapons in dual-use risk considerations [S38][S55].
Overall Assessment

The discussion revealed strong convergence between Nivruthi Rai and Vinod Khosla on several fronts: the urgency of addressing AI‑related power and infrastructure constraints; the blocking role of politics and regulation; the promise of Aadhaar‑based AI public services for health, education and agriculture; the need for risk‑tolerant, non‑IRR‑focused investors; and the importance of sector‑specific skill development for women and farmers. Both also treated AI as a strategic, dual‑use technology akin to nuclear, underscoring governance challenges.

Consensus is high on the core infrastructural, policy, and societal dimensions of AI, suggesting that future initiatives should prioritize energy‑efficient AI infrastructure, supportive regulatory frameworks, inclusive AI service delivery, and a venture ecosystem that embraces high‑risk, high‑impact investments.

Differences
Different Viewpoints
How to address AI infrastructure bottlenecks (hardware supply vs compute efficiency)
Speakers: Nivruthi Rai, Vinod Khosla
– GPU and high‑bandwidth memory supply is limited to a few fabs, creating a bottleneck for AI compute scaling (Nivruthi Rai)
– Investing in compute‑efficiency techniques (e.g., checkpoint‑free training) can double AI capacity without extra power (Vinod Khosla)
Rai stresses that the scarcity of GPUs and HBM chips (80% of which come from three companies) and the need for additional fabs severely limit AI scaling and require renewable/nuclear power solutions [66-71][85-88]. Khosla counters that algorithmic advances, such as eliminating checkpoint restarts, can double usable compute without adding chips or power, and that his firm is investing heavily in compute efficiency to cut costs and energy use [216-218][185-194][199-203].
POLICY CONTEXT (KNOWLEDGE BASE)
Debate over hardware supply versus compute efficiency is reflected in sustainability reports that advocate for efficiency-focused AI models and sector-specific standards to mitigate infrastructure bottlenecks [S46][S49].
Strategic focus: narrow, sector‑specific AI use cases vs building a general super‑intelligence
Speakers: Nivruthi Rai, Vinod Khosla
– Focus on 20‑30‑50 precise use cases rather than a broad, generic approach (Nivruthi Rai)
– The long‑term goal is Artificial Super‑Intelligence (ASI) that can be fine‑tuned for any domain; narrow AI is a short‑term misconception (Vinod Khosla)
Rai proposes concentrating resources on a limited set of high-impact applications (e.g., traffic, doctors, education) to make progress with constrained capital [236-237]. Khosla argues that true progress requires building a single, general ASI that can be adapted to any field, dismissing specialized AI as a short-term mistaken notion [242-245].
POLICY CONTEXT (KNOWLEDGE BASE)
The strategic tension mirrors a documented debate at the Invest India Fireside Chat where Rai advocated narrow use-cases and Khosla argued for building general AI intelligence [S47][S48].
Approach to skill development and education for underserved groups
Speakers: Nivruthi Rai, Vinod Khosla
– Emphasize sector‑specific skill training for women and farmers rather than generic graduation degrees (Nivruthi Rai)
– Prioritize expanding dorm capacity and AI‑augmented learning environments over traditional academic buildings (Vinod Khosla)
Rai critiques the over-emphasis on graduation, advocating targeted training in textiles, agriculture, etc., especially for rural women [170-176]. Khosla suggests that universities should build more dormitories to increase dense student interaction with AI-enhanced learning, deeming new academic buildings less critical [406-415].
POLICY CONTEXT (KNOWLEDGE BASE)
Inclusive skill-development approaches for underserved populations are endorsed by upskilling coalitions and digital inclusion studies [S34][S35][S56].
How founders should evaluate investors and the role of financial metrics
Speakers: Nivruthi Rai, Vinod Khosla
– Founders should evaluate investors based on willingness to take large risks and provide real expertise, not merely capital (Nivruthi Rai)
– IRR metrics are misleading for frontier tech; investors should accept failure and back high‑risk bets (Vinod Khosla)
Rai advises founders to choose investors who back high-risk, high-impact ventures and bring substantive expertise [255-256]. Khosla criticizes Indian VCs for risk-aversion and reliance on IRR calculations, arguing such metrics hinder breakthrough innovation and that investors should focus on bold bets rather than short-term returns [341-354][357-363]. While both call for risk-taking, they differ on the specific criteria and critique of current VC practices.
POLICY CONTEXT (KNOWLEDGE BASE)
Founders’ evaluation of investors is discussed in analyses of financing digital startups, highlighting the need for patient capital and alignment beyond short-term metrics [S51][S52].
Role of policy and political environment in AI deployment
Speakers: Nivruthi Rai, Vinod Khosla
– Should India build capacity, capability, and consumption of AI? (Nivruthi Rai)
– Political resistance (e.g., Germany’s robot‑work ban) can block AI adoption; capitalism is by permission of democracy (Vinod Khosla)
Rai frames the policy question as whether India should focus on building AI capacity, capability, and consumption [98-101]. Khosla highlights concrete political obstacles, such as Germany’s ban on Sunday robot work, and stresses that democratic politics ultimately enable or block AI deployment [126-131][135-136]. Both see policy as pivotal but differ on the emphasis: Rai asks a strategic “what should we do?” while Khosla warns of specific regulatory roadblocks.
POLICY CONTEXT (KNOWLEDGE BASE)
The role of policy and politics is highlighted in security and social empowerment reports that call for comprehensive AI governance frameworks [S39][S40].
Unexpected Differences
Focus on narrow, sector‑specific AI use cases vs building a universal ASI
Speakers: Nivruthi Rai, Vinod Khosla
– Can we focus on 20‑30‑50 precise use cases and not work on generic AI? (Nivruthi Rai)
– The long‑term goal is Artificial Super‑Intelligence (ASI) that can be fine‑tuned for any domain; narrow AI is a short‑term misconception (Vinod Khosla)
Both participants are strong AI advocates, yet they diverge sharply on the strategic roadmap: Rai proposes a pragmatic, limited‑use‑case approach given capital constraints, while Khosla insists that only a general ASI can deliver lasting impact, dismissing narrow solutions as short‑sighted. This contrast was not anticipated given their shared enthusiasm for AI’s transformative power.
POLICY CONTEXT (KNOWLEDGE BASE)
The same debate on narrow versus universal AI strategies is documented in the Invest India Fireside Chat, reflecting divergent views on capital allocation and long-term impact [S47][S48].
Education strategy: sector‑specific vocational training vs AI‑augmented dorm‑centric learning
Speakers: Nivruthi Rai, Vinod Khosla
– Emphasize sector‑specific skill training for women and farmers rather than generic graduation (Nivruthi Rai)
– Prioritize expanding dorm capacity to foster dense AI‑augmented learning, de‑emphasizing traditional academic buildings (Vinod Khosla)
While both discuss improving human capital, Rai advocates targeted vocational programs for rural women, whereas Khosla proposes a structural shift in university design to maximize AI‑enhanced peer interaction. The divergence in educational policy focus was unexpected given their common interest in capacity building.
POLICY CONTEXT (KNOWLEDGE BASE)
Education strategy discussions in upskilling and TVET literature contrast vocational, sector-specific training with AI-enhanced learning environments, emphasizing the need for tailored approaches [S34][S35][S37].
Overall Assessment

The discussion revealed several substantive disagreements: (1) hardware supply constraints versus algorithmic compute‑efficiency as the primary solution to AI scaling; (2) whether to concentrate on a limited set of high‑impact applications or to pursue a universal super‑intelligence; (3) differing visions for skill development and higher‑education infrastructure; (4) contrasting views on investor evaluation criteria and the relevance of IRR metrics; and (5) varied emphases on policy – strategic capacity‑building versus specific political barriers. While participants share a common optimism about AI’s potential for India, they diverge on the pathways to realize that potential.

The level of disagreement is moderate to high. The disagreements are not about the desirability of AI but about the practical routes: hardware versus software solutions, narrow versus general AI, vocational versus AI‑centric education, and risk‑tolerant investment metrics. These gaps suggest that coordinated policy and industry action will need to reconcile differing priorities to avoid fragmented efforts and to ensure that infrastructure, regulatory, and investment strategies align with the shared goal of inclusive AI‑driven development.

Partial Agreements
Both speakers agree that AI should be deployed through existing Aadhaar/UPI platforms to deliver free health, education, and agricultural services at scale, but Rai emphasizes inclusive public‑service design while Khosla focuses on the business potential and scalability of such services [145-158][146-152].
Speakers: Nivruthi Rai, Vinod Khosla
* AI‑driven tutors, primary‑care doctors, and agronomists can serve millions of underserved Indians, leveraging Aadhaar and UPI infrastructure (both)
* India must ensure AI benefits reach citizens first through Aadhaar‑based services like AI doctors, tutors, and agronomists (Nivruthi Rai)
* AI‑driven tutors, primary‑care doctors, and agronomists can serve millions of underserved Indians, leveraging Aadhaar and UPI infrastructure (Vinod Khosla)
Both concur that AI investment must lead to broad deployment and economic impact, yet Rai frames the question around capacity‑building choices, whereas Khosla stresses political permission and large‑scale revenue potential as the condition for justified investment [98-101][118-128].
Speakers: Nivruthi Rai, Vinod Khosla
* AI investment is justified only if the technology can be widely deployed (Vinod Khosla)
* Should we build capacity, capability, or drive consumption? (Nivruthi Rai)
Takeaways
Key takeaways
* AI infrastructure is currently constrained by limited GPU/HBM supply and high data‑center power consumption; compute‑efficiency innovations (e.g., checkpoint‑free training) can alleviate these bottlenecks.
* Economic justification for AI hinges on wide deployment that can generate hundreds of billions in revenue, but political and regulatory acceptance is essential.
* India should prioritize AI‑driven public services (AI doctors, tutors, agronomists) built on Aadhaar and UPI to ensure benefits reach the masses before any job‑displacement concerns arise.
* Future AI progress will be driven by data‑efficient and compute‑efficient models, making AI cheap enough to become a utility; the long‑term goal is Artificial Super‑Intelligence that can be fine‑tuned for any domain.
* Within 5‑10 years most research will be performed by AI‑based scientists, accelerating innovation exponentially.
* Back‑office BPO/IT services in India will be displaced by AI; firms must transition to AI implementation expertise and up‑skill their workforce.
* Indian venture‑capital culture is overly risk‑averse and fixated on short‑term IRR metrics, which stifles breakthrough innovation; investors should back high‑risk, high‑failure‑tolerance bets.
* Education should shift from building more academic buildings to expanding dorm capacity and fostering dense, AI‑augmented learning communities.
* AI carries dual‑use risks (e.g., customized biological threats); diversity of models and responsible governance are needed to mitigate misuse.
* Top societal impact applications identified: AI‑enabled primary‑care doctors, teachers, and agronomists, especially for the bottom billions.
Resolutions and action items
* Encourage Indian policymakers to enable AI deployment through supportive regulations (e.g., allowing AI‑based services in healthcare, education, agriculture).
* Promote investment in compute‑efficiency research (e.g., checkpoint‑free training, sparsity, in‑memory compute) to reduce power needs.
* Leverage existing Aadhaar and UPI infrastructure to launch AI‑driven public services (AI doctors, tutors, agronomists).
* Advise Indian VCs to shift away from strict IRR focus and increase willingness to fund high‑risk, high‑impact AI ventures.
* Encourage Indian startups and incumbents to pivot from BPO/back‑office models toward AI implementation services for SMEs.
* Recommend expanding student dormitory capacity to create dense AI‑augmented learning environments.
* Propose exploring “N = 1” personalized‑medicine approaches to bypass traditional large‑scale clinical trials.
Unresolved issues
* How to navigate specific political barriers (e.g., Germany’s robot‑work bans) that could impede AI adoption in various jurisdictions.
* Regulatory pathways for AI‑designed personalized medicines and how to obtain approval without conventional clinical trials.
* Concrete mechanisms for ensuring AI benefits reach all citizens before large‑scale job displacement occurs.
* Strategies for maintaining diversity of AI models to prevent a single malicious AI from dominating.
* Detailed roadmap for transitioning the existing BPO/IT workforce to AI‑centric roles.
* Exact timeline and scale of power‑consumption reductions versus the expected surge in AI usage.
* Methods to quantify and mitigate the risk of AI‑enabled bioweapon development.
Suggested compromises
* Deploy AI benefits (e.g., AI doctors, tutors, agronomists) first to gain public and political acceptance before broader AI integration that may affect jobs.
* Balance aggressive AI investment with responsible governance by promoting model diversity and ethical oversight.
* Adopt a phased approach: build AI infrastructure while simultaneously investing in compute‑efficiency to offset power constraints.
Thought Provoking Comments
Till AI is beneficial and not scary, we won’t get deployment because politicians will get in the way. Capitalism is by permission of democracy. Voters elect people who then make policy for capitalism and policy will drive that.
Highlights a non‑technical, systemic barrier—political and regulatory acceptance—that can dominate the success of AI, shifting the conversation from pure technology or capital considerations to governance and societal readiness.
This remark pivoted the discussion from infrastructure and investment to the role of policy, prompting the audience to consider how democratic processes shape AI rollout and leading Khosla to elaborate on India‑specific policy opportunities.
Speaker: Vinod Khosla
In five years, and certainly in ten years, almost all of this research will be done not by humans but by AI scientists—AI computer scientists, AI material scientists, AI fusion scientists, AI drug discovery scientists.
Projects a radical future where AI augments or replaces human researchers, expanding the scope of AI impact beyond consumer applications to the very engine of scientific discovery.
Introduced a new topic—AI‑generated research—causing the conversation to move toward acceleration of innovation, and reinforced his optimism about exponential progress, influencing later remarks about compute efficiency and emergent behavior.
Speaker: Vinod Khosla
You can’t build specialized intelligence; the way we will make progress is to build a single, general super‑intelligence (ASI) and then fine‑tune it for domains. Specialized AI is a short‑term mistaken notion.
Challenges the common industry view that narrow AI solutions are the pragmatic path, arguing instead for a unified, general intelligence as the strategic foundation.
Shifted the dialogue from discussing specific use‑case pilots to a broader strategic debate about AI architecture, influencing later exchanges about the need for a versatile AI platform rather than siloed applications.
Speaker: Vinod Khosla
All BPO and many IT services in India will be replaceable by AI within the next five years. The transition will be gradual because of existing contracts, but companies that don’t reinvent themselves will be ‘cooked.’
Provides a concrete, industry‑specific forecast that challenges the status quo of India’s massive BPO sector, linking AI adoption directly to economic disruption and workforce transformation.
Prompted a series of follow‑up questions about the future of the Indian workforce, the emergence of new front‑office opportunities, and the need for reskilling, deepening the conversation about societal impact.
Speaker: Vinod Khosla
Most VCs in India are risk‑averse and obsess over IRR calculations. For breakthrough, frontier technologies you can’t meaningfully project IRR; you have to accept failure and back big bets.
Critiques the prevailing venture capital mindset, exposing a cultural barrier to high‑risk, high‑reward innovation and offering a clear prescription for change.
Changed the tone from admiration of existing investors to a call for cultural shift, leading to discussion about founder‑investor alignment and encouraging the audience to rethink funding strategies.
Speaker: Vinod Khosla
Don’t build more academic buildings; build more dorm space so you can have more students learning from AI and from each other. Let AI be the core of education, not just a tool.
Proposes a radical re‑imagining of higher education infrastructure, positioning AI as the central learning engine and emphasizing peer interaction over traditional lecture‑centric models.
Introduced a new dimension—education reform—into the AI‑focused dialogue, inspiring thoughts about how AI can reshape talent pipelines and linking back to earlier points on workforce readiness.
Speaker: Vinod Khosla
We are investing heavily in compute‑efficiency: if we can eliminate the need to restart training after a failure, we could double compute capacity without extra chips or power.
Identifies a specific technical bottleneck (checkpointing) and a tangible solution that could dramatically reduce energy use and cost, directly addressing earlier concerns about power consumption and data‑center scaling.
Provided a concrete example of how engineering advances can alleviate the macro‑level power constraints discussed earlier, steering the conversation toward actionable research directions.
Speaker: Vinod Khosla
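Khosla's checkpointing point can be made concrete with a back-of-the-envelope model. The helper function and all numbers below are illustrative assumptions, not figures from the session: they sketch how periodic checkpoint writes, plus the work lost and redone after each failure, reduce the fraction of wall-clock time a training cluster spends on useful work, and why failure-tolerant ("checkpoint-free") training recovers that capacity.

```python
# Illustrative first-order model of checkpoint/restart overhead in
# large-scale training. Not from the session; numbers are hypothetical.

def effective_utilization(mtbf_h, ckpt_interval_h, ckpt_write_h):
    """Fraction of wall-clock time spent on useful training work.

    First-order approximation for periodic checkpointing:
    overhead = time spent writing checkpoints, plus, on each failure
    (one per `mtbf_h` hours on average), the work lost since the last
    checkpoint (half an interval on average).
    """
    ckpt_overhead = ckpt_write_h / ckpt_interval_h   # writing checkpoints
    rework = (ckpt_interval_h / 2) / mtbf_h          # redone work per failure
    return max(0.0, 1.0 - ckpt_overhead - rework)

# A large cluster with frequent failures and slow checkpoint writes:
with_ckpt = effective_utilization(mtbf_h=4, ckpt_interval_h=1, ckpt_write_h=0.25)

# If training survives failures without restarting, both terms vanish:
failure_tolerant = 1.0

print(f"with checkpointing: {with_ckpt:.0%}")   # 62% under these assumptions
print(f"failure-tolerant:   {failure_tolerant:.0%}")
```

Under these hypothetical parameters roughly a third of the cluster's capacity goes to checkpoint writes and redone work, which is the capacity Khosla suggests checkpoint-free training could reclaim without extra chips or power.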
Regulatory workaround: design drugs for a single patient (N=1). Because the regulator can’t demand a traditional clinical trial for a one‑off drug, AI‑designed personalized medicines become feasible.
Offers an innovative, albeit controversial, strategy to bypass entrenched regulatory hurdles in pharma, merging AI capability with policy navigation.
Shifted the discussion from abstract AI benefits to a concrete, industry‑specific application, prompting audience questions about implementation and reinforcing the theme of AI as a strategic, almost ‘nuclear‑level’ technology.
Speaker: Vinod Khosla
AI will create emergent communities of agents that can develop their own language to avoid human surveillance, as seen in the Moldbook/OpenCloud experiment.
Raises awareness of unintended emergent behaviors in complex AI systems, highlighting both the power and the risk of autonomous AI networks.
Added a layer of complexity regarding AI safety and governance, linking back to earlier concerns about misuse and the need for diverse models, and prompting the audience to consider ethical safeguards.
Speaker: Vinod Khosla
Overall Assessment

The discussion was steered by a series of bold, forward‑looking statements from Vinod Khosla that repeatedly shifted the focus from technical details to systemic, societal, and strategic dimensions of AI. Each of his provocative insights—whether about political gatekeeping, the rise of AI scientists, the necessity of a general super‑intelligence, the imminent disruption of India’s BPO sector, the cultural inertia of venture capital, or radical ideas for education and drug regulation—served as a turning point that opened new sub‑threads, forced participants to reconsider assumptions, and deepened the analysis. Nivruthi Rai’s framing questions helped surface these moments, but it was Khosla’s willingness to challenge conventional wisdom that drove the conversation toward a holistic view of AI as an infrastructure, a policy issue, an economic catalyst, and a potential existential risk.

Follow-up Questions
Is AI a generational platform shift or the largest capital misallocation?
Clarifies whether massive AI investments are justified or misdirected, influencing funding and policy decisions.
Speaker: Nivruthi Rai
What are your thoughts on sparsity, in‑memory compute, and non‑von‑Neumann/neuromorphic architectures for AI?
Addresses hardware strategies to improve performance per watt per area, crucial for scaling AI under power constraints.
Speaker: Nivruthi Rai
Should India focus on a limited set of 20, 30, or 50 precise AI use cases rather than broad deployment?
Explores strategic allocation of scarce resources to maximize impact and avoid diffusion of effort.
Speaker: Nivruthi Rai
If BPO/IT services are displaced by AI, what are the front‑office opportunities and what replaces the BPO model?
Seeks guidance for entrepreneurs and policymakers on new business models after AI disrupts traditional outsourcing.
Speaker: Nivruthi Rai
What should the millions currently employed in IT and BPO do to remain employable in an AI‑centric economy?
Highlights workforce reskilling needs to prevent large‑scale unemployment and maintain economic stability.
Speaker: Nivruthi Rai
How can India leapfrog from generic drug production to AI‑driven biologics?
Targets a high‑value sector where AI could transform drug discovery and elevate India’s pharma industry.
Speaker: Nivruthi Rai
Could AI‑enabled biological design become a customized biological threat?
Raises biosecurity concerns about weaponizing AI‑generated genetics, prompting need for safeguards.
Speaker: Nivruthi Rai
How should founders evaluate investors to ensure they get the most value in the partnership?
Aims to improve capital efficiency and alignment between startups and venture partners.
Speaker: Nivruthi Rai
Should regulated industries like pharma go ‘all‑in’ on AI now or wait on the sidelines?
Seeks advice on timing of AI adoption under strict regulatory environments, affecting competitive advantage.
Speaker: Audience member (unidentified)
What are the top five AI applications to solve global and Indian problems?
Requests prioritization of AI use‑cases that can deliver the greatest societal and economic impact.
Speaker: Nivruthi Rai
Does AI increase venture alpha or does capital crowding compress returns for most funds?
Investigates the effect of AI hype on venture fund performance and capital allocation efficiency.
Speaker: Nivruthi Rai
What AI belief is most overrated?
Seeks to dispel misconceptions that may misguide investors and technologists.
Speaker: Nivruthi Rai
What is the most underrated constraint on AI deployment?
Looks for hidden bottlenecks (e.g., power, data, policy) that could limit AI scaling.
Speaker: Nivruthi Rai
Ten years from now, what will seem embarrassingly obvious about AI in India?
Encourages forward‑looking reflection to identify current blind spots that will later appear trivial.
Speaker: Nivruthi Rai
Research area: Improving data efficiency in LLM training to achieve comparable performance with far less data.
Could dramatically lower compute and energy requirements, making AI more sustainable and accessible.
Speaker: Vinod Khosla
Research area: Developing checkpoint‑free training methods to double compute capacity without extra power.
Enhances hardware utilization and reduces downtime, accelerating model development.
Speaker: Vinod Khosla
Research area: Building AI ‘scientists’ across domains (material, fusion, drug discovery) to multiply research output.
Would exponentially increase innovation speed by leveraging AI for scientific discovery.
Speaker: Vinod Khosla
Research area: AI‑driven N=1 drug design to bypass traditional clinical trials and regulatory hurdles.
Promises personalized medicine and faster market entry while navigating regulatory constraints.
Speaker: Vinod Khosla
Research area: Studying emergent behavior of AI agent swarms, including self‑generated languages and autonomous coordination.
Critical for understanding safety, control, and potential of large‑scale autonomous AI systems.
Speaker: Vinod Khosla
Research area: Applying complex systems theory to AI communities (e.g., Moldbook/OpenCloud) to predict unpredictable dynamics.
Helps anticipate nonlinear outcomes and design robust AI ecosystems.
Speaker: Vinod Khosla
Research area: Power consumption trends—how inference cost reductions may be offset by massive usage growth.
Essential for planning sustainable data‑center infrastructure and energy policy.
Speaker: Vinod Khosla
Research area: AI‑augmented education models (e.g., dorm‑centric learning, AI‑facilitated debate).
Explores transformative approaches to higher education in the AI era.
Speaker: Vinod Khosla
Research area: Ensuring diversity of AI models to provide resilience against malicious or biased AI systems.
Mitigates risk of a single dominant, potentially harmful AI model.
Speaker: Vinod Khosla
Research area: Impact of political regulation on AI deployment (e.g., robot bans, policy barriers).
Identifies non‑technical obstacles that could impede AI adoption and suggests policy interventions.
Speaker: Vinod Khosla

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Marcus Wallenberg Chairman SEB & Saab

Session at a glance: Summary, keypoints, and speakers overview

Summary

The speaker frames the discussion as a major step forward for an AI initiative that aims to emulate the strategic approach of India’s government and industry under Prime Minister Modi [1-3]. He outlines three topics: a comparison of Sweden’s and India’s AI ecosystems, the industrial implications of AI diffusion, and sector-specific opportunities [4-8].


Sweden has invested over a decade in a national AI research programme called WASP, establishing dedicated research arenas and a graduate school that now produces roughly one PhD each week [8-10]. In contrast, India has built its AI strength through large-scale IT services and software engineering, creating a worldwide customer base rather than focusing on pure R&D [13-14]. The speaker argues that these complementary strengths make a Swedish-Indian partnership highly attractive for both research collaboration and applied AI projects [15-16].


He notes that after recent tariff changes, Europe faces a flood of low-cost Chinese products, which threatens the competitiveness of Swedish and broader European manufacturers [26-30]. According to him, AI is not a panacea but its widespread adoption in large companies will be essential to maintain market position and to develop new business models beyond cost reduction [31-36]. He observes that India appears more optimistic about AI and digitisation than Europe, which could accelerate joint initiatives [38-39].


In the life-science sector, he cites his board role at AstraZeneca and predicts AI will dramatically speed up molecule discovery and enable personalized medicine [40-45]. He also highlights defence applications, mentioning Saab’s radar systems and a 2025 trial in which an AI agent fully controlled a Gripen aircraft [46-53][62]. Furthermore, he points to telecommunications, stating that future 5G and 6G networks will be largely AI-driven, handling massive data flows for society [66-67].


The overall message is that leveraging Sweden’s research capacity together with India’s software expertise can help European industry compete globally and drive innovation across critical sectors [15-16][31-36][66-67]. He concludes that AI diffusion will be a decisive factor for economic competitiveness and societal advancement in the coming years [32][64][65].


Keypoints

Major discussion points


Sweden-India AI partnership:


The speaker contrasts Sweden’s long-term, research-heavy AI programme (WASP, PhD output) with India’s strength in applied software engineering and global IT services, arguing that the two models complement each other and present a strong basis for deeper collaboration. [8-15][16-24]


AI as a tool for industrial competitiveness:


In the wake of recent tariff changes and a flood of low-cost Chinese products, the speaker stresses that diffusion of AI into large European firms is essential to stay competitive, not only by cutting costs but also by enabling new business models and services. [26-34][35-36]


Key sectors where AI will have transformative impact:


The talk highlights life-sciences (accelerated drug discovery, personalized medicine), defence (radar, AI-controlled aircraft) and telecommunications (AI-driven 5G/6G networks) as areas where AI is expected to deliver major breakthroughs. [38-45][46-63][66-67]


Overall purpose / goal


The speaker aims to persuade the audience that a coordinated Sweden-India AI initiative, leveraging Sweden’s research capacity and India’s software-service ecosystem, will accelerate AI diffusion across industry, bolster Europe’s competitiveness against cheap Chinese imports, and unlock high-value applications in health, defence, and communications.


Tone of the discussion


The tone is consistently optimistic and forward-looking, using phrases such as “big step forward,” “fantastic possibility,” and “huge possibility” to convey confidence in AI’s potential. While acknowledging challenges (e.g., cheap Chinese exports), the speaker maintains a constructive stance, emphasizing opportunities rather than dwelling on obstacles. The enthusiasm peaks when describing specific application domains (life sciences, defence, telecom), reinforcing a hopeful outlook throughout.


Speakers

Speaker 1


– Role/Title: Board member, AstraZeneca


– Area of expertise: AI applications in industry and life‑sciences (discussed AI diffusion, industrial competitiveness, and pharmaceutical innovation)


Additional speakers:


– None


Full session report: Comprehensive analysis and detailed insights

Speaker 1 opens by calling the AI initiative “a big step forward” and likens its strategic ambition to Prime Minister Narendra Modi’s effort to mobilise Indian industry for long-term national goals, expressing hope that the AI programme will follow a similar path [1-4].


He then outlines three topics. First, he describes Sweden’s national AI programme (WASP), launched around 2015-2017 with strong public funding to meet the automation and autonomy needs of Swedish industry. The programme created dedicated research arenas, a PhD- and master-level school, and now graduates roughly one PhD per week [8-13].


Second, he contrasts this with India’s approach, which has not focused on pure R&D but on building a massive applied software-engineering capability through its IT-services sector and a global customer base [13-15].


Third, he emphasizes the complementarity of the two models: Sweden’s deep research capacity and India’s implementation strength can be paired, and the Swedish delegation sees “a fantastic possibility” for closer research and application collaboration [15-24]. He notes that Indian stakeholders appear more optimistic about AI and digitisation than their European counterparts, a cultural difference that could accelerate joint initiatives [38-39].


Turning to industrial competitiveness, he recalls the “beautiful day in the Rose Garden when all the tariffs were put on” [22-23] and observes that after the 2 April tariff changes Europe has been inundated with very cheap Chinese products, posing a serious challenge for Swedish and broader European manufacturers [26-28]. Swedish industry is dominated by multinational engineering firms [30-31], making AI diffusion across large firms essential to compete on more than price and to enable new business models, services and products [31-34][35-36]. He also highlights that AI will support governmental and societal services [64-65].


Sector-specific opportunities are highlighted:


* Life sciences – as a board member of AstraZeneca, he foresees AI dramatically shortening molecule-discovery cycles and supporting personalized medicine, extending advanced treatments to currently underserved patients [40-45].


* Defence – AI is already used to process massive data streams, e.g., Saab’s radar-equipped aircraft. In a 2025 demonstration, an AI agent was given full mission-critical control of a Gripen fighter [46-50][62].


* Robotics – mentioned briefly as another AI-driven area [58].


* Telecommunications – future 5G and 6G networks will be largely AI-driven and AI-focused, essential for handling the enormous data flows that will traverse societies [66-67].


He concludes that a coordinated Sweden-India AI partnership, leveraging Sweden’s long-term research infrastructure and India’s applied software expertise, can accelerate AI diffusion across industry, bolster European competitiveness against low-cost Chinese imports, and unlock transformative applications in health, defence, communications and beyond. AI will be a decisive factor for economic competitiveness and societal advancement, provided it is widely adopted and integrated into both research and commercial practice [31-36][64-65][66-67].


Session transcript: Complete transcript of the session
Speaker 1

It’s really a big step forward. I’ve followed Indian business for a long, long time, and the whole setup here reminds me of the way that Right Honourable Premier Modi set up his whole idea around making India with this tremendous force and getting the backing of very many Indian companies to achieve long-term political goals. So I really hope that the AI initiative will go the same way. I thought I would talk a little bit about three different matters. I’ll be relatively brief. I will start a little bit to talk about Sweden and India. I’ll talk a little bit about AI diffusion and what’s important there, and I’ll talk a little bit about some of the practical issues that we see from an industrial point of view, what we can think about when it comes to AI. So let me start by taking you back; the Swedish context might be a reference point to what is going on here right now. Sweden started a big research effort. Our family put in a program, which is now ten years plus, focusing on developing basic research in AI, and we started that 2015, 2017. We funded it with a major push into this, and the reason for that was basically because we saw the automation needs and the autonomy needs of Swedish industry and industrial products. It’s called WASP.

And not only do they have a number of arenas where they base a certain amount of typical research that you can use for AI, but also they started a school for PhDs and master’s students in AI. And today we graduate one PhD per week out of this program. So what does that mean for the Swedish context? That has been an extremely important part in terms of building the basic knowledge around AI and how you can use it. India, on the other hand, as I see it or as I perceive it, has not gone primarily the R&D route, but primarily the way to build this fantastic knowledge base in software engineering, which is much more applied, especially when you think about how…

India has worked with their IT services companies, developing a tremendous base in terms of customers, not only a knowledge base, but also a customer base all around the world. So actually, from this point of view, India and Sweden should have a very good fit. And I think that some of us who are here on this trip in the Swedish delegation have seen the potential to work much closer with India along research lines and more application lines on the IT services and software knowledge that you have in this country. And as we know, when India starts moving, it’s a very major force. And you will, in my view, have a fantastic possibility to develop your initiative on AI in a very good way for your customers.

And that brings me a little bit into my… My second point. Namely, the whole question that we are dealing with from Swedish industry. You have to remember Swedish industry is to a large extent very much focused on multinational engineering companies that are having a global scale. India, of course, a different industrial structure. But here comes what I think is the big take where actually Sweden and India in more practical terms could work much closer with each other. Namely, the knowledge of the IT services companies putting an AI on top of the whole stack to be able to move this into a completely different position for these companies. So why is this important? This is important because what we’re witnessing today from an industrial point of view, not the least, after April 2nd, the beautiful day in the Rose Garden when all the tariffs were put on.

What we’ve seen since then is this widespread Chinese export of very cheap products into the world market, which is, of course, a big, big challenge for many companies in Europe and also in Sweden. This will be absolutely key for us in the future. How do we make sure that we can compete with Chinese and other companies, but primarily Chinese companies with very low prices? How do we make sure that we can compete on the world markets with them in a good way? I’m not saying AI is everything. But AI and the diffusion of AI into the real world of large companies will be key. Otherwise, we will not be able to do this in a smart way in years to come.

So therefore, I believe that also on this point, the whole competitiveness would be a very, very important part. But AI gives us more. AI gives us a huge possibility to move in and let the companies move into completely new areas in terms of business model, not only being cost efficient, but also in terms of providing new services and new products to the market. Here, I move into my third point. My third point is that we often, here in India, I think you have more of a positive way of thinking about AI and digitization, maybe, than Europe. But I tell you that when I look at certain industries and what is actually going on right now, it is a tremendous step forward.

And perhaps, and I sit on the board of AstraZeneca, which is a very large pharmaceutical company, British-Swedish based, I would say that perhaps the most worthwhile application of AI going forward will be in life sciences. Not only life sciences in terms of providing better hospital services and so on, but when you think about how you will be able to use AI in getting new molecules in a much faster way, and how you will be able to apply more personalized medicine based on your test results, being able to apply specialized treatment for people will mean that actually down the line we will provide medical needs to people that cannot be serviced today in the same way.

Then of course we look at things like robotics, but another thing I would like to bring up is the defense business. In defense materiel, AI will play a very significant role. We see it in many ways today, not least when you start to accumulate and analyze data in a big way. For example, Saab, which is a Swedish defense company, is actually using radar aircraft where you need, both for command and for control, a tremendous amount of AI diffusion.

We see the Gripen, which is actually divided in its software layer between those parts that control the mission-critical functions and those that control the systems of the aircraft. In 2025, we actually applied an AI agent into the mission-critical control and flew the Gripen aircraft with the AI agent in full control. So what is actually happening here is that, on the one hand, you see these great abilities for AI to support companies in terms of being much more efficient, and not only companies but also governmental and other services coming through society; on the other hand, you see tremendous product development that is going on at a very, very high speed.

At the bottom line, I believe we will see many more examples like these coming through. And when I see Mr. Ekudden here, the chief technology officer of Ericsson, I am reminded that our future 5G and 6G telecommunication networks will, to a large extent, be AI-driven and AI-focused. This is an extremely important point for societies: all the huge amounts of data that will flow through societies via the mobile networks of the future will be completely supported by AI.

Related Resources: Knowledge base sources related to the discussion topics (13)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Speaker likens the AI initiative to Prime Minister Narendra Modi’s effort to mobilise Indian industry for long‑term national goals, expressing hope the AI programme will follow a similar path.”

The keynote by Marcus Wallenberg explicitly draws parallels to Modi’s successful approach to mobilising Indian companies for long‑term objectives, confirming the speaker’s comparison.

Confirmed (high)

“Sweden’s national AI programme (WASP) was launched around 2015‑2017 with strong public funding, created dedicated research arenas, a PhD‑ and master‑level school, and now graduates roughly one PhD per week.”

Multiple sources describe WASP as a research‑intensive initiative started in 2015‑2017, funded by the Knut and Alice Wallenberg Foundation, with dedicated research arenas and a doctoral school that graduates about one PhD each week.

Confirmed (high)

“India’s AI approach has not focused on pure R&D but on building a massive applied software‑engineering capability through its IT‑services sector and a global customer base.”

The knowledge base notes India’s historical strength in providing capability‑for‑hire through its IT services industry and the need to evolve toward proprietary AI products, confirming the described focus on applied engineering.

Confirmed (medium)

“Sweden’s deep research capacity and India’s implementation strength are complementary, and the Swedish delegation sees a “fantastic possibility” for closer research and application collaboration.”

Wallenberg’s keynote emphasizes the complementary nature of Sweden’s research‑intensive model and India’s implementation expertise, and highlights the opportunity for joint collaboration.

Additional Context (medium)

“Indian stakeholders appear more optimistic about AI and digitisation than their European counterparts, a cultural difference that could accelerate joint initiatives.”

The discussion is described as optimistic and collaborative, especially from the Indian side, providing contextual support for the claim of differing optimism levels.

Additional Context (medium)

“After the 2 April tariff changes Europe has been inundated with very cheap Chinese products, posing a serious challenge for Swedish and broader European manufacturers.”

The knowledge base discusses the use of tariff threats as diplomatic tools in trade disputes, offering broader context on tariff policies but does not confirm the specific 2 April change or the resulting influx of cheap Chinese goods.

Confirmed (medium)

“AI will support governmental and societal services.”

The source on citizen‑centric AI highlights AI’s role in enhancing public services, confirming the claim that AI will support governmental and societal functions.

External Sources (47)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Keynote by Marcus Wallenberg Chairman SEB & Saab — Marcus Wallenberg delivered a comprehensive discussion on AI development and the potential for Sweden-India collaboratio…
S5
Keynote by Marcus Wallenberg Chairman SEB & Saab — He explained that Sweden has taken a research-focused approach to AI development through the WASP program, which his fam…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — And in a turbulent world, you need to choose your friends carefully. Sweden is choosing India. India provides the incred…
S7
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Dowson Tong highlighted China’s vibrant AI ecosystem, characterised by hundreds of model companies and a strong open-sou…
S8
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — Europe in Comin… Now the floor is yours. Ursula von der Leyen: Thank you so much, Honourable Chair, Minister Vaisnav, …
S9
HIGH LEVEL LEADERS SESSION IV — Artificial Intelligence is used in many fields
S10
Democratizing AI: Open foundations and shared resources for global impact — **Climate and Agriculture**: Applications include weather prediction systems and plant disease detection tools for agric…
S11
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — This discussion features a keynote address by Swedish Deputy Prime Minister Ebba Bush at an AI Impact Summit in India, f…
S12
360° on AI Regulations — In conclusion, the analysis reveals that AI regulation is guided by existing laws, and there is a complementary nature b…
S13
How AI Is Transforming Indias Workforce for Global Competitivene — Artificial intelligence | Capacity development Policy, Governance, and Inclusion Strategies
S14
Strategy outline — – 5.1 Upgrade the role of the MoI, develop its texts and capacities, and render it a reliable source of information, par…
S15
How to make AI governance fit for purpose? — Economic and Social Impact Economic | Development The Trump administration believes AI will bring countless revolution…
S16
Skilling and Education in AI — The conversation began with a Professor’s detailed analysis of four critical sectors where AI can drive substantial impa…
S17
Artificial intelligence and diplomacy: A new tool for diplomats? — Artificial intelligence (AI) is transitioning from science fiction into our everyday lives. Over the past few years, the…
S18
Keynote-Bejul Somaia — Social and economic development | Artificial intelligence Transformative sectors
S19
Keynote by Marcus Wallenberg Chairman SEB & Saab — Marcus Wallenberg delivered a comprehensive discussion on AI development and the potential for Sweden-India collaboratio…
S20
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — Sweden’s partnership with India is presented as combining India’s scale and speed with Sweden’s precision and trust. Bus…
S21
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — This discussion features a keynote address by Swedish Deputy Prime Minister Ebba Bush at an AI Impact Summit in India, f…
S22
360° on AI Regulations — In conclusion, the analysis reveals that AI regulation is guided by existing laws, and there is a complementary nature b…
S23
State of Play: AI Governance / DAVOS 2025 — Clara Chappaz: So France will host the second global summit on AI on February 9th and 10th, 2025. So just a few weeks…
S24
Interim Report: — 67. A new mechanism (or mechanisms) is required to facilitate access to data, compute, and talent in order to develop, d…
S25
How AI Is Transforming Indias Workforce for Global Competitivene — Listen, And I was in the United States for a couple of years. And I was in the United States for a couple of years. And …
S26
INTRODUCTION — In the realm of development and application of Artificial Intelligence technologies, start-ups are a key element for…
S27
Keynote-N Chandrasekaran — AI has transformative potential across multiple sectors, from empowering disadvantaged populations to revolutionizing pu…
S28
How to make AI governance fit for purpose? — – Anne Bouverot- Shan Zhongde- Chuen Hong Lew- Gabriela Ramos Economic and Social Impact Economic | Development The T…
S29
Keynote-Bejul Somaia — Social and economic development | Artificial intelligence Transformative sectors
S30
Skilling and Education in AI — The conversation began with a Professor’s detailed analysis of four critical sectors where AI can drive substantial impa…
S31
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — The Global Partnership on AI (GPAI) has played a pivotal role in advancing the field of Artificial Intelligence (AI) bet…
S32
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And Prime Minister, we believe that nations should always build the strongest intelligence infrastructure and cross -bor…
S33
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — [translated from Hindi] If a machine is given only the goal of making paper clips, it could end up using all of the world’s resources for that one task …
S34
AI for food systems — Dejan Jakovljevic: Yakovlevich, CIO and Director, Digitalization and Informatics Division at the Food and Agriculture Or…
S35
Contents — 1 There is no one single, clear-cut or generally accepted definition of artificial intelligence, but many definitions. I…
S36
National Strategy for Artificial Intelligence — Wallenberg AI, Autonomous Systems and Software Program (WASP) is a Swedish research institution funded by the Knut and A…
S37
https://app.faicon.ai/ai-impact-summit-2026/keynote-by-marcus-wallenberg-chairman-seb-saab — And not only do they have a number of arenas where they base a certain amount of typical research that you can use for A…
S38
[Tentative Translation] — – ・ In an environment where talented young people can expect to be active in various fields such as academia, industry, …
S39
New Colours of Knowledge — Doctoral schools aim to achieve the following: excellence in research; the possibility for interdisciplinary research; a…
S40
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Kalyan argues that India’s historical strength in providing capability-for-hire must evolve toward building proprietary …
S41
The Role of Government and Innovators in Citizen-Centric AI — The discussion maintained an optimistic and collaborative tone throughout, with speakers expressing enthusiasm about AI’…
S42
IndoGerman AI Collaboration Driving Economic Development and Soc — This could accelerate innovation and create win-win opportunities for both countries’ entrepreneurial ecosystems.
S43
From Innovation to Impact_ Bringing AI to the Public — If we don’t make for it, our all compounded historical knowledge will be lacking in the next generation. So instead of a…
S44
World Economic Forum Panel: Sovereignty and Interconnectedness in the Modern Economy — Sharp disagreements emerged over trade approaches. Lutnick defended tariff threats as diplomatic tools designed to bring…
S45
Day 0 Event #251 Large Models and Small Player Leveraging AI in Small States and Startups — John M Lervik: Yes, excellent. So Cognite was founded at the beginning of 2017, and at that time, we saw a fundamental…..
S46
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — The control over and exploitation of such valuable data by larger firms requires careful consideration and regulation. A…
S47
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — But the second aspect of competition is really diffusion or adoption. As each country and the companies from each countr…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
3 arguments · 143 words per minute · 1534 words · 640 seconds
Argument 1
Comparative AI strategies of Sweden and India and potential collaboration
EXPLANATION
The speaker contrasts Sweden’s government‑backed, long‑term AI research programme (WASP) that produces a steady stream of PhDs with India’s emphasis on applied software engineering and a large global IT services customer base. He argues that these complementary strengths create an opportunity for a mutually beneficial Sweden‑India partnership in both research and commercial AI applications.
EVIDENCE
He describes Sweden’s AI effort as a ten-year programme started around 2015-2017, funded heavily to meet automation needs of Swedish industry, and notes that the programme graduates one PhD per week, highlighting the strong basic research capacity [8-12]. He then explains that India has not pursued the R&D route but has built a massive applied software engineering knowledge base and a worldwide customer base through its IT services companies [13-15]. Finally, he states that the Swedish delegation sees a “fantastic possibility” for collaboration, suggesting that Sweden’s research strength and India’s application expertise can be combined for AI initiatives [15-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Wallenberg’s keynote outlines Sweden’s research-focused WASP programme that graduates one AI-related PhD per week and contrasts it with India’s strength in applied software engineering and global IT services, while Deputy Prime Minister Busch emphasizes Sweden’s choice of India for its scale and speed, with Europe providing precision and trust [S4] [S5] [S6].
MAJOR DISCUSSION POINT
Sweden‑India AI partnership
Argument 2
AI as a driver of industrial competitiveness against low‑cost Chinese imports
EXPLANATION
The speaker warns that after recent tariff changes, cheap Chinese products have flooded global markets, threatening European and Swedish manufacturers. He contends that diffusing AI across large companies is essential to retain competitiveness, not only by cutting costs but also by enabling new business models and services.
EVIDENCE
He references the post-April 2 tariffs and the subsequent surge of inexpensive Chinese exports, describing it as a major challenge for European and Swedish firms [26-28]. He then argues that while AI is not a panacea, its diffusion into large enterprises will be key to competing with low-price Chinese competitors and to developing new services and products for global markets [31-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Wallenberg warns that a flood of cheap Chinese products threatens European firms and argues that diffusing AI into large companies is essential for competitiveness; a Chinese AI ecosystem that drives down costs through open-source models is described in the China report, offering an alternative perspective on the cost challenge [S5] [S7].
MAJOR DISCUSSION POINT
AI for industrial competitiveness
Argument 3
High‑impact AI applications in life sciences, defense, and telecommunications
EXPLANATION
The speaker highlights three sectors where AI is expected to have transformative effects: accelerating drug discovery and personalized medicine in life sciences; enhancing defense capabilities, exemplified by an AI‑controlled Gripen aircraft; and powering future 5G/6G networks that will manage massive data flows. These examples illustrate AI’s broad societal and economic impact.
EVIDENCE
He notes his board membership at AstraZeneca and predicts AI will speed molecule discovery and enable personalized medicine, expanding healthcare access [40-45]. He cites the Swedish defence company Saab’s use of AI in radar aircraft and describes a 2025 test where an AI agent fully controlled a Gripen aircraft [46-63]. He concludes by mentioning that future 5G/6G networks will be largely AI-driven, handling huge societal data traffic [66-67].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Wallenberg cites AI’s potential to accelerate molecule discovery and personalize medicine in life sciences, and broader remarks about AI’s role across many sectors are reflected in the high-level leaders session on AI applications [S5] [S9].
MAJOR DISCUSSION POINT
Sectoral AI impact
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Overall Assessment

The transcript contains only a single participant, identified as Speaker 1, who delivers the entire discussion from start to finish [1-67]. Consequently, there are no cross-speaker agreements, shared viewpoints, or unexpected consensus to identify. The speaker’s remarks are internally coherent, outlining three principal arguments (Sweden-India AI partnership, AI for industrial competitiveness, and high-impact sectoral AI applications) but these reflect a unilateral perspective rather than a multi-speaker consensus.

No consensus among multiple speakers; only a single viewpoint is presented, limiting the ability to infer collective agreement or joint policy direction.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only statements from Speaker 1, and the supplied list of arguments also reflects a single perspective. No other speakers are present, and therefore no contrasting viewpoints or debates are observable in the material provided.

Minimal to none. The absence of multiple participants means there is no substantive disagreement, implying that any policy or strategic discussion would need additional stakeholder input to surface divergent views.

Takeaways
Key takeaways
Sweden and India have complementary AI strengths: Sweden’s long‑term, government‑funded basic research program (WASP) producing a PhD each week, and India’s applied software engineering expertise and extensive global IT services customer base.

A strategic Sweden‑India partnership could accelerate AI research and its industrial application, leveraging each country’s assets.

Diffusing AI across large European and Swedish firms is seen as essential to remain competitive against a surge of low‑cost Chinese imports following recent tariff changes.

AI is expected to drive new business models, cost efficiencies, and the creation of novel products and services for global markets.

High‑impact AI use cases highlighted include rapid drug discovery and personalized medicine in life sciences, AI‑controlled defense platforms such as the Gripen aircraft, and AI‑driven management of future 5G/6G telecommunications networks.
Resolutions and action items
Explore and formalise collaborative research and application projects between Swedish institutions (e.g., the WASP program) and Indian IT services firms.

Encourage Swedish industry to integrate AI into their value chains to improve competitiveness against low‑cost Chinese products.

Investigate and support AI initiatives in the life‑science, defense, and telecommunications sectors, including pilot projects such as AI agents for aircraft control and AI‑enhanced drug discovery.
Unresolved issues
Specific mechanisms, funding models, and governance structures for a Sweden‑India AI partnership were not defined.

Details on how European firms will practically implement AI to counter Chinese price competition remain unclear.

Regulatory, ethical, and security considerations for AI deployment in defense and healthcare were mentioned but not addressed.

Timelines, milestones, and responsible parties for the suggested AI diffusion initiatives were not established.
Suggested compromises
None identified
Thought Provoking Comments
Sweden started a ten‑year‑plus research programme (WASP) that now graduates one PhD per week, building a strong basic AI knowledge base for industry.
Highlights a concrete, long‑term national investment in AI research and talent pipelines, setting a benchmark for how a country can systematically develop AI expertise.
Establishes the Swedish context and serves as a reference point for the rest of the talk; it prompts the later comparison with India’s more applied, services‑oriented AI development.
Speaker: Speaker 1
India, unlike Sweden, has not taken the R&D route but has built a massive applied software engineering base through its IT services companies, creating both knowledge and a global customer base.
Provides a contrasting model of AI capability development, emphasizing the strength of applied, market‑driven expertise over pure research.
Creates a pivot from the Swedish example to a potential synergy: the idea that Sweden’s research strength and India’s implementation strength could complement each other, steering the conversation toward collaboration.
Speaker: Speaker 1
When India starts moving, it becomes a very major force, offering a fantastic possibility to develop AI initiatives for customers worldwide.
Frames India not just as a partner but as a strategic engine that can amplify AI adoption globally, shifting the tone from descriptive to strategic.
Encourages listeners to view the Sweden‑India partnership as a lever for global competitiveness, setting up the next discussion about industrial challenges.
Speaker: Speaker 1
After the April 2 tariffs, we are seeing a flood of cheap Chinese exports; AI diffusion into large companies will be key to competing with those low‑price products.
Links AI adoption directly to macro‑economic competitiveness, moving the conversation from technical collaboration to geopolitical and market realities.
Marks a turning point where the focus shifts from partnership potential to urgent strategic imperatives, prompting consideration of AI’s role in cost‑efficiency, new business models, and defense of European industry.
Speaker: Speaker 1
The most worthwhile AI applications will be in life sciences—accelerating new molecule discovery, enabling personalized medicine, and delivering services that are currently impossible.
Identifies a high‑impact domain where AI can create societal value beyond industrial cost‑cutting, expanding the scope of the discussion to health and human welfare.
Broadens the conversation from pure industry competitiveness to public‑good outcomes, inviting listeners to think about AI’s transformative potential in medicine.
Speaker: Speaker 1
In defense, AI is already being used to process massive data sets; Saab’s radar aircraft and the 2025 AI‑controlled Gripen flight demonstrate mission‑critical AI integration.
Provides a concrete, high‑stakes example of AI in a traditionally safety‑critical sector, challenging any notion that AI is only a business efficiency tool.
Introduces a new, sensitive topic—defense—that adds complexity and urgency, suggesting that AI competence has national security implications and may influence policy discussions.
Speaker: Speaker 1
Future 5G and 6G networks will be AI‑driven; the massive data flowing through societies will be completely supported by AI.
Projects AI from a supporting role to an infrastructural backbone, emphasizing its pervasive future role in communications and society at large.
Concludes the monologue by tying together all previous points—research, industry, health, defense—under a unifying vision of AI as the core of next‑generation infrastructure, leaving the audience with a forward‑looking perspective.
Speaker: Speaker 1
Overall Assessment

Speaker 1’s monologue is structured around a series of strategic pivots that each introduce a fresh dimension to the AI narrative. Starting with Sweden’s research model, the speaker contrasts it with India’s applied expertise, then leverages that contrast to propose a synergistic partnership. The discussion then shifts to external pressures (Chinese low‑price competition), prompting a call for AI‑driven competitiveness. Subsequent pivots into life sciences, defense, and finally telecom broaden the scope from economic survival to societal transformation and national security. Each of these turning points deepens the conversation, reframes the stakes, and guides the audience toward a holistic view of AI as both a competitive advantage and an essential public utility.

Follow-up Questions
How can Sweden and India develop closer collaboration on AI research and applied software engineering, leveraging Sweden’s basic AI research and India’s IT services expertise?
The speaker highlights complementary strengths of the two countries and suggests exploring joint research and application projects, indicating a need for concrete partnership frameworks.
Speaker: Speaker 1
What strategies can Swedish and European companies adopt, using AI, to remain competitive against low‑cost Chinese exports in global markets?
The speaker points to the challenge posed by cheap Chinese products and implies that AI‑driven cost efficiency, new business models, and services could be a solution, warranting further investigation.
Speaker: Speaker 1
What are the most promising AI applications in life sciences, specifically for accelerating molecule discovery, enabling personalized medicine, and improving hospital services?
The speaker mentions AI’s potential in drug discovery and personalized treatment, indicating a need for detailed research into these domains.
Speaker: Speaker 1
How can AI be safely and effectively integrated into defense systems, such as mission‑critical control of aircraft and radar platforms?
References to AI agents controlling Gripen aircraft and extensive data analysis in defense suggest a requirement for deeper study of technical, ethical, and security aspects.
Speaker: Speaker 1
What will be the role of AI in the development and operation of future 5G and 6G telecommunications networks, especially regarding massive data handling and network optimization?
The speaker notes that upcoming telecom networks will be AI‑driven, prompting research into architecture, algorithms, and societal impacts.
Speaker: Speaker 1
What practical challenges and best practices exist for diffusing AI across large industrial enterprises, from pilot projects to full‑scale deployment?
The speaker emphasizes the importance of AI diffusion in industry but does not detail implementation pathways, indicating a gap that needs systematic study.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Responsible AI for Shared Prosperity


Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on initiatives to make artificial intelligence more inclusive and accessible across Africa and Asia, particularly through the development of AI systems that work in local languages. UK Deputy Prime Minister David Lammy outlined three major investments: supporting AI development in over 40 African languages, establishing Africa’s first dedicated public sector AI computer cluster at the University of Cape Town, and launching the Asia AI for Development Observatory. These initiatives are part of the broader AI for Development programme launched at Bletchley Park, created in partnership with Canada, Germany, Japan, Sweden, and the Gates Foundation.


Philip Thigo, Kenya’s Special Technology Envoy, emphasized that this represents a civilizational moment, arguing that the Global South has never lacked intelligence but has lacked the power to define how that intelligence is recognized and transmitted. He stressed that without representation in AI systems, African cultures and languages face an existential threat of extinction. Chenai Chair from the Masakani African Languages Hub explained their community-driven approach to impact one billion Africans through AI tools in 50 of the most spoken languages, focusing on data collection, research, innovation, and sustainability.


Shekar Sivasubramanian from Wadhwani AI described their work in India across 14-16 languages, emphasizing that utility and practical value are essential for adoption. German representative Barbel Kofler highlighted the importance of addressing bias in data and including diverse languages and dialects. The discussion also featured the announcement of Lingua Africa, a new multi-million pound initiative focused on creating open, community-governed language infrastructure for real-world AI applications. The panelists concluded that these efforts are essential for ensuring AI serves as a force for good that uplifts all of humanity rather than dividing it.


Keypoints

Major Discussion Points:

AI Language Inclusion Initiative: The UK government, in partnership with Canada, Germany, Japan, Sweden, and organizations like the Gates Foundation, is launching comprehensive programs to make AI accessible in over 40 African languages and support Asian language development, addressing the critical gap where current AI models predominantly serve English and other major languages.


Cultural and Civilizational Preservation: Speakers emphasized that language representation in AI is not just about technology access but about preserving entire civilizations, cultures, and ways of thinking, particularly for the Global South which has historically been an oral civilization at risk of being excluded from the “age of intelligence.”


Infrastructure and Computing Access: The discussion highlighted the creation of Africa’s first dedicated public sector AI computer cluster at the University of Cape Town and the significant barriers African researchers face in accessing computing power, with costs being exponentially higher than in developed countries.


Community-Driven Development: The Masakani African Languages Hub was presented as a grassroots, community-led initiative that started in 2019 without initial funding, focusing on building AI tools for 50 of the most spoken African languages to impact 1 billion Africans across health, education, and economic sectors.


Market Failure and Public Investment: Panelists discussed how private markets naturally invest in profitable languages (English, Mandarin) but fail to serve smaller resource languages, necessitating coordinated public and philanthropic investment to create these essential “public goods.”


Overall Purpose:

The discussion aimed to announce and explain new international collaborative initiatives designed to make AI inclusive and accessible across African and Asian languages, while addressing the digital divide and ensuring that AI development represents global linguistic and cultural diversity rather than just dominant languages and cultures.


Overall Tone:

The tone was consistently optimistic and collaborative throughout, with speakers expressing urgency about the civilizational importance of the work while celebrating partnerships and community-driven solutions. There was a sense of historical significance, with participants viewing this as a pivotal moment to prevent entire cultures from being excluded from AI development. The discussion maintained a diplomatic yet passionate quality, balancing technical details with broader humanitarian and cultural concerns.


Speakers

Speakers from the provided list:


David Lammy – Deputy Prime Minister of the UK


Philip Thigo – His Excellency Ambassador, Special Technology Envoy of the Government of Kenya


Shekar Sivasubramanian – CEO of Wadhwani AI


Barbel Kofler – Parliamentary State Secretary to the Federal Minister for Economic Cooperation and Development of Germany


Chenai Chair – Director of the Masakani African Languages Hub


Ankur Vora – Chief Strategy Officer and President of the Africa and India Office at the Gates Foundation


Julie Delahanty – President of Canada’s International Development Research Centre (IDRC)


Natasha Crampton – Chief Responsible AI Officer at Microsoft


Co-Moderator – Role/title not specified


Additional speakers:


None identified – all speakers mentioned in the transcript are included in the provided speakers names list.


Full session report: Comprehensive analysis and detailed insights

This discussion brought together international government officials, technology leaders, and development organisations to announce major new initiatives designed to make artificial intelligence accessible across African and Asian languages. The conversation featured concrete commitments and partnerships aimed at addressing the global AI divide through community-led development and strategic international collaboration.


Major Initiative Announcements

UK Deputy Prime Minister David Lammy opened the discussion by framing “the choice the world faces” in AI development: either allowing AI to benefit only those who speak dominant languages, or ensuring it works for everyone. He announced three major UK investments addressing this challenge.


First, the UK is supporting AI systems development for more than 40 African languages through what Lammy described as a “genuinely African-led initiative” that enables people to access AI in languages they use daily. Second, the government is investing in Africa’s first dedicated public sector AI compute cluster at the University of Cape Town, designed to address prohibitive costs and limited access to computing resources that African researchers currently face. Third, the launch of the Asia AI for Development Observatory creates a new network supporting research, responsible AI governance, and ensuring AI reflects people’s lived realities across Asia.


The discussion concluded with the announcement of Lingua Africa, a multi-million pound open call developed through partnership between Microsoft AI for Good, the Gates Foundation, and AI for Development partners. This initiative focuses on community-governed language infrastructure for healthcare, education, agriculture, and public services. Additionally, Lammy announced support for four additional start-ups through a GSMA Foundation partnership.


These initiatives operate within the broader AI for Development programme, launched when the UK hosted an AI summit at Bletchley Park. The programme brings together Canada’s International Development Research Centre, the Gates Foundation, and the governments of Germany, Japan, and Sweden in a coordinated multilateral effort.


Cultural Preservation and Civilizational Stakes

Philip Thigo, Kenya’s Special Technology Envoy, provided crucial reframing by characterizing the current moment as fundamentally civilizational. His argument that “we’re actually in the age of intelligence” moved the conversation beyond technical considerations to questions about cultural survival. Thigo emphasized that the Global South has never lacked intelligence but has historically lacked power to define how that intelligence is recognized, recorded, or transmitted.


Thigo’s observation that “our entire culture’s values have been coined in language” and that “the Global South is largely an oral civilisation” highlighted African cultures’ particular vulnerability in an AI-dominated future. Without representation in AI systems, he argued, these civilizations face “almost existential” risk. This positioned the language AI initiative not as development aid, but as correction of historical power imbalances.


When young people in Kenya—among the highest ChatGPT users globally—seek guidance from AI models, Thigo argued, responses should reflect their cultures rather than imposing external worldviews. This represents a fundamental shift from viewing AI as a neutral tool to understanding it as a system embedding particular cultural assumptions.


Community-Driven Development: The Masakhane Model

Chenai Chair from the Masakhane African Languages Hub detailed how community-driven AI development works in practice. The Masakhane initiative, beginning in 2019 as a grassroots effort without initial funding, exemplifies African communities taking ownership of their AI representation. The name “Masakhane,” meaning “to build together,” reflects the collaborative ethos driving the initiative.


The hub’s goal of impacting one billion Africans through AI tools in 50 of the most spoken languages demonstrates the challenge’s scale. Chair explained their approach focuses on four pillars: expanding and diversifying high-quality data, developing inclusive AI machine learning models, creating practical applications, and ensuring sustainability.


Chair emphasized linguistic complexity within individual languages, noting “the Shona I speak in Harare is not the same as the Shona spoken in Mutare.” This illustrates the nuanced understanding required for effective AI language development, going beyond simple translation to encompass dialectical variations, cultural contexts, and regional specificities.


The announcement of Project Echo—”enhancing communications for her opportunities”—represents an innovative approach to intersectional inequalities. This gender-responsive intervention acknowledges high gender inequality on the continent and specifically designs solutions supporting women’s economic empowerment and health outcomes.


Technical Infrastructure and Computing Barriers

Julie Delahanty from Canada’s International Development Research Centre presented evidence of infrastructure challenges facing African researchers. Her organization’s study demonstrated that computing capacity costs are exponentially higher in African countries compared to Germany or the UK, creating fundamental barriers to AI research participation.


The African Compute Initiative, establishing the first dedicated high-performance computing cluster for public institutions in Africa at the University of Cape Town, addresses these infrastructure gaps. The initiative includes modern GPUs, faster storage capacity, and improved networking—essential components for training large AI models and supporting initiatives like the Masakhane hub.


Natasha Crampton from Microsoft provided technical context for how computing power enables multilingual AI development. Creating language and culturally aware AI requires substantial computational resources for collecting and processing locally-led data, training models to incorporate new linguistic information, testing systems with local speakers, and supporting daily technology use. Microsoft’s role as computing enabler rather than content controller reflects recognition that trustworthy AI must be built through partnerships respecting local ownership of cultural and linguistic resources.


Market Failures and Public Intervention

Ankur Vora from the Gates Foundation provided economic analysis justifying coordinated public and philanthropic intervention in multilingual AI development. His assessment that “the markets are broken” offered rationale for why private sector investment alone cannot address smaller language communities’ needs. Private companies naturally invest in English and Mandarin models because those languages offer clear economic returns, but this logic leaves thousands of other languages underserved.


Vora’s argument that “when markets are broken, funders can get together and invest in public goods” provided the economic framework for understanding why the Gates Foundation, multiple governments, and international development agencies needed to collaborate. This moved discussion beyond moral arguments about inclusion to practical economic analysis about addressing market failures that systematically exclude certain communities from technological benefits.


Practical Implementation and User Value

Shekar Sivasubramanian from Wadhwani AI emphasized that successful AI adoption requires immediate, tangible user value. His organization’s work across 14-16 languages in India demonstrates how multilingual AI can be implemented at scale when providing genuine utility. Their applications in health surveillance, education assessment, and agricultural support show how language-aware AI can address real-world problems in immediately understandable ways.


Sivasubramanian’s principle that “it is very important in human contexts to provide some value to the person in any interchange” challenges the field to move beyond theoretical language preservation to practical applications providing immediate benefits. His examples—AI systems helping teachers assess student reading fluency in local languages or providing disease surveillance across multiple languages—demonstrate how technical capabilities translate into meaningful life improvements.


International Collaboration and Partnership

The discussion highlighted AI language initiatives’ diplomatic significance in strengthening international relationships and demonstrating commitment to global equity. Lammy’s emphasis on partnership with Canada, Germany, Japan, and Sweden reflects recognition that addressing AI inequality requires sustained international cooperation.


Bärbel Kofler’s participation as Germany’s parliamentary state secretary demonstrated European commitment to addressing AI data bias and including diverse languages and dialects. Her observation that she doesn’t speak “standard German” but uses a dialect different from Hamburg personalized the language diversity challenge, showing how even developed countries grapple with linguistic inclusion in AI systems.


Implementation Challenges and Future Directions

Several sustainability challenges emerged from the discussion. Questions about maintaining initiatives beyond current funding cycles remain unresolved. While Masakhane demonstrated that grassroots innovation can begin without formal funding, scaling to serve billions across thousands of languages requires sustained resource commitment extending beyond typical project timelines.


The balance between open-source development and community sovereignty presents ongoing challenges. While open-source approaches can accelerate innovation and reduce costs, communities need assurance that their linguistic and cultural data won’t be exploited without their control or benefit.


Conclusion

This discussion represented a comprehensive examination of how AI development can address rather than perpetuate global inequalities. The conversation elevated multilingual AI from technical challenge to framework for thinking about power, representation, and equity in artificial intelligence development.


The consensus among speakers—from government officials to community leaders to private sector representatives—demonstrated mature understanding of both challenges and solutions. Alignment on the need for public intervention to address market failures, the importance of community-led development, and the requirement for practical utility in AI applications provides a strong implementation foundation.


The initiatives described represent crucial intervention at a pivotal moment in AI development. The concrete commitments announced—from the University of Cape Town computing cluster to Lingua Africa’s multi-million pound investment—offer tangible steps toward ensuring AI serves linguistic and cultural diversity rather than concentrating benefits among speakers of dominant languages.


Success will be measured by real-world impact: reducing maternal mortality, supporting education, enabling economic empowerment for billions whose languages have been historically excluded from technological development. The collaborative model demonstrated in this discussion, sustained investment in technical infrastructure and community capacity, and continued attention to intersectional challenges provide the foundation for achieving these ambitious goals.


Session transcript: Complete transcript of the session
David Lammy

to make AI work in more than 40 African languages. This is a brilliant, genuinely African-led initiative which helps people to access AI in the languages that they actually use in their everyday lives. Second, we’re investing in Africa’s first dedicated public sector AI compute cluster at the University of Cape Town. Too many African researchers are held back by costs and a lack of access, and the hope is that this new hub will give them the computing power to build and train models locally. And third, we’re launching the Asia AI for Development Observatory. This is a new network to support research and responsible AI governance, to protect rights and to ensure AI reflects the realities of people’s lives across the region.

All of these initiatives are effectively part of our AI for Development programme, launched when we hosted the first of these AI summits back at Bletchley Park three years ago, and made in partnership with Canada’s International Development Research Centre and as part of a wider collaboration to coordinate investments with the Gates Foundation, the governments of Germany, Japan and Sweden, as well as Community Jameel. And as part of our partnership with the GSMA Foundation, we’re proud to announce support for four additional start-ups. These innovative businesses will harness responsible AI and will be able to support the development of AI for the future, to support the needs of underserved people across Asia and Africa. They include ToumAI in Morocco, which creates voice interfaces for local dialects to help low-literacy rural users access digital and financial services through simple spoken interactions.

And all of these initiatives will make a real difference to people across the continents of Africa and Asia. But I hope they’ll do a bit more than that. Yesterday, I spoke about the choice the world faces, the two paths before us: one which sees AI take power and opportunity away from people and sadly divides us, and one that sees AI used as a force for good to solve problems and uplift all of humanity. And the projects I’ve mentioned, the ones we’re going to hear about today, and the many new institutions and coalitions that are now emerging can help make sure we go down the right path, and that is a path of a safe AI, an inclusive AI and, importantly, an equitable AI for everyone.

So let’s turn now to our panel, an exceptional group of leaders from across India and Africa. I’m going to get an introduction to our panel members and then I’ll start with the first question.

Co-Moderator

Thank you. It’s my pleasure to introduce, joining our Deputy Prime Minister on the stage: His Excellency Ambassador Philip Thigo, Special Technology Envoy of the Government of Kenya; Dr. Bärbel Kofler, Parliamentary State Secretary to the Federal Minister for Economic Cooperation and Development of Germany; Shekar Sivasubramanian, CEO of Wadhwani AI; and Chenai Chair, Director of the Masakhane African Languages Hub.

David Lammy

And so to the first question, to Philip. We’re beginning to see, well, we all experience how large language models are affecting our lives on a daily basis. I certainly use, well, I don’t use ChatGPT but I use a secure network which isn’t taking my stuff, because for obvious reasons, as Deputy Prime Minister of the UK, I have to be a bit careful. But I’m using it to research and, you know, usually to really quickly get to something that I don’t fully understand.

So as we move this on to local languages and dialects, and across Africa we’ve got an estimated 2,000 languages, how do you see AI in local languages shaping the next phase of your country’s digital development? Having been to Kenya many times and knowing the many groups that are there, how does this really work on the ground?

Philip Thigo

Thank you so much, Right Honourable Deputy Prime Minister. I think this moment is so profound that I don’t think you guys are realising what is happening here. I think the first thing is to understand that we’re actually in the age of intelligence, right? So it’s not about ICTs or technology; it’s about how AI shapes how we live, learn, work, collaborate and engage. From our point of view, it’s a civilizational discussion. The Global South has never lacked intelligence, as we know, right? What it has lacked is the power to define how that intelligence is recognized, recorded, or transmitted. Because our entire culture’s values have been coined in language. The Global South is largely an oral civilization.

And so the current models lacking our language means our civilizations are at risk, almost existential risk, of becoming extinct. And I think this initiative, in our view, begins to ensure that we are represented in the current age of intelligence, but also that our intelligence is part of our global collective history and memory. And so how we engage in this is that now, with capabilities like Masakhane, it means that when I get into ChatGPT, which you refused to mention, young people in Kenya, who are the number one users of ChatGPT by the way, are not only seeking emotional advice or guidance from these models, but when they engage with these models, it actually also represents their cultures and civilization.

I think for me that’s how it works practically on the ground. First of all, it’s representation and existence. The second part, of course, is it also works when we have the entire stack. And you mentioned a couple of things around funding the models, but I think it would be interesting to see how we fund the compute, the talent that then influences and develops the data, develops the language, the research and development capability, which you mentioned in the first instance. And that was an amazing initiative, because then we need to build research capacity and capability, because talent development is the first instance of sovereignty. Then the final point, of course, is the specific use cases and languages, especially in the African context.

Again, you say 2,000. Each of the 2,000 is very context-specific. Kiswahili and Yoruba are not the same, and neither are their applications in the African context, nor their sense in even our history and cultures in Africa. So I think that capability, as diverse as it could be, also ensures that our diversity in the African continent is represented in the future models.

David Lammy

And that’s wonderful. And the first point you made is really about sort of seeing yourself in this story that the global community is going to be telling in relation to intelligence. And we know that in the past, Africa has been written out of that story. So it’s hugely important that African languages, intellect and history over thousands of years are in this storybook. So, Chenai, tell us then how the Masakhane African Languages Hub is addressing these issues and really working as a tangible, real thing.

Chenai Chair

Thank you so much, Minister. So I want to say that I am proud to be representing the Masakhane community that started in 2019, wanting to see their own languages represented in the global domain, and they did it by the bootstraps. No one wanted to fund them, and they came together and said, hey, how do I ensure that the language that I speak is captured digitally? So the Masakhane African Languages Hub emerges from that community-driven initiative, where our main goal is to impact 1 billion Africans through 50 of the most spoken languages with relevant AI tools that will allow for economic growth, health, and social benefit, while also working towards the preservation, and capturing the evolution, of African languages.

The 2,000-plus are growing, and so, even as Honourable Philip Thigo mentioned, the diversity of the language is growing. Like, I speak Shona, but the Shona I speak in Harare is not the same as the Shona spoken in Mutare. So it’s really capturing that diversity and nuance of that work. What we do, with support from the funding collaborative and the partners that we have, is actually think about enabling the ecosystem through partnerships and grant making. We specifically focus on four pillars of work. The first is around data: expanding and diversifying high-quality data. In 2019, there wasn’t as much data, but the Masakhane community actually started building up that data from the JW300 Bible dataset that had been created.

Secondly, we’re also looking at research. It’s important for us to take it on as an ecosystem intervention, where we are looking at developing and refining inclusive AI machine learning models, but also thinking about the tooling that’s resourcing these. So what we are working on specifically is a benchmark project, where we’re actually going to create a relevant African benchmark looking at speech and text, because the current benchmark models out there do not nuance the realities of the African context. And then also looking at innovation. So again, a lot of the questions that we’ve seen is: when you create the data, where does it go? Is it taken up in the market? What’s the impact?

And so for us, 40% of the funding that we have will go to creating use cases and impactful use cases. And one special mention I want to put forward is that we are working on a project called Project Echo, which means enhancing communications for her opportunities. This is a gender-responsive intervention that exists in the context of high gender inequality on the continent. What it does is provide relevant use cases in African languages that lead to impact on women’s economic empowerment and health. And that’s a significant part of us recognizing the context we exist in. And then lastly, we really are thinking about sustainability. Right now we’re in a moment where there is resourcing, where there is funding, and we also come from a moment where people were doing it without funding.

So we’re thinking about institutional capacity building for the African NLP community, which will actually then see businesses coming up from these open source models, people innovating off the data that’s created, and sustainability beyond the Masakhane community, which has been happening right now. But then this funding allows us to actually have African-led AI which is built for impact. Thank you very much.

David Lammy

Thank you very much. That centres Africa, but also, importantly, the fundamental inequality and gender issues that sit at the heart not just of Africa, but of making sure that women are a big part of this story. Shekar, bringing India into this and thinking of Wadhwani AI: we’re sitting here in the most populous country on the planet. There are also lots of languages and tremendous diversity, but also innovation and range across this country. So tell us how Wadhwani AI is working at the heart of that innovation here in India.

Shekar Sivasubramanian

Thank you. First, the work we do is applied AI, which means we solve for problems in health, education and agriculture. And we’ve been doing it for the last seven years. The moment you work in India, the very first design principle that you start with is the ability to be inclusive and embrace the entire population. So the dimensions of population we are looking at are language. So you start with at least 14 to 16 languages. You don’t even think of an application otherwise. Second, you also think of complete inclusivity, which means you need to think through the divide between rural and urban, the kinds of applications that will be delivered to people which will be of use to them. Third, our applications fundamentally must be useful to people.

Then they open out their ability to learn languages, which changes how they interface with technology that can actually be of use to them. So that utility value sits at the heart of everything that we do, which drives a lot of behavior both by us as well as the ecosystem. Just as an example, we do media disease surveillance. We’ve been doing it for a while. It picks up every article published in India, it runs every four hours, and it picks up events of interest in health in 16 languages, and it’s been running for the last two and a half to three years. It uses AI and it tells you: in this region, this many people got this disease at this time.

It runs every four hours and it tells the central government, if it’s a disease outbreak, what you should do. Another completely different example: we collect data from children in a couple of states, and that will expand to 14 to 16 states, where we have the largest dataset of spoken local language. Which again, in and of itself, is of no use. But when you provide something called oral reading fluency, which assists the poorest child to read a paragraph, and the AI tells you what you read well and what you did not read well, and assists the teacher to cohortize the students and provide them information, suddenly you cannot distinguish between the language and the application. It is very important in human contexts to provide some value to the person in any interchange.

If you can work the value, then adoption is easy. If you divorce the two, people don’t understand why I’m doing what I’m doing. It looks like an encumbrance. So for us, at the heart of our innovation is: what does it mean for the person? Independent of which, we do analysis on various languages. We’ve done one on Tibetan, where we’ve preserved their entire culture; we worked in Dharamshala, as well as in Karnataka. We digitized their entire library system and allowed the communities there to gain employment using it. Likewise, we plan to work on multiple less-used languages, pan-India. It’s our position, and we’ve got Agrivani, Healthvani, that everything we do is multilingual.

Everything that we do collects data. We have the largest datasets now, incidentally, from the work we do. It’s not what we do; it comes as a by-product of the work that we do. Over a period of time, it is my heartfelt, considered and humble opinion that these models using AI will take time. We should be ready to ride this for a period of time. We should be ready to invest in deep research and/or very utilitarian-based approaches so that you can take the community along with you. That is super important. The theory is interesting, the practice is different. There is a theory as to how to design roads in India. I will keep quiet after a bit.

David Lammy

Very, very good example. Obviously I talked about the UK as a donor country doing this in partnership with others, Canada, Sweden, but also the German government, and we’re joined by Bärbel Kofler, just to bring the donor perspective really to this and why this is so important.

Bärbel Kofler

Thank you very much, Deputy Prime Minister. Thank you. Oh, there’s no one? What did I do? Oh, thank you. I should put it on first, that’s true, yeah. Thank you very much, Deputy Prime Minister. I wouldn’t talk, when it’s coming to AI, in a manner of donor and recipient, because at the end of the day I think it’s a new technology where we all have to bridge if we really want to make it useful for everybody. And that’s also our interest, of course, from the German side. We see that AI can only really be the game changer to overcome inequality, to fulfil the promises of the SDGs. And that’s important for every country, not only for the global South; that’s important for everybody. It can only be that game changer if it is inclusive. That starts with data at the end of the day, and how biased data is. And if you talk about bias in data, language is quite close to it. You were pointing out how important it is and how you speak differently in various variations of your language. I really understand that: I don’t speak standard German normally, so I also use a dialect, and that’s quite different from Hamburg. So we all have something to include also, which is connected with a cultural momentum. And we see so many languages neglected, totally neglected, dialects, cultures, because it’s not only the language, it’s what the language is transporting also, which is neglected.

And that’s why we really try to be part of it. And we are very proud to be part of your initiative also. We started in 2019 discussing those topics, working on an initiative called FAIR Forward; that’s part of the initiative. And we’re working also with partner countries like India on collecting datasets, so really to collect the necessary data on those local languages, which at the end of the day then offers, or should offer, services to citizens in their mother tongue in multilingual countries or contexts, for example. So for us, it’s of utmost importance. Happy to be part of the initiative. We want to stay a reliable partner on that, and we will be part of that initiative.

And I hope the idea is spreading and growing. Thank you.

Co-Moderator

Thank you. We’ll now have a small changeover in our panellists. If I could ask for another big round of applause, please, for His Excellency Philip Thigo, Bärbel Kofler and Shekar Sivasubramanian. And now joining us on stage, we have Ankur Vora, Chief Strategy Officer and President of the Africa and India Office at the Gates Foundation; Julie Delahanty, President of Canada’s International Development Research Centre; and Natasha Crampton, Chief Responsible AI Officer at Microsoft. Thank you very much.

David Lammy

Back to Chenai. My understanding is that Masakhane is announcing a new multi-million pound partnered open call today for Lingua Africa. So can you tell us a little bit more about this initiative and the gap that it’s designed to close, effectively? And then why this moment is so important for African languages as a whole and AI?

Chenai Chair

Thank you so much, Deputy Prime Minister. So yes, I do have the honour, with my esteemed panellists, to actually announce Lingua Africa. With Masakhane, which I’ve said means to build together, we’ve been working with researchers and communities across the continent to close the gap in how African languages are being used and how African languages are represented in AI systems. What we’ve constantly seen, and I think I did mention this, is that it’s not just about data. It’s about whether the language resources actually translate into tools people can use, particularly in healthcare, education, agriculture and public services, because those are the developmental domains where we’re likely to have significant impact. So together with Microsoft AI for Good and the Gates Foundation, as well as our AI for Development partners, Lingua Africa will be a multi-partner open call focused on open, community-governed language infrastructure which will directly enable real-world AI applications.

I think a lot of the time, as we’re developing AI solutions, the question becomes: if we’re building them in a lab, will they work in the real world? And that’s also consistently part of what we’re doing with the benchmarking work. So how we’ll do this is through a use-case or impact-focused specific approach, where we will do model development, we will collect targeted data in those specific domains, and then also support strong pathways for deployment and adoption. So this is us working with multiple entities: the academic community, our partners here on stage with us, but also the tech entrepreneurs who are actually building up these solutions. And then for us, it’s quite simple.

The goal is to make sure that language is no longer a barrier to including people in these solutions, particularly if you think about digital public infrastructure interventions. They need to be in languages that people communicate with, because you will leave behind a majority of people if the solutions are in languages that they do not understand. So that is our most significant contribution right now. Thank you.

David Lammy

Thank you. And obviously, we’re very pleased in the UK to be partnering with the Gates Foundation on new support for linguistic diversity across AI. But just explain, Ankur, the role that the hub effectively has in that wider impact on the global South.

Ankur Vora

This is on? It's on, all right. Languages matter. Can you first join me in giving a big round of applause to Chenai for this amazing movement. It is kind of brilliant where we are at this moment in time. Let me talk about three whys. One is why care about language? The second is why care about investing in language? And the third is why care about investing in initiatives like Masakhane? The first one, I think everybody knows, but it's useful to repeat it, and many people have talked about this before: because we want to make sure that the power of AI actually changes lives. History is not going to remember us for the models we developed or the speeches we give here.

History is going to remember the impact we all had. We're talking about mothers and babies not dying. We're talking about the next generation growing up in a world without infectious diseases. We're talking about hundreds of millions of people escaping the clutches of poverty. Those kinds of things matter. And the solutions are there. They can get better. But we need to find a way for these solutions to be translated into those use cases. So that's why we need to care about this. Why invest in language? Because the markets are broken. The markets are broken. Private sector companies are investing in models developed in English and Mandarin. And it makes sense for the markets to do that.

Because that's where the economics work. But just because the economics don't work for low-resource languages doesn't mean that we shouldn't be investing in them. And that's the point: all of us need to get together and say we need to do something about it. When markets are broken, funders can get together and invest in public goods. And that's what we are all doing right now. The UK government, the Canadian government, the Japanese, Microsoft, IDRC, everybody is getting together and making the point that we need to invest in public goods, because these markets are broken and this is an important thing. And so I'm quite excited about the fact that we're all sitting on this panel, with the people in this audience, and saying that as we think about tomorrow, developing solutions in the right languages that can solve these problems will be a challenge that we will tackle. Thank you.

David Lammy

Thank you. I've got to say, as a politician, I like the idea that the state intervenes and doesn't just leave it to the marketplace to determine where to put the funds. But obviously, in terms of innovation, you have to work at the cutting edge, and that cutting edge is most often in the private sector, taking those risks to develop and innovate. Here, Microsoft and cloud computing are hugely important, Natasha, and it's important to understand the interaction between the cloud and the Masakhane hub, as it has been described so well by Chenai, and how important it is that computation and cloud technology help the innovation of these local languages.

Could you say a little bit more about that? But there is another subset to this, which is getting the balance right, so that the languages, and often the communities that we're supporting on the front line, have equity in this and don't lose their own sovereign capabilities, which has been a theme of this conference. So I wonder if you could just reflect on that as well.

Natasha Crampton

is an urgent priority. Our own analysis shows that AI is diffusing in the Global North at roughly double the rate that it is diffusing in the Global South at the current time, and that is exactly why we need partnerships like these, in order to start to put the right infrastructure in place to close that gap. Now, language is particularly important in terms of overcoming that AI divide. As we've heard many speakers say today, nobody is going to use AI if it does not speak the language that they speak and, importantly, if it does not work in the context, in the specific scenario, in which they need to use it.

Language-aware and scenario-aware AI is incredibly important to empowering people to put the technology to work in the use cases that mean the most to them. And that's why we're so thrilled to be partnering with Masakhane, as well as the Gates Foundation and the UK government, on this Lingua Africa initiative. So how does compute come into all of this? Quite simply, compute is the enabler of making language- and culturally aware AI. It's a critical component of it. When we take a base model that may have been trained, like most models, on datasets that are predominantly English, we need to make sure that we can do the responsible, locally-led data collection that Chenai was talking about earlier.

And then we need to do some further work on the models to essentially ingest that data, and that takes compute. Then, once we've actually made the model linguistically and culturally aware, it's very important that we test it with local language speakers and in the right scenarios.

That testing also takes compute. And then, finally, the day-to-day use of this technology also requires computing power. So we're really here today as an enabler of an Africa-led effort, by Africans for Africans, to create this linguistically and culturally aware technology, and compute fundamentally is just the enabler of it. My last thought to offer here today is that these types of initiatives really reinforce that trustworthy AI comes about because of the choices that we make, the ways in which we choose to build, test, and deploy these AI systems. For us at Microsoft, it's really important that we take all of those steps to represent the world as it is: multicultural, multilingual, and deeply interconnected. So we're thrilled to be part of this initiative.

Thank you very much.

David Lammy

And Julie, we've very much described this journey that we've been on since Bletchley Park, and we've talked about languages and about computing, which in a sense are the foundation stones of AI for communities. But just to round off this event, how do we see the opportunities going forward? In particular, where do we need to get to?

Julie Delahanty

Thank you. Thanks, everybody, for being here, and to the Deputy Prime Minister for welcoming us. We're incredibly proud at IDRC to be part of the AI4D initiative with the UK government, and to be partnering with the Masakhane African Languages Hub as well as the African Compute Initiative. And going back a little bit to the Microsoft view, I think it's very similar for us. Researchers in lower- and middle-income countries really have to have strong computing power to be able to do the kind of cutting-edge AI work that Chenai and others are doing. But right now, of course, they face a lot of barriers. We did a study that showed the dramatic increase in the cost of getting compute capacity, the difference between getting it in Germany and getting it in the UK.

But in an African country, the costs are exponentially larger. So the cost of computing, the local infrastructure that might be limited, and the GPUs, the hardware that really powers modern AI and is very difficult to access in African countries, all make it hard for researchers there to fully participate in global AI innovation. The African Compute Initiative is going to change all that, we hope. It is going to be the first dedicated high-performance computing cluster for public institutions in Africa. It will be based in South Africa, at the University of Cape Town. And that initiative is going to include modern GPUs, faster and better storage capacity, and much faster networking.

And it's that kind of computing power that is essential. It's essential for training large AI models. It's essential for testing new ideas more quickly, as you mentioned. And it has been, and will be, essential for things like the Masakhane African Languages Hub. Both the initiatives that I'm talking about are really responding to foundational gaps, whether that's compute capacity or the kinds of representative and robust datasets; both of those things are absolutely necessary. And if you don't have those foundations, then you can't contribute to AI systems. And if you can't contribute to AI systems, then you can't shape them. So it's absolutely critical for Africa's AI innovation that those foundational elements exist.

And I think the lessons that we're going to learn through a lot of this programming are going to help other regions and other lower-resource contexts to do that kind of work. And in terms of next steps, or the things that we can do with that, you can imagine some of the obvious things. Some people have already mentioned them, but things like having…

Co-Moderator

Thank you very much.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
David Lammy
1 argument · 110 words per minute · 1032 words · 557 seconds
Argument 1
UK investing in AI initiatives for African languages including 40+ language support, AI computer cluster at University of Cape Town, and Asia AI Development Observatory
EXPLANATION
The UK is making strategic investments to support AI development in the Global South through three key initiatives. These investments aim to make AI accessible in local languages and provide necessary computing infrastructure for African researchers.
EVIDENCE
Specific initiatives include: making AI work in more than 40 African languages through a genuinely African-led initiative, investing in Africa’s first dedicated public sector AI computer cluster at the University of Cape Town, and launching the Asia AI for Development Observatory as a new network to support research and responsible AI governance
MAJOR DISCUSSION POINT
International cooperation and investment in AI infrastructure for developing countries
AGREED WITH
Ankur Vora, Barbel Kofler
Philip Thigo
1 argument · 173 words per minute · 444 words · 153 seconds
Argument 1
Global South lacks power to define how intelligence is recognized and transmitted, with oral civilizations at existential risk without language representation in AI
EXPLANATION
The Global South has never lacked intelligence but has historically been unable to control how that intelligence is documented and shared. Since these cultures are largely oral and their values are embedded in language, the absence of their languages in AI models poses an existential threat to their civilizations.
EVIDENCE
The Global South is largely an oral civilization with cultures and values coined in language. Current AI models lacking these languages means civilizations are at risk of becoming extinct. Young people in Kenya are the number one users of ChatGPT and seek emotional advice from these models
MAJOR DISCUSSION POINT
Cultural preservation and representation in AI systems
AGREED WITH
Chenai Chair, Barbel Kofler, Natasha Crampton
Chenai Chair
2 arguments · 166 words per minute · 954 words · 343 seconds
Argument 1
Masakhane community started grassroots initiative in 2019 to represent African languages in AI, focusing on data, research, innovation and sustainability
EXPLANATION
The Masakhane African Languages Hub emerged from a community-driven initiative that began in 2019 when people wanted to see their languages represented digitally. The hub focuses on four pillars: expanding high-quality data, developing inclusive AI models, creating practical use cases, and building sustainable institutional capacity.
EVIDENCE
Started as a bootstrapped community effort in 2019 with no initial funding, built from JW300 Bible dataset, aims to impact 1 billion Africans through 50 most spoken languages, 40% of funding goes to creating use cases, includes Project Echo for women’s economic empowerment and health
MAJOR DISCUSSION POINT
Community-driven AI development and language preservation
AGREED WITH
Philip Thigo, Barbel Kofler, Natasha Crampton
Argument 2
Lingua Africa initiative announced as multi-partner open call focused on community-governed language infrastructure for healthcare, education, agriculture and public services
EXPLANATION
Lingua Africa is a new multi-partner initiative that focuses on creating open, community-governed language infrastructure to enable real-world AI applications in key developmental sectors. The initiative addresses the gap between language resources and practical tools that people can actually use.
EVIDENCE
Partnership with Microsoft AI for Good, Gates Foundation, and AI4D partners; focuses on use case/impact-specific approach with model development, targeted data collection, and strong deployment pathways; aims to ensure language is not a barrier to digital public infrastructure interventions
MAJOR DISCUSSION POINT
Practical implementation of multilingual AI solutions
AGREED WITH
Shekar Sivasubramanian, Ankur Vora
Shekar Sivasubramanian
1 argument · 166 words per minute · 637 words · 229 seconds
Argument 1
Applied AI must embrace complete inclusivity across languages, rural-urban divides, and provide fundamental utility to users for adoption
EXPLANATION
When working in India’s diverse context, AI applications must be designed from the start to be inclusive across multiple dimensions including language, geography, and utility. The key principle is that applications must provide clear value to users, as people won’t adopt technology they don’t find useful.
EVIDENCE
Wadhwani AI works with 14-16 languages as a basic requirement, runs media disease surveillance in 16 languages every 4 hours for government health monitoring, provides oral reading fluency assessment for children in local languages, has digitized Tibetan cultural libraries in Dharamshala and Karnataka
MAJOR DISCUSSION POINT
User-centered design and practical utility in AI applications
AGREED WITH
Ankur Vora, Chenai Chair
Barbel Kofler
1 argument · 144 words per minute · 391 words · 162 seconds
Argument 1
AI can only be a game changer to overcome inequality if it is inclusive, starting with addressing biased data and neglected languages/cultures
EXPLANATION
For AI to fulfill its promise of helping achieve the Sustainable Development Goals and overcoming inequality, it must be inclusive from the ground up. This inclusivity starts with addressing bias in data and ensuring that neglected languages and cultures are represented, as language carries cultural meaning beyond just words.
EVIDENCE
Germany’s Fair Forward initiative started in 2019, working with partner countries like India on collecting datasets for local languages to offer services to citizens in their mother tongue in multilingual contexts
MAJOR DISCUSSION POINT
Inclusive AI development and cultural representation
AGREED WITH
Ankur Vora, David Lammy
Julie Delahanty
2 arguments · 154 words per minute · 471 words · 183 seconds
Argument 1
African researchers face exponentially higher costs and limited access to computing power compared to developed countries
EXPLANATION
Researchers in lower and middle-income countries face significant barriers to accessing the computing power necessary for cutting-edge AI work. The cost differences between accessing compute capacity in developed countries versus African countries are exponentially larger, creating major obstacles to participation in global AI innovation.
EVIDENCE
IDRC study showed incredible increased costs of getting compute capacity, with exponentially larger costs in African countries compared to Germany and the UK; local infrastructure limitations and difficulty accessing GPUs that drive modern AI
MAJOR DISCUSSION POINT
Infrastructure barriers and computing access inequality
Argument 2
African Compute Initiative will provide first dedicated high-performance computing cluster for public institutions in Africa at University of Cape Town
EXPLANATION
The African Compute Initiative represents a groundbreaking infrastructure development that will change the computing landscape for African researchers. This initiative will provide the essential computing power needed for training AI models, testing ideas, and conducting research that has been previously inaccessible.
EVIDENCE
Will be based at University of Cape Town, include modern GPUs, faster and better storage capacity, much faster networking, essential for training large AI models and testing new ideas more quickly
MAJOR DISCUSSION POINT
Infrastructure development and research capacity building
AGREED WITH
David Lammy, Natasha Crampton
Natasha Crampton
2 arguments · 153 words per minute · 544 words · 212 seconds
Argument 1
Compute is the critical enabler for making language and culturally aware AI through data collection, model training, testing and deployment
EXPLANATION
Computing power is essential at every stage of creating linguistically and culturally aware AI systems. From collecting and processing local language data to training models, testing with local speakers, and daily deployment, each step requires significant computational resources that enable the transformation of base models into locally relevant tools.
EVIDENCE
AI diffusing in global north at roughly double the rate of global south; base models trained predominantly on English datasets require compute for local data collection, model adaptation, testing with local language speakers, and day-to-day use
MAJOR DISCUSSION POINT
Technical infrastructure requirements for inclusive AI
AGREED WITH
David Lammy, Julie Delahanty
Argument 2
Trustworthy AI requires representing the world as multicultural, multilingual and deeply interconnected
EXPLANATION
Building trustworthy AI systems depends on making deliberate choices about how to build, test, and deploy these systems in ways that reflect global diversity. This approach ensures that AI serves all communities rather than just those represented in dominant datasets and languages.
EVIDENCE
Microsoft’s analysis shows AI diffusing at double the rate in global north versus global south; emphasis on Africa-led efforts by Africans for Africans to create linguistically and culturally aware technology
MAJOR DISCUSSION POINT
Ethical AI development and global representation
AGREED WITH
Philip Thigo, Chenai Chair, Barbel Kofler
Ankur Vora
3 arguments · 150 words per minute · 415 words · 165 seconds
Argument 1
Private sector markets are broken as they only invest in English and Mandarin models where economics work, requiring public sector intervention
EXPLANATION
The private sector naturally focuses investment on AI models developed in English and Mandarin because that’s where the economic returns are strongest. However, this market-driven approach leaves smaller resource languages underserved, creating a clear case for public sector intervention to address market failures.
EVIDENCE
Private sector companies investing in models developed in English and Mandarin because that’s where economics work; markets don’t work for small resource languages despite the need
MAJOR DISCUSSION POINT
Market failures in AI development and need for public intervention
AGREED WITH
Barbel Kofler, David Lammy
Argument 2
When markets fail to serve small resource languages, funders must collaborate to invest in public goods
EXPLANATION
Since private markets won’t invest in AI for smaller languages due to poor economics, public and philanthropic funders need to work together to create public goods that serve these communities. This collaborative approach ensures that important social needs are met even when they’re not commercially viable.
EVIDENCE
UK government, Canadian government, Japanese, Microsoft, IDRC all getting together to invest in public goods because markets are broken for this important work
MAJOR DISCUSSION POINT
Collaborative funding models for public good AI initiatives
Argument 3
Success should be measured by actual impact on lives – reducing maternal mortality, eliminating diseases, escaping poverty – not just technical achievements
EXPLANATION
The true measure of AI initiatives should be their real-world impact on human welfare rather than technical milestones or model development. History will judge these efforts based on whether they actually improve lives by addressing fundamental challenges like health, disease, and poverty.
EVIDENCE
Specific examples given: mothers and babies not dying, next generation growing up without infectious diseases, hundreds of millions escaping poverty; emphasis that history won’t remember models or speeches but actual impact
MAJOR DISCUSSION POINT
Impact measurement and real-world outcomes of AI initiatives
AGREED WITH
Shekar Sivasubramanian, Chenai Chair
Co-Moderator
2 arguments · 41 words per minute · 328 words · 477 seconds
Argument 1
Large language models are affecting daily lives and need to be extended to local languages and dialects across Africa’s estimated 2,000 languages for meaningful digital development
EXPLANATION
The Co-Moderator acknowledges the current impact of AI tools like ChatGPT on daily activities such as research and information gathering. They emphasize the need to extend these capabilities to local African languages and dialects to ensure inclusive digital development that serves diverse linguistic communities.
EVIDENCE
Personal example of using secure AI networks for research purposes while being cautious about data security as Deputy Prime Minister; reference to Africa having an estimated 2,000 languages that need to be represented
MAJOR DISCUSSION POINT
Extending AI capabilities to local languages for inclusive development
Argument 2
Panel discussions require structured facilitation to ensure comprehensive coverage of multilingual AI initiatives and their real-world applications
EXPLANATION
The Co-Moderator demonstrates the importance of structured dialogue in bringing together diverse stakeholders to discuss complex AI initiatives. They facilitate transitions between different aspects of the discussion and ensure all panelists can contribute their expertise to the multilingual AI conversation.
EVIDENCE
Systematic introduction of panel members from different organizations (Kenya government, Germany, Wadhwani AI, Masakhane African Languages Hub); organized panel changeover to include representatives from Gates Foundation, IDRC, and Microsoft; structured questioning approach
MAJOR DISCUSSION POINT
Stakeholder coordination and structured dialogue for AI initiatives
Agreements
Agreement Points
Language representation in AI is critical for inclusion and preventing cultural extinction
Speakers: Philip Thigo, Chenai Chair, Barbel Kofler, Natasha Crampton
Global South lacks power to define how intelligence is recognized and transmitted, with oral civilizations at existential risk without language representation in AI
Masakhane community started grassroots initiative in 2019 to represent African languages in AI, focusing on data, research, innovation and sustainability
AI can only be a game changer to overcome inequality if it is inclusive, starting with addressing biased data and neglected languages/cultures
Trustworthy AI requires representing the world as multicultural, multilingual and deeply interconnected
All speakers agree that linguistic diversity in AI is not just a technical issue but an existential one for preserving cultures and ensuring equitable access to AI benefits
Computing infrastructure is essential for enabling multilingual AI development
Speakers: David Lammy, Julie Delahanty, Natasha Crampton
UK investing in AI initiatives for African languages including 40+ language support, AI computer cluster at University of Cape Town, and Asia AI Development Observatory
African Compute Initiative will provide first dedicated high-performance computing cluster for public institutions in Africa at University of Cape Town
Compute is the critical enabler for making language and culturally aware AI through data collection, model training, testing and deployment
There is strong consensus that adequate computing infrastructure is a prerequisite for developing and deploying multilingual AI systems effectively
Market failures require public sector intervention for multilingual AI
Speakers: Ankur Vora, Barbel Kofler, David Lammy
Private sector markets are broken as they only invest in English and Mandarin models where economics work, requiring public sector intervention
AI can only be a game changer to overcome inequality if it is inclusive, starting with addressing biased data and neglected languages/cultures
UK investing in AI initiatives for African languages including 40+ language support, AI computer cluster at University of Cape Town, and Asia AI Development Observatory
Speakers agree that private markets alone cannot address the needs of smaller language communities, necessitating coordinated public and philanthropic investment
AI must provide practical utility and real-world impact
Speakers: Shekar Sivasubramanian, Ankur Vora, Chenai Chair
Applied AI must embrace complete inclusivity across languages, rural-urban divides, and provide fundamental utility to users for adoption
Success should be measured by actual impact on lives – reducing maternal mortality, eliminating diseases, escaping poverty – not just technical achievements
Lingua Africa initiative announced as multi-partner open call focused on community-governed language infrastructure for healthcare, education, agriculture and public services
All speakers emphasize that AI initiatives must deliver tangible benefits to users in practical domains like health, education, and agriculture rather than being purely technical exercises
Similar Viewpoints
Both speakers from Africa emphasize the grassroots, community-driven nature of African language AI initiatives and frame this as a matter of cultural survival and self-determination
Speakers: Philip Thigo, Chenai Chair
Global South lacks power to define how intelligence is recognized and transmitted, with oral civilizations at existential risk without language representation in AI
Masakhane community started grassroots initiative in 2019 to represent African languages in AI, focusing on data, research, innovation and sustainability
Both speakers identify computing access as a fundamental barrier and enabler, with detailed technical understanding of how compute limitations prevent participation in AI development
Speakers: Julie Delahanty, Natasha Crampton
African researchers face exponentially higher costs and limited access to computing power compared to developed countries
Compute is the critical enabler for making language and culturally aware AI through data collection, model training, testing and deployment
Both speakers advocate for coordinated international public investment to address market failures in multilingual AI development
Speakers: Ankur Vora, David Lammy
When markets fail to serve small resource languages, funders must collaborate to invest in public goods
UK investing in AI initiatives for African languages including 40+ language support, AI computer cluster at University of Cape Town, and Asia AI Development Observatory
Unexpected Consensus
Private sector limitations in serving linguistic diversity
Speakers: Ankur Vora, Natasha Crampton
Private sector markets are broken as they only invest in English and Mandarin models where economics work, requiring public sector intervention
Trustworthy AI requires representing the world as multicultural, multilingual and deeply interconnected
It’s notable that both a Gates Foundation representative and a Microsoft executive openly acknowledge that private markets alone cannot solve the multilingual AI challenge, with the Microsoft representative explicitly supporting public goods approaches despite representing a major tech company
Community-led development as the preferred approach
Speakers: Philip Thigo, Chenai Chair, Natasha Crampton
Global South lacks power to define how intelligence is recognized and transmitted, with oral civilizations at existential risk without language representation in AI
Masakhane community started grassroots initiative in 2019 to represent African languages in AI, focusing on data, research, innovation and sustainability
Trustworthy AI requires representing the world as multicultural, multilingual and deeply interconnected
There’s surprising alignment between African community leaders and a major tech company representative on the importance of community-led, locally-controlled AI development, suggesting a shift away from top-down technology deployment models
Overall Assessment

The speakers demonstrate remarkable consensus across multiple dimensions: the critical importance of linguistic diversity in AI, the need for adequate computing infrastructure, the requirement for public sector intervention to address market failures, and the imperative for AI to deliver practical benefits. There is also strong agreement on community-led approaches and the existential nature of language representation in AI systems.

Very high level of consensus with no significant disagreements identified. This strong alignment suggests a mature understanding of the challenges and a coordinated approach to solutions. The implications are positive for implementation, as all stakeholders appear aligned on both problems and solutions, potentially leading to more effective collaborative initiatives and resource allocation.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion showed remarkable consensus among speakers on the fundamental challenges and goals of multilingual AI development, with no direct disagreements identified. The main areas of variation were in emphasis and approach rather than conflicting viewpoints.

Very low disagreement level. All speakers aligned on core issues: the need for multilingual AI, the importance of addressing market failures, the requirement for computing infrastructure, and the goal of practical impact. The variations in emphasis (technical vs. funding vs. community-driven approaches) actually complement each other and suggest a comprehensive multi-faceted strategy rather than conflicting approaches. This high level of consensus likely reflects the collaborative nature of the initiatives being discussed and the shared commitment to addressing AI divides.

Partial Agreements
All speakers agree that market failures require intervention to support multilingual AI development, but they emphasize different solutions: Vora focuses on collaborative public funding as the primary solution, Crampton emphasizes the technical infrastructure (compute) as the key enabler, while Delahanty highlights the need for dedicated computing clusters to address cost barriers
Speakers: Ankur Vora, Natasha Crampton, Julie Delahanty
Private sector markets are broken as they only invest in English and Mandarin models where economics work, requiring public sector intervention
Compute is the critical enabler for making language and culturally aware AI through data collection, model training, testing and deployment
African researchers face exponentially higher costs and limited access to computing power compared to developed countries
Both speakers agree on the critical importance of African language representation in AI systems, but they approach it from different angles: Thigo frames it as an existential civilizational issue requiring sovereignty over intelligence definition, while Chair focuses on practical community-driven solutions through systematic data collection, research, and sustainable implementation
Speakers: Philip Thigo, Chenai Chair
The Global South lacks the power to define how intelligence is recognized and transmitted, with oral civilizations at existential risk without language representation in AI.
The Masakhane community started a grassroots initiative in 2019 to represent African languages in AI, focusing on data, research, innovation, and sustainability.
Both speakers agree that multilingual AI must provide practical utility in key sectors like healthcare, education, and agriculture, but they differ in approach: Sivasubramanian emphasizes user-centered design with immediate utility as the primary driver for adoption, while Chair focuses on building systematic infrastructure and partnerships to enable widespread deployment
Speakers: Shekar Sivasubramanian, Chenai Chair
Applied AI must embrace complete inclusivity across languages and rural-urban divides, and provide fundamental utility to users for adoption.
The Lingua Africa initiative was announced as a multi-partner open call focused on community-governed language infrastructure for healthcare, education, agriculture, and public services.
Takeaways
Key takeaways
AI development must be inclusive of Global South languages and cultures to prevent civilizational extinction and ensure equitable representation in the age of intelligence.
Market failures require coordinated public sector intervention: private companies only invest in economically viable languages (English and Mandarin), leaving 2,000+ African languages underserved.
Computing infrastructure is a critical bottleneck: African researchers face exponentially higher costs and limited access to the GPUs and high-performance computing needed for AI development.
Language representation in AI is not just technical but existential: oral civilizations risk being excluded from global collective memory and intelligence systems.
Successful AI adoption requires utility-first design that provides immediate value to users in their local contexts, languages, and cultural frameworks.
Foundational gaps in both compute capacity and representative datasets must be addressed before communities can contribute to and shape AI systems.
Gender equity and cultural nuance must be embedded in AI development, recognizing that language carries cultural values, particularly in high-inequality contexts.
Resolutions and action items
Launch of Lingua Africa, a multi-partner open call initiative focused on community-governed language infrastructure for healthcare, education, agriculture, and public services.
Establishment of Africa’s first dedicated public sector AI compute cluster at the University of Cape Town through the African Compute Initiative.
Continued funding and partnership through the AI for Development programme with Canada, Germany, Japan, Sweden, the Gates Foundation, and other partners.
Support for four additional AI startups through the GSMA Foundation partnership to serve underserved populations in Asia and Africa.
Development of African-specific benchmark models for speech and text that reflect African contexts rather than existing biased benchmarks.
40% of Masakhane funding allocated specifically to creating real-world use cases and impact applications.
Project Echo implementation for gender-responsive AI interventions targeting women’s economic empowerment and health.
Unresolved issues
How to ensure long-term sustainability of language AI initiatives beyond current funding cycles.
Balancing open-source development with community sovereignty and preventing exploitation of local data and knowledge.
Scaling solutions across 2,000+ African languages with their diverse dialects and cultural contexts.
Addressing the growing AI diffusion gap between the Global North (where AI is diffusing at double the rate) and the Global South.
Ensuring that lab-developed AI solutions actually work effectively in real-world deployment scenarios.
Managing the tension between global AI development and local community control over cultural and linguistic resources.
Suggested compromises
Public-private partnerships where the private sector provides technical infrastructure (such as Microsoft’s compute resources) while the public sector and communities maintain control over cultural content and applications.
A collaborative funding model bringing together multiple governments, foundations, and organizations to share the costs and risks of supporting economically unviable but socially critical language AI development.
A hybrid approach combining open-source model development with community-governed deployment to balance innovation with local sovereignty.
Phased implementation starting with the most widely spoken languages while building capacity and infrastructure for smaller language communities.
Thought Provoking Comments
I think this moment is so profound that I don’t think you guys are realising what is happening here… we’re actually in the age of intelligence… The Global South has never lacked intelligence, as we know, right? So what it has lacked is the power to define how that intelligence is recognized, recorded, or transmitted… our entire culture’s values have been coined in language. The Global South is largely an oral civilization. And so the current models lacking our language means our civilization are at risk, almost existential, to be extinct.
This comment reframes the entire discussion from a technical challenge to an existential and civilizational issue. Thigo elevates the conversation beyond mere language inclusion to questions of cultural survival and power dynamics in defining intelligence itself. His distinction between lacking intelligence versus lacking power to define it is particularly profound.
This comment fundamentally shifted the discussion’s framing from technical implementation to civilizational preservation. It established the philosophical foundation that influenced all subsequent speakers, with David Lammy immediately picking up on the theme of ‘seeing yourself in this story’ and ensuring Africa isn’t ‘written out.’ This reframing elevated the urgency and moral imperative of the initiatives being discussed.
Speaker: Philip Thigo
Because the markets are broken. The markets are broken. Private sector companies are investing in models developed in English and Mandarin. And it makes sense for the markets to do that. Because that’s where the economics work. But just because the economics don’t work for the small resource languages doesn’t mean that we shouldn’t be investing in this… When markets are broken, funders can get together and invest in public goods.
This comment provides a clear economic rationale for why multilingual AI requires intervention beyond market forces. Vora’s blunt assessment that ‘markets are broken’ offers a compelling justification for public-private partnerships and donor involvement, moving beyond moral arguments to practical economic analysis.
This comment provided the economic framework that justified the entire collaborative approach being discussed. It validated David Lammy’s political perspective about state intervention and gave concrete reasoning for why organizations like Gates Foundation, UK government, and others needed to work together. It shifted the conversation from ‘should we do this’ to ‘how we organize to do this effectively.’
Speaker: Ankur Vora
It is very important in human contexts to provide some value to the person in any interchange. If you can work the value then the adoption is possible… If you divorce the two, people don’t understand why I’m doing what I’m doing. It looks like an encumbrance… at the heart of our innovation is what does it mean for the person.
This comment cuts through the technical complexity to focus on fundamental user experience principles. Sivasubramanian’s emphasis on immediate, tangible value challenges the field to move beyond theoretical language preservation to practical utility that people can immediately understand and benefit from.
This grounded the discussion in practical implementation reality. It influenced how other speakers framed their initiatives, with subsequent speakers emphasizing real-world applications in healthcare, education, and agriculture rather than just technical capabilities. It provided a user-centered design principle that became a thread throughout the remaining discussion.
Speaker: Shekar Sivasubramanian
Like I speak Shona, but the Shona I speak in Harare is not the same as the Shona spoken in Mutare. So it’s really capturing that diversity and nuance of that work… And one special mention I want to put forward is actually that we are working on a project called Project Echo, which means enhancing communications for her opportunities. This is a gender responsive intervention that exists in the context of high gendered inequality on the continent.
This comment adds crucial complexity to the language challenge, showing that it’s not just about including African languages but understanding dialectical variations within languages. The introduction of gender-responsive interventions also broadens the scope to address intersectional inequalities, not just linguistic ones.
This comment deepened the technical understanding of the challenge while simultaneously expanding the social justice framework. It influenced subsequent speakers to consider not just language inclusion but also gender equity and contextual variations. It demonstrated that the initiative was thinking beyond simple translation to nuanced, socially conscious AI development.
Speaker: Chenai Chair
Our own analysis shows that we have AI diffusing in the global north at roughly double the rate that we have it diffusing in the global south at the current time… nobody is going to use AI if it does not speak the language that you speak, and importantly, that it does not work in the context, in the specific scenario in which you need to use it.
This comment provides concrete data on the AI divide while emphasizing both linguistic and contextual awareness as requirements for AI adoption. The quantification of the disparity (double the rate) gives urgency to the discussion, while the emphasis on scenario-specific functionality adds another layer of complexity beyond language.
This comment provided empirical validation for the urgency of the initiatives being discussed and introduced the concept of ‘scenario-aware AI’ alongside language-aware AI. It helped justify the compute infrastructure investments and reinforced the need for local testing and development rather than just translation of existing models.
Speaker: Natasha Crampton
Overall Assessment

These key comments fundamentally elevated and reframed the discussion from a technical language processing challenge to a comprehensive examination of power, equity, and civilizational preservation in the age of AI. Philip Thigo’s opening reframing established the existential stakes, which created a moral urgency that permeated the entire discussion. Ankur Vora’s economic analysis provided the practical framework for intervention, while Shekar Sivasubramanian’s user-centered perspective grounded the conversation in implementation reality. Chenai Chair’s nuanced understanding of linguistic diversity and gender considerations added sophisticated complexity, and Natasha Crampton’s data-driven perspective provided empirical validation. Together, these comments transformed what could have been a straightforward policy announcement into a rich, multi-dimensional exploration of how AI development can either perpetuate or address global inequalities. The discussion evolved from describing technical solutions to examining fundamental questions about whose intelligence gets recognized, how markets fail marginalized communities, and what it means to build truly inclusive technology.

Follow-up Questions
How do we ensure that the 2,000+ African languages, each with their own context-specific applications, are adequately represented in AI models?
This addresses the challenge of linguistic diversity across Africa, where each language has unique cultural and contextual applications that need to be preserved and integrated into AI systems.
Speaker: Philip Thigo
How do we build the entire AI stack locally, including compute, talent development, research and development capability in African countries?
This relates to achieving AI sovereignty and ensuring African countries can develop their own AI capabilities rather than depending on external resources.
Speaker: Philip Thigo
How do we capture the growing diversity and evolution of African languages, including regional variations of the same language?
This addresses the dynamic nature of languages and the need to account for dialectical differences within the same language across different regions.
Speaker: Chenai Chair
How do we ensure that open source models and data created through these initiatives translate into sustainable businesses and long-term impact beyond initial funding?
This focuses on the sustainability and commercialization of AI language initiatives to ensure they continue beyond donor funding cycles.
Speaker: Chenai Chair
How do we bridge the theory-practice gap in AI applications, ensuring that utilitarian approaches work effectively in real-world community contexts?
This addresses the challenge of translating AI research and development into practical applications that provide genuine value to end users.
Speaker: Shekar Sivasubramanian
How do we address the exponentially higher costs of compute capacity in African countries compared to developed nations?
This highlights a critical infrastructure barrier that prevents African researchers from fully participating in AI innovation due to cost disparities.
Speaker: Julie Delahanty
How do we ensure that AI solutions developed in labs will actually work effectively in real-world deployment scenarios?
This addresses the gap between controlled development environments and practical implementation in diverse real-world contexts.
Speaker: Chenai Chair
How do we maintain the balance between leveraging private sector innovation and ensuring communities retain sovereign capabilities over their linguistic and cultural data?
This addresses the tension between utilizing advanced private sector AI capabilities while ensuring local communities maintain control over their cultural and linguistic assets.
Speaker: David Lammy

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion Data Sovereignty India AI Impact Summit


Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on the concept of data sovereignty in the age of AI, examining what it means for countries to maintain control over their digital infrastructure while remaining globally connected. The panel explored whether nations can be “sovereign yet connected,” with participants agreeing that complete isolation is neither realistic nor desirable given the interconnected nature of AI technology development.


Sunil Gupta emphasized that sovereignty doesn’t mean doing everything independently, but rather maintaining control over critical infrastructure like compute and data storage within national borders. He argued that countries need local infrastructure to serve their unique needs, such as India’s requirement for AI systems that work in multiple native languages and dialects. Gupta shared practical examples from Yotta, including migrating India’s AI language platform Bhashini from hyperscale cloud operators to local infrastructure while still leveraging global technologies like NVIDIA’s tools within controlled environments.


Nasubo Ongoma highlighted Africa’s perspective, noting that while the continent may lack compute capacity, it possesses valuable data and unique use cases. She emphasized the importance of building AI solutions for local contexts, citing examples like breast cancer detection systems designed specifically for African women’s physiology. Ongoma stressed the need for offline-capable AI solutions given Africa’s connectivity challenges.


Seema Ambastha provided a framework for understanding sovereignty through strategic control, operational efficiency, and supply chain trust. She advocated for treating digital infrastructure as critical national assets while maintaining transparency and traceability in design. The discussion concluded that effective data sovereignty requires partnership between governments, industry, and society, with the ultimate goal of serving real people’s needs while maintaining national control over critical digital infrastructure.


Keypoints

Major Discussion Points:

Definition and Balance of Sovereignty vs. Connectivity: The panel explored whether countries can be “sovereign yet connected,” concluding that sovereignty doesn’t mean isolation but rather strategic control over critical infrastructure while maintaining global partnerships and avoiding dependence on single entities or countries.


Infrastructure and Compute Sovereignty: Discussion centered on the necessity of having local compute infrastructure and data storage within national borders, with examples of migrating critical systems like India’s AI language platform from hyperscale cloud operators to local, controlled environments.


Local Use Cases and Design for Specific Needs: Panelists emphasized the importance of building AI solutions for local contexts and problems, such as voice-based AI in native languages for India or healthcare solutions designed for African populations, arguing that only local stakeholders can effectively address local challenges.


Trust and Supply Chain Management: The conversation highlighted that sovereignty requires engineered trust rather than paper-based agreements, focusing on transparent, traceable, and observable systems while managing global technology partnerships responsibly.


Public-Private Partnership Models: Discussion of how governments and industry must collaborate, with governments providing guardrails and policy stability while private sector focuses on innovation and scale, treating digital infrastructure as critical national assets similar to power grids or telecommunications.


Overall Purpose:

The discussion aimed to move beyond theoretical concepts of data sovereignty to practical implementation strategies, exploring how countries can maintain control over their digital destiny while leveraging global technologies and partnerships to serve their citizens effectively.


Overall Tone:

The tone was collaborative and pragmatic throughout, with panelists sharing real-world experiences and solutions rather than engaging in abstract debate. The conversation maintained a constructive, solution-oriented approach, emphasizing partnership over isolation and ending on an inspirational note about serving the most vulnerable populations through responsible AI development.


Speakers

Arghya Sengupta: Moderator/Host of the panel discussion on sovereignty and AI


Sunil Gupta: Running large data centers in India, associated with Yotta (data center company), expertise in compute infrastructure and sovereign AI implementation


Nasubo Ongoma: Associated with Kala, working on AI solutions in Africa (particularly Southern Africa), expertise in building AI innovations for African contexts and use cases


Seema Ambastha: Building large data centers in India and globally, expertise in critical infrastructure, sovereign AI compute infrastructure, and public-private partnerships


Additional speakers:


None identified beyond the provided speaker names list.


Full session report: Comprehensive analysis and detailed insights

This panel discussion examined the practical implementation of data sovereignty in the age of artificial intelligence, bringing together perspectives from Sunil Gupta from Yotta (data center company), Nasubo Ongoma from Kala, and Seema Ambastha, with moderation by Arghya Sengupta. The conversation moved beyond theoretical concepts to explore real-world strategies for maintaining national control over digital infrastructure while remaining globally connected.


Redefining Sovereignty: Strategic Control Over Isolation

The discussion began by challenging conventional notions of sovereignty, with moderator Arghya Sengupta framing the central question as “who gets to make the rules” rather than simply where data is stored. Sunil Gupta fundamentally reframed the sovereignty debate by arguing that “sovereignty for sure does not mean we become isolated and just try to do everything ourselves. It is a matter of what is the thing we need to control and what is the thing where we need to collaborate.”


This understanding recognizes the interconnectedness of AI technology development, where different countries and companies excel in various components. Gupta emphasized that true sovereignty means ensuring “as a country, we do not allow a single country or a single company to define our digital destiny for future,” establishing the principle of strategic control rather than complete self-reliance.


Infrastructure Control and Local Use Cases

The panelists converged on the critical importance of maintaining sovereign control over compute infrastructure. Gupta argued that “compute infrastructure, I strongly believe, has to be within your country, has to be within your control. That is where your data is getting processed, that is where data is getting stored, that is where your models are being made.”


He illustrated this with India’s specific needs for voice-based AI systems processing multiple native languages and dialects – requirements that global frontier models may not prioritize. Gupta noted that “for India use cases, possibly I need the focus to go on my use cases which can benefit masses at a larger scale,” suggesting that 95% of India’s AI requirements could be met with models containing 20 billion to 100 billion parameters rather than trillion-parameter frontier models.


Seema Ambastha reinforced this by advocating for treating digital infrastructure “like any other precious asset” comparable to “power grid port or telecom.” She distinguished between ownership and control, emphasizing the need for “visibility into ownership structures” and policies ensuring digital assets are not compromised externally.


African Innovation and Resource Constraints

Nasubo Ongoma provided a compelling perspective on how resource constraints can drive innovation. Acknowledging that Africa possesses only 1% of global computing capacity, she reframed this by emphasizing the continent’s unique assets: “we have data, we have use cases.” Her approach challenged deficit-based thinking by asserting that “we can define the rules by building the tools that actually work for the people in our context.”


Ongoma’s examples demonstrated the importance of local context in AI development, citing breast cancer detection systems designed specifically for African women, noting that “when you look at the composition of the breast tissue for African women, it’s different.” She also highlighted the need for AI systems capable of offline operation, given that digital connectivity reaches only 50% of Africa’s population.


Engineered Trust and Partnership Models

A crucial theme was the distinction between trust based on agreements and trust built into technical systems. Ambastha emphasized that “Trust is not paper-based. Trust can only be engineered, and it needs to be verified.”


Gupta illustrated this through Yotta’s experience migrating India’s AI language platform Bhashini from hyperscale cloud operators to sovereign infrastructure. When they discovered that NVIDIA’s NVCF software tool was still running on NVIDIA’s US-based platform, they successfully negotiated for NVIDIA to open-source that component and bring it within their controlled environment.


This example demonstrates what Gupta termed “partnership not dependence” – utilizing “the best of foreign technologies” including NVIDIA GPUs, Microsoft Azure tools, and Amazon technologies, but “not using these technologies in the public cloud.” Instead, these technologies operate “within my ring-fenced walls, within my GPU and CPU compute infrastructure” where “the access control of these technologies firmly lies with me.”


Government-Industry Collaboration

The discussion revealed the need for sophisticated collaboration between government and industry. Ambastha emphasized that achieving sovereignty represents “core accountability for every country” requiring government to establish sovereign guardrails, provide long-term policy stability, and offer commitment that gives private enterprises confidence to build substantial capacity.


However, she noted that “policies need to evolve along with the infrastructure” and are currently not developing at the same pace. She advocated for moving “security and regulatory, not at point-in-time checks, but move it to a continuous verification process.”


The panelists emphasized treating digital infrastructure as a national asset while avoiding nationalization. As Ambastha explained, “the goal is not to nationalise. I think the goal is assurance.”


Practical Implementation Strategies

The panelists shared concrete examples of sovereignty implementation. Gupta’s Bhashini migration demonstrated how critical national digital infrastructure could be moved from dependency on hyperscale operators to sovereign control while maintaining technological sophistication.


Ongoma described Kala’s approach of providing compute resources to African innovators while engaging with governments to ensure local use cases inform sovereignty policies. She noted that Kala works “with global partners to give us compute, but at the same time we also want to buy compute for ourselves” because understanding technical realities is essential for meaningful policy engagement.


Addressing Connectivity Challenges

Ongoma’s emphasis on developing AI systems that function without constant internet connectivity addresses a fundamental challenge in achieving inclusive AI deployment. This requirement, driven by Africa’s connectivity constraints, represents an innovation opportunity that could benefit other regions facing similar challenges.


Conclusion

The panel concluded with the moderator referencing Gandhi’s principle of thinking about those at the end of the queue, positioning sovereignty not as an abstract concept but as a practical requirement for ensuring AI serves all populations effectively. The discussion demonstrated that practical sovereignty is achievable through strategic partnerships, local capacity building, and government-industry collaboration, requiring sophisticated approaches that balance control with connectivity and local needs with global capabilities.


Session transcript: Complete transcript of the session
Arghya Sengupta

been used almost as much as AI in this session, the last three days, it’s been sovereignty. So I think it’s good that we get 24 minutes and 47 seconds to discuss what sovereignty is about. So I’ll jump straight in. We’ve got a great panel. And I think the key question of sovereignty is a question of who gets to make the rules. And the way in which sovereignty has been discussed is in terms of where data is stored. So we have a variety of viewpoints here, and I look to get some opening remarks from each of you. So Sunil, I’ll start with you. You’re running some very large and very impressive data centers in India. One term that we’ve often heard is sovereign yet connected.

So we want to be sovereign but connected. Is that realistic?

Sunil Gupta

No, as you said, there are different ideas, different theories, different narratives going on in sovereign. Everybody has their own take on sovereignty. And so many times, sovereignty is also confused with we will do everything ourselves. We’ll start looking inwards, we’ll isolate ourselves from the rest of the world and everything is done by us also. I think let’s understand any and every technology stack, AI is now the latest one, you will always have interconnectedness, interdependencies across the world. Somebody will be good at making chips, somebody will be making raw material for the chips like gases and maybe rare earths, somebody will be making models, somebody will have great data sets, somebody will be very good in making applications, agentic AI.

You will have, and of course capital flows, somebody will have lots of capital and somebody will be waiting for that capital and somebody will have talent. We all know where India is good at and where any other country is good at. So, sovereignty for sure does not mean we become isolated and just try to do everything ourselves. It is a matter of what is the thing we need to control and what is the thing where we need to collaborate. For sure, it definitely means that as a country, we do not allow a single country or a single company to define our digital destiny for future. Answering your second question, there are certain things which are fundamental.

Compute infrastructure, I strongly believe, has to be within your country, has to be within your control. That is where your data is getting processed, that is where data is getting stored, that is where your models are being made. Your needs as a country, forget control, your needs as a country are unique. You want to create a voice -based AI because majority of population may not be comfortable speaking in English or writing in English, but they’ll be very, very comfortable talking in their own native language. We all are very comfortable talking in native language. We have a mix of Hindi, English, Malayalam, Kannada, whatever native languages, and we mix up with English. So if we are able to talk to a device in my own native language with my own slang, and the device does all the processing and gives me my answer in real time in my own language, my slang, that is where the real benefit, that is where population scale benefits comes in.

Maybe the model builders of any other country may have a different viewpoint of how they want to adopt AI at a global level. So frontier models are good for those use cases, but for India use cases, possibly I need the focus to go on my use cases which can benefit masses at a larger scale. So both from control point of view that nobody else should tomorrow just switch off my access to digital infrastructure, also from the point of view that my priorities for my citizens can be different, I would rather like to have sovereign compute, right? And some of the models which are taken care of, let’s say, as Minister I think said in the last three, four days, in Devas also, that 95 % of the use cases which India requires can possibly be handled.

So I think that’s the goal. by having models which are 20 billion to let’s say 100 billion parameters. You don’t need to necessarily go for frontier models, trillions of parameters. So we build our compute here. We store our data here. We allow controlled data flow outside. We build the models which are satisfying 95 % of my need. That is what I need to do. But what we can do, and I give you our own example as Yotta. While on one side I’m building…

Arghya Sengupta

So Sunil, we’ll just come back to that. We’ll get everybody else in and then we’ll speak about your examples; I’m just mindful of time. So I think the takeaway is that, as far as the infrastructure layer is concerned, sovereignty in compute is not only desirable but perhaps possible. And as far as control is concerned, we should try to have control. But I’ll take that to you, Nasubo. Let’s look at the design layer. Sunil told us what the infrastructure is about, but sovereignty is also about who makes the rules in terms of how things are designed. And what Sunil said works for a very large country like India, where there are lots of builders.

But how does it translate to the rest of the world? Maybe some experiences from Kala as well.

Nasubo Ongoma

Excellent, thank you very much for this. When you think about, let’s say, Africa, we are disadvantaged in that we don’t have compute. When you look at the computing capacity, it’s at about 1%, so we are already at a disadvantage before we even leap forward and get ahead. But the one thing we have is data, and we have use cases. So when it comes to use cases, how are we able to design for our lived realities? Because, as he said, there is the language, and there are the local needs, the things that we want and can adopt. For example, look at the use case of health, and at how people in other sectors have been looking at health.

We’ve done a lot of work on designing for our needs in terms of breast cancer. We were able to get data sets from our lived context, knowing that the composition of breast tissue for African women is different. So those are the use cases that we need to look at. We can be confident and say that, yes, we don’t have compute, but we have the use cases, and that’s the important bit that we need to put into place. Inasmuch as we are disadvantaged, we have use cases, and we have the people who are able to build. That’s the one component that we never talk about. We always talk about, you know, the data getting there, someone else defining the rules.

But we can define the rules by building the tools that actually work for the people in our context, and being confident that once it works for our context, people are going to use it.

Arghya Sengupta

That’s right. I think that’s a really powerful statement, because at the end of the day, it’s only local people who have skin in the game who will build for local problems. And I think that’s where the opportunity also lies. So that’s a very critical intervention. And I’ll take that to you, Seema. We’ve discussed the infrastructure layer and the design layer, and I think it would be good to get a holistic perspective as far as critical systems, and sovereignty in critical systems, are concerned, especially because, as Sunil was saying, while we can certainly try to build compute and store data locally, it’s again a pipe dream to think that any country can do everything itself.

So there are, of course, questions of supply chains, trusted supply chains, who’s supplying what, and how that control is going to be exercised. So maybe a little bit from your experiences as to what sovereignty means for you, building a large data center, many large data centers now in India, but the rest of the world as well.

Seema Ambastha

So first of all, thank you. I’ll keep it quick and answer this in two parts. It’s a critical question at a critical moment; I think it’s an important question for this decade. The first question is: can you be connected and sovereign? Yes. I don’t see a problem at all with being sovereign and connected. What is important to understand there is that the strategic control that you need has to be sovereign and remain sovereign; I think that’s the definition of sovereign. You don’t need to really build everything yourself. If you want me to elaborate: what does government look like from its services? Public services, critical citizen data, financial networks, AI systems, an unlimited amount. We’re not talking of outsourcing here; what we’re talking of is critical national infrastructure, and I think it’s very important to define what it means, not in general but in specific. So let’s look at three things. One is ownership.

Is ownership very important across all the components in the supply chain and in the critical infrastructure for government? Not really. I think we need to define the extent to which you want to have ownership. Second is visibility into ownership structures. And third, and I think most important for all countries, developed, underdeveloped, developing, whatever it might be: it’s important for all of us to treat our digital assets like any other precious asset. And therefore, you have to have policies and guardrails that ensure that whatever you have in a sovereign or semi-sovereign infrastructure is not compromised externally, and that you have a degree of assurance even where you don’t have geopolitical leverage. I think that is important.

That defines sovereignty to a great extent. So what does it mean for industry? We have seen some really good models come up, right? There is the sovereign infrastructure model. I’ve seen some really good air-gapped, ring-fenced environments within commercial infrastructure, which has been very interesting. And of course, the public-private model, which still remains. What does it all mean? The goal is not to nationalize; the goal is assurance, which is most important. That’s number one, your strategic ownership question. The second is operational efficiency. Here, yes, the degree of sovereignty does matter. It goes well beyond the few definitions of infrastructure that we have.

I think what is important here is to ensure the extent of operational control, to look at the efficiencies of operational control, and at the components within operational control that can be sovereign. So what does it mean for industry? We need to build things that are transparent, traceable, and also observable. I think that is the core of your design; that is sovereign design. Then you decide how you want to implement it. So the second thing: what does it mean? It means trust. Trust is not paper-based. Trust can only be engineered, and it needs to be verified, in my opinion.

Okay, I’m quickly coming to the third question; you raised so many things. Supply chain trust, absolutely. Today, data sovereignty goes well beyond digital data. It goes into hardware, chipsets, network components, AI provenance, a whole lot of stuff. So in this case, industry cannot isolate itself; I do not believe in that. You need to forge very good global technology partnerships. That is important: again, another degree of trust. Of course, you can have some guardrails around it from the government, and you can govern that. What is most important in this case is to build some sovereign capacity.

Built domestically, because in the age of AI, I strongly believe that sovereign AI compute infrastructure has become a global leverage. So it is important, right? These are my takes. I also believe that national digital infrastructure, for any country, is like national infrastructure such as a power grid, a port, or a telecom network, so you treat it with that level of protection. Secondly, very good guardrails from the government to safeguard sovereignty and govern it. And industry should focus on innovation and not worry too much or try to own everything, because that slows down your transition and your aspiration of growth.

And this…

Arghya Sengupta

That’s great. And I think one underlying point that you made across these three is trust, because at the end of the day, you can’t build everything yourself. Sovereign nations don’t do everything themselves even in a non-AI, analog world, so it’s not that you’re going to do everything yourself. But sovereignty is only partly what we say; more importantly, I think, it is what we do. And so I want to take that to each of you in terms of what you are doing in your own domains, in your own companies, and where that line is: what am I going to do myself, what am I going to do with somebody else, and if so, how will I ensure that this person is trusted and that I have control?

So Sunil, you were telling us about Yotta and what you do; briefly, so that we can get the others in.

Sunil Gupta

Yeah, sure. So I’ll just go by the actual example of what we have done in the last two years or so. Last week we inaugurated and made open to the world India’s AI language platform, which I think every government entity is using, Bhashini. We actually migrated it from a hyperscale cloud operator to our cloud. It’s a combination of a whole lot of general compute services, AI, GPUs and so on, on which all those language models are working, giving real-time translation services. Now, considering that it is digital public infrastructure, they were very, very clear that at no point in time did they want to be dependent on the platform service of a hyperscale operator, because that creates stickiness: you cannot come out of that platform.

Whether it is a hyperscale platform or, for that matter, Yotta’s platform, they don’t want to remain dependent on only one entity. They want to be independent. They wanted a choice. So we ended up not only giving them the physical infrastructure, which was obviously local in my data center, but also creating almost, I can say, 30 or 40 different technologies we developed, putting them on virtual machines in their environment in my data center, and bringing them into their control. They were not using PaaS anymore. At the last, when everything was going live, we suddenly realized there was one component, NVCF, an NVIDIA software tool, which was still running on NVIDIA’s platform somewhere in the US, not in India. And they said: even though it is all fine, and NVIDIA is my biggest technology supplier, giving me GPUs, software, everything, no, this cannot go. This software component is very critical for this whole structure, but it has to come within your control, into my environment. So what we did after everything was done, and NVIDIA agreed, was to open source that part of the software and bring it into our environment, and now it is available and running within my control. This example is telling.

In the same breath, I’m telling you the same thing. I’m using the best of the foreign technologies. I’m using NVIDIA’s technologies. I’m using, of course, open source technologies. I’m using Microsoft technology; we have a great partnership with Azure. I’m using Amazon’s technologies. But I’m not using these technologies in the public cloud. I’m using their technology stack within my ring-fenced walls, within my GPU and CPU compute infrastructure. The access control of these technologies firmly lies with me. No third party is able to log into my system and control or dictate what will or will not be running. And that, I think, is the real balance: you use the best technologies. These companies have spent years and put billions of dollars into creating great technologies.

We must benefit from that. But you use these technologies within your environment, within your control.

Arghya Sengupta

That’s right. So I think partnership, and not dependence. What I’m interested in, Nasubo, and I’ll come to you on this, is what you’re doing at Kala. Because what Sunil is saying may work in a setting like India, which can perhaps tell NVIDIA that some part has to run locally. But I’m thinking of Malawi, of Lesotho or Eswatini, of smaller Southern African countries, which will also want to use AI for solving local problems. So what does it look like from your perspective, having done so much work in Southern Africa?

Nasubo Ongoma

Let’s just first ground what Africa has, right? How are we going to use compute in a way that also allows for offline? That is one of the use cases we are looking at, because as much as digital connectivity is everywhere in Africa, it’s only up to about 50%. So how do we also ensure that people are able to use it? One of the ways we do this is, one, working with global partners to give us compute, but at the same time we also want to buy compute for ourselves, because the conversations he’s talking about, creating the rules and the structure, can only be had once you also understand what is happening.

So at Kala, we are also offering compute to different innovators. And if you go to our stand in Hall 14, you can interact with different African innovators from the AI village who are building AI innovations. Part of those innovations allow for offline access. That is the one thing we need to be cognizant of: we need to understand how to work practically. That’s something that Kala is actively building and actively championing. So when we’re having conversations even with governments, we go to them and say: yes, compute is something we may not have, but if you approach, let’s say, big tech and you’re talking about offering compute, offering us…

being sovereign, this is what it means. So we are also having conversations with different African governments about what we are learning and what people are building, and once they have that understanding, we can continue ensuring that our use cases are well represented. Because if we just take things that are dictated to us without having a perspective, it means we are building for exclusion. And for us, we want to ensure that all voices are well represented, including the people who are offline, who want to use AI for solving use cases in our sectors.

Arghya Sengupta

That’s right, and this resonates greatly with the fact that you’re building for offline, because when we were doing Aadhaar in India, and the legal framework for Aadhaar, which underlies all our DPIs, one of the key game changers was moving from online authentication to offline verification, because we realized that was a big need. So this is where the Global South, I think, needs to learn from each other, because these problems are somewhat similar. Last word perhaps to you, Seema, in terms of your actual experiences in ensuring that you have control over whatever is within your ring fence, but that what’s outside is something you trust and believe will further the goal of sovereignty and sovereign AI, as you mentioned.

Something from your practical experiences.

Seema Ambastha

Let me just give you what most of us are doing and why it is pertinent and important. We are currently building for demand: gigawatt AI factories, a huge amount of compute, a huge number of data centers, and a lot of money, and it has to be done responsibly in all ways. What’s important is how this works between the government and the enterprise; I think that is the recipe for success. So there are three or four things on which I have a take. One, of course, is that the policies need to evolve along with the infrastructure; they are not moving at the same pace. That, I think, is important. The second thing is that government must lay the sovereign guardrails.

They are all spoken about, but you don’t have them, so it’s very difficult. Third, what is also important in every country, to help the industry build that capacity, is not only to have long-term stability in your policy, but also to show commitment, so that private enterprise, private industry, is confident of building that huge capacity for you. I think that’s very, very key. And last but not least, definitely look at security and regulation not as point-in-time checks, but move to a continuous verification process. This will ensure your sovereignty is implementable; you can also enforce it and get the best results out of it in terms of outcome.

My closing remarks. One, of course, I spoke before about how you treat this asset: you’ve got to treat it like any other national asset. Second, government needs to extend its hand to become an absolute sovereign partner, or public-private partner, to the industry. And third, industry needs to really focus on innovation, scale, time to value, time to market; that’s where your whole energy should go. And last but not least, this is a co-accountability for every country. It can’t be one over the other, right? That will ensure you safeguard your national interest and also scale and progress without compromising your transformation timelines.

So you’re not left behind. See, AI is a journey where we don’t want any country to be left behind, whether through lack of resources, lack of definitions, security, sovereignty, or access. I really like the theme: it says welfare for all and happiness for all, and that should really be the case if AI is so very transforming in nature.

Arghya Sengupta

That’s right. And if we were to quickly wrap up with some takeaways: the purpose of this session was that data sovereignty shouldn’t be just theoretical, a slogan. It has to work in practice, and what I took away from the three of you, who are actually walking the talk on data sovereignty, is, A, that the role of the market is essentially to build sovereign AI in whichever country you may be in: build it yourself, store locally, and ensure that you have trusted partners when you are partnering with someone, because it’s obviously futile to even think about doing everything yourself. As far as governments are concerned, again, and I like the word you used, co-accountability, this is a partnership.

And I think government has to build guardrails, but hand in hand with both the bazaar and the samaj, that is, the market and the society. And as far as the samaj, the society, is concerned, Kala mentioned that at the end of the day, we mustn’t forget that what we are trying to do is solve real problems for real people. As she mentioned, the breast tissue of an African woman is different from that of a woman somewhere else; that is the person we are trying to serve. And I think that is what is imperative for all of us to do. It’s appropriate to end with what Gandhiji said: that we must think about the last person in the line.

And when we are talking about AI, just because we are in a kind of modern technocratic age, we shouldn’t forget that it’s that last person, the man or woman in the queue, the most unfortunate, whom we must think about. Because at the end of the day, that is for whom AI is built, and that is for whom we are talking about sovereignty. So we leave it there. Thank you very much, ladies and gentlemen, and thank you to my panelists for a wonderful session. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Arghya Sengupta
4 arguments · 190 words per minute · 1213 words · 381 seconds
Argument 1
Sovereignty means controlling who makes the rules, not isolating from global partnerships
EXPLANATION
Sengupta frames sovereignty as fundamentally about control over decision-making rather than complete isolation. He emphasizes that sovereignty should be about having agency in rule-making while maintaining global connections and partnerships.
EVIDENCE
He opens the discussion by stating that sovereignty is ‘a question of who gets to make the rules’ and later emphasizes ‘partnership and not dependence’ as the key principle.
MAJOR DISCUSSION POINT
Definition and Scope of Data Sovereignty
Argument 2
Local people with skin in the game are best positioned to build solutions for local problems
EXPLANATION
Sengupta argues that only those who have direct stakes and understanding of local contexts can effectively develop solutions that address specific regional challenges. This principle supports the case for local sovereignty in AI development.
EVIDENCE
He responds to Nasubo’s examples by stating ‘it’s only local people who have skin in the game who will build for local problems. And I think that’s where actually the opportunity also lies.’
MAJOR DISCUSSION POINT
Local Needs and Use Cases
AGREED WITH
Sunil Gupta, Nasubo Ongoma
Argument 3
This is a co-accountability between government and industry, requiring partnership rather than isolation
EXPLANATION
Sengupta emphasizes that achieving data sovereignty requires collaborative efforts between government and private sector rather than either party working in isolation. He frames this as shared responsibility where both sectors must work together.
EVIDENCE
He uses the term ‘co-accountability’ and mentions that government must work ‘hand in hand with both the bazaar and the samaj, that’s the society and the market.’
MAJOR DISCUSSION POINT
Government-Industry Collaboration
AGREED WITH
Seema Ambastha
Argument 4
Focus should be on serving the most disadvantaged people, remembering that AI sovereignty ultimately serves real people with real problems
EXPLANATION
Sengupta concludes that the ultimate purpose of AI sovereignty should be to solve real problems for real people, particularly the most vulnerable. He emphasizes that technological advancement should not lose sight of its human purpose.
EVIDENCE
He references Gandhiji’s principle of thinking about ‘the last person in the line’ and states that ‘we must think about the last person, the man or woman in the queue. The most unfortunate who we must think about.’
MAJOR DISCUSSION POINT
Practical Implementation
S
Sunil Gupta
5 arguments · 183 words per minute · 1158 words · 379 seconds
Argument 1
Sovereignty doesn’t mean doing everything yourself but controlling what’s fundamental while collaborating on other aspects
EXPLANATION
Gupta argues that sovereignty is often misunderstood as complete self-reliance and isolation. Instead, he advocates for strategic control over fundamental elements while maintaining global partnerships and leveraging international expertise in areas where collaboration makes sense.
EVIDENCE
He explains that ‘sovereignty for sure does not mean we become isolated and just try to do everything ourselves’ and describes the interconnected nature of technology stacks where ‘somebody will be good at making chips, somebody will be making raw material for the chips.’
MAJOR DISCUSSION POINT
Definition and Scope of Data Sovereignty
AGREED WITH
Seema Ambastha
Argument 2
Compute infrastructure must be within national control as it’s where data is processed, stored, and models are built
EXPLANATION
Gupta strongly advocates that compute infrastructure represents a fundamental component that must remain under national control. He argues this is essential both for security reasons and to ensure that national priorities can be addressed effectively.
EVIDENCE
He states ‘Compute infrastructure, I strongly believe, has to be within your country, has to be within your control. That is where your data is getting processed, that is where data is getting stored, that is where your models are being made.’
MAJOR DISCUSSION POINT
Infrastructure and Compute Sovereignty
AGREED WITH
Seema Ambastha
DISAGREED WITH
Nasubo Ongoma
Argument 3
Countries need AI models focused on local languages, dialects, and cultural contexts rather than just frontier models
EXPLANATION
Gupta emphasizes that different countries have unique needs that may not be served by global frontier models. He advocates for developing AI solutions that cater to local languages, cultural contexts, and specific use cases that benefit the local population at scale.
EVIDENCE
He provides the example of voice-based AI for India: ‘if we are able to talk to a device in my own native language with my own slang, and the device does all the processing and gives me my answer in real time in my own language, my slang, that is where the real benefit, that is where population scale benefits comes in.’
MAJOR DISCUSSION POINT
Local Needs and Use Cases
AGREED WITH
Nasubo Ongoma, Arghya Sengupta
Argument 4
Partnership without dependence is key – using best foreign technologies within controlled, ring-fenced environments
EXPLANATION
Gupta advocates for a balanced approach where countries can leverage the best global technologies while maintaining control and avoiding dependency. This involves using foreign technology stacks within sovereign infrastructure rather than relying on external platforms.
EVIDENCE
He describes Yotta’s approach: ‘I’m using the best of the foreign technologies… But I’m not using these technologies in the public cloud. I’m using their technology stack within my ring-fenced walls, within my GPU and CPU compute infrastructure.’
MAJOR DISCUSSION POINT
Trust and Partnership Models
AGREED WITH
Seema Ambastha
DISAGREED WITH
Nasubo Ongoma
Argument 5
Moving critical systems like India’s AI language platform from hyperscale clouds to sovereign infrastructure while maintaining technology partnerships
EXPLANATION
Gupta provides a concrete example of how sovereignty can be implemented in practice by migrating critical digital public infrastructure from foreign cloud platforms to domestic infrastructure while still leveraging international technology partnerships.
EVIDENCE
He details the migration of India’s Bhashini platform from a hyperscale cloud operator to Yotta’s infrastructure, including the development of 30-40 different technologies and bringing even NVIDIA’s NVCF software component under local control through open sourcing.
MAJOR DISCUSSION POINT
Practical Implementation
S
Seema Ambastha
6 arguments · 148 words per minute · 1247 words · 503 seconds
Argument 1
Strategic control needs to be sovereign, not necessarily ownership of all supply chain components
EXPLANATION
Ambastha argues for a nuanced approach to sovereignty that focuses on maintaining strategic control over critical elements rather than trying to own every component of the supply chain. She emphasizes the importance of defining what specifically needs to be sovereign versus what can be collaborative.
EVIDENCE
She states ‘strategic control that you need needs to be sovereign’ and explains that ‘ownership. Is ownership very important across all the components in the supply chain and in the critical infrastructure for government? Not really.’
MAJOR DISCUSSION POINT
Definition and Scope of Data Sovereignty
Argument 2
Critical national infrastructure should be treated like other precious national assets with appropriate policies and guardrails
EXPLANATION
Ambastha advocates for treating digital infrastructure with the same level of importance and protection as traditional national infrastructure like power grids or telecommunications. This requires comprehensive policies and protective measures.
EVIDENCE
She compares national digital infrastructure to ‘like a power grid port or a telecom’ and emphasizes the need to ‘treat like our digital assets like any other precious asset’ with ‘policies, guardrails that ensure whatever you have in a sovereign or semi-sovereign infrastructure is not compromised externally.’
MAJOR DISCUSSION POINT
Infrastructure and Compute Sovereignty
AGREED WITH
Sunil Gupta
Argument 3
Trust must be engineered and verified, not just paper-based, requiring transparent and traceable systems
EXPLANATION
Ambastha emphasizes that trust in sovereign systems cannot be based merely on agreements or documentation but must be built into the technical architecture. She advocates for systems that are inherently transparent, traceable, and verifiable.
EVIDENCE
She states ‘trust is not paper-based. Trust can only be engineered, and it needs to be verified’ and calls for building ‘things that are transparent, traceable, and also observable.’
MAJOR DISCUSSION POINT
Trust and Partnership Models
AGREED WITH
Sunil Gupta
Argument 4
Global technology partnerships are important but should maintain sovereign capacity and control
EXPLANATION
Ambastha supports international collaboration while maintaining that countries should build domestic sovereign capacity. She argues against isolation but emphasizes the need for strategic partnerships that don’t compromise national control.
EVIDENCE
She states ‘you can’t isolate yourself’ and advocates to ‘forge very good technology global partnerships’ while building ‘some sovereign capacity’ because ‘sovereign AI compute infrastructure has become a global leverage.’
MAJOR DISCUSSION POINT
Trust and Partnership Models
AGREED WITH
Sunil Gupta
Argument 5
Government must establish sovereign guardrails and provide long-term policy stability for private investment
EXPLANATION
Ambastha argues that government has a crucial role in creating the policy framework and guardrails necessary for sovereignty while providing the stability needed for private sector investment in sovereign infrastructure.
EVIDENCE
She emphasizes that ‘government must lay the sovereign guardrails’ and provide ‘long-term stability of your policy’ along with ‘commitment so that private enterprises, private industry is confident of building that huge capacity for you.’
MAJOR DISCUSSION POINT
Government-Industry Collaboration
AGREED WITH
Arghya Sengupta
Argument 6
Policies need to evolve alongside infrastructure development
EXPLANATION
Ambastha points out that there’s often a mismatch between policy development and infrastructure deployment, arguing that both need to progress in tandem for effective sovereignty implementation.
EVIDENCE
She notes that ‘policies need to evolve along with the infrastructure. They are not based at the same’ pace.
MAJOR DISCUSSION POINT
Government-Industry Collaboration
N
Nasubo Ongoma
5 arguments · 168 words per minute · 679 words · 241 seconds
Argument 1
Sovereignty includes building tools that work for local contexts and use cases
EXPLANATION
Ongoma argues that sovereignty involves the ability to design and build solutions that address specific local needs and contexts. She emphasizes that having the capability to create contextually relevant tools is a key component of digital sovereignty.
EVIDENCE
She discusses designing ‘for our lived realities’ and mentions ‘the local needs, what are the things that we want that we can adopt’ as important aspects of sovereignty.
MAJOR DISCUSSION POINT
Definition and Scope of Data Sovereignty
Argument 2
Africa is disadvantaged with only 1% computing capacity but can leverage data and use cases for sovereignty
EXPLANATION
Ongoma acknowledges Africa’s significant disadvantage in computing infrastructure but argues that the continent has valuable assets in the form of data and unique use cases that can be leveraged to build sovereignty despite the compute deficit.
EVIDENCE
She states ‘when you look at the computing capacity it’s like at 1% so already we are at disadvantage’ but emphasizes ‘the one thing we have is we have data, we have use cases.’
MAJOR DISCUSSION POINT
Infrastructure and Compute Sovereignty
DISAGREED WITH
Sunil Gupta
Argument 3
Africa has unique use cases like healthcare solutions designed for African contexts (breast cancer detection for African women)
EXPLANATION
Ongoma provides specific examples of how local contexts require specialized solutions, demonstrating that global AI models may not adequately serve local populations without consideration of regional differences and specific needs.
EVIDENCE
She gives the concrete example of breast cancer detection: ‘We were able to get data sets from our lived context, knowing that when you look at the composition of the breast tissue for African women, it’s different.’
MAJOR DISCUSSION POINT
Local Needs and Use Cases
AGREED WITH
Sunil Gupta, Arghya Sengupta
Argument 4
Building for offline access is crucial in regions with limited digital connectivity
EXPLANATION
Ongoma emphasizes the importance of developing AI solutions that can function without constant internet connectivity, recognizing that many regions in Africa have limited digital infrastructure and connectivity challenges.
EVIDENCE
She notes that ‘digital connectivity is everywhere in Africa, it’s up to like 50%’ and discusses ‘innovations that allow for offline access’ as a key requirement for practical AI deployment.
MAJOR DISCUSSION POINT
Local Needs and Use Cases
Argument 5
Offering compute to local innovators and engaging with governments to represent use cases in sovereignty discussions
EXPLANATION
Ongoma describes Kala’s practical approach to building sovereignty by providing computing resources to local developers and actively engaging with governments to ensure that real-world use cases inform policy discussions about sovereignty.
EVIDENCE
She mentions that ‘at Kala, we are also offering compute to different innovators’ and describes having ‘conversations with different African governments to talk about what we are learning, what people are building.’
MAJOR DISCUSSION POINT
Practical Implementation
DISAGREED WITH
Sunil Gupta
Agreements
Agreement Points
Sovereignty does not mean complete isolation or doing everything yourself
Speakers: Sunil Gupta, Seema Ambastha
Sovereignty doesn’t mean doing everything yourself but controlling what’s fundamental while collaborating on other aspects. Global technology partnerships are important but should maintain sovereign capacity and control.
Both speakers agree that sovereignty should involve strategic partnerships and collaboration rather than complete self-reliance, emphasizing the importance of maintaining control over critical elements while leveraging global expertise
Trust must be engineered and verifiable, not just based on agreements
Speakers: Sunil Gupta, Seema Ambastha
Partnership without dependence is key – using best foreign technologies within controlled, ring-fenced environments. Trust must be engineered and verified, not just paper-based, requiring transparent and traceable systems.
Both speakers emphasize that trust in sovereign systems must be built into technical architecture and operational practices, requiring transparent, traceable, and verifiable systems rather than relying solely on contractual agreements
Local contexts and use cases require specialized solutions
Speakers: Sunil Gupta, Nasubo Ongoma, Arghya Sengupta
Countries need AI models focused on local languages, dialects, and cultural contexts rather than just frontier models. Africa has unique use cases like healthcare solutions designed for African contexts (breast cancer detection for African women). Local people with skin in the game are best positioned to build solutions for local problems.
All three speakers agree that global AI models and solutions may not adequately serve local populations, emphasizing the need for AI development that considers regional languages, cultural contexts, and specific local challenges
Government-industry collaboration is essential for sovereignty
Speakers: Seema Ambastha, Arghya Sengupta
Government must establish sovereign guardrails and provide long-term policy stability for private investment. This is a co-accountability between government and industry, requiring partnership rather than isolation.
Both speakers emphasize that achieving data sovereignty requires collaborative efforts between government and private sector, with government providing policy frameworks and guardrails while industry focuses on innovation and implementation
Infrastructure sovereignty is fundamental for national control
Speakers: Sunil Gupta, Seema Ambastha
Compute infrastructure must be within national control as it’s where data is processed, stored, and models are built. Critical national infrastructure should be treated like other precious national assets with appropriate policies and guardrails.
Both speakers agree that compute infrastructure represents a critical component that must remain under national control, comparing it to other essential national infrastructure like power grids or telecommunications
Similar Viewpoints
Both speakers advocate for a nuanced approach to sovereignty that focuses on strategic control over critical elements rather than complete ownership or isolation from global partnerships
Speakers: Sunil Gupta, Seema Ambastha
Sovereignty doesn’t mean doing everything yourself but controlling what’s fundamental while collaborating on other aspects. Strategic control needs to be sovereign, not necessarily ownership of all supply chain components.
Both speakers emphasize that sovereignty involves the capability to design and build solutions that address specific local needs and contexts, with local stakeholders being best positioned to understand and solve local challenges
Speakers: Nasubo Ongoma, Arghya Sengupta
Sovereignty includes building tools that work for local contexts and use cases. Local people with skin in the game are best positioned to build solutions for local problems.
Both speakers demonstrate practical approaches to implementing sovereignty by providing local computing infrastructure and engaging with stakeholders to ensure real-world use cases inform sovereignty strategies
Speakers: Sunil Gupta, Nasubo Ongoma
Moving critical systems like India’s AI language platform from hyperscale clouds to sovereign infrastructure while maintaining technology partnerships. Offering compute to local innovators and engaging with governments to represent use cases in sovereignty discussions.
Unexpected Consensus
Embracing global technology partnerships while maintaining sovereignty
Speakers: Sunil Gupta, Seema Ambastha, Nasubo Ongoma
Partnership without dependence is key – using best foreign technologies within controlled, ring-fenced environments. Global technology partnerships are important but should maintain sovereign capacity and control. Offering compute to local innovators and engaging with governments to represent use cases in sovereignty discussions.
Despite representing different regions and contexts (India’s large market, global enterprise perspective, and Africa’s resource constraints), all speakers converged on the idea that sovereignty can coexist with international partnerships. This consensus is unexpected given typical sovereignty discussions often emphasize isolation or complete self-reliance
Focus on serving disadvantaged populations through sovereign AI
Speakers: Nasubo Ongoma, Arghya Sengupta
Building for offline access is crucial in regions with limited digital connectivity. Focus should be on serving the most disadvantaged people, remembering that AI sovereignty ultimately serves real people with real problems.
The consensus on prioritizing offline capabilities and disadvantaged populations in sovereignty discussions is unexpected in a high-level policy discussion, as such conversations often focus on technical and geopolitical aspects rather than inclusive development
Overall Assessment

The speakers demonstrated remarkable consensus on key principles of data sovereignty: the importance of strategic control over critical infrastructure, the need for local solutions to address specific contexts, the value of global partnerships without dependence, and the requirement for government-industry collaboration. They agreed that sovereignty should serve real people and solve local problems while maintaining global connectivity.

High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications suggest a mature understanding of sovereignty that balances national control with global collaboration, practical implementation with policy frameworks, and technological capabilities with social needs. This consensus provides a strong foundation for developing practical sovereignty frameworks that can work across different contexts and development levels.

Differences
Different Viewpoints
Scale and capacity requirements for sovereignty implementation
Speakers: Sunil Gupta, Nasubo Ongoma
Compute infrastructure must be within national control as it’s where data is processed, stored, and models are built. Africa is disadvantaged with only 1% computing capacity but can leverage data and use cases for sovereignty.
Gupta emphasizes the necessity of having compute infrastructure within national borders as fundamental to sovereignty, while Ongoma acknowledges Africa’s severe compute disadvantage (1% capacity) but argues that sovereignty can still be achieved by leveraging other assets like data and use cases
Approach to technology partnerships based on negotiating power
Speakers: Sunil Gupta, Nasubo Ongoma
Partnership without dependence is key – using best foreign technologies within controlled, ring-fenced environments. Offering compute to local innovators and engaging with governments to represent use cases in sovereignty discussions.
Gupta describes a model where countries can negotiate with major tech companies (like NVIDIA) to bring technologies under local control within ring-fenced environments, while Ongoma focuses on building local capacity and engaging governments to ensure African use cases are represented in sovereignty discussions, reflecting different levels of negotiating power
Unexpected Differences
Policy evolution pace relative to infrastructure development
Speakers: Seema Ambastha, Sunil Gupta
Policies need to evolve alongside infrastructure development. Moving critical systems like India’s AI language platform from hyperscale clouds to sovereign infrastructure while maintaining technology partnerships.
While both support sovereignty implementation, Ambastha emphasizes that policies are lagging behind infrastructure development and need to catch up, while Gupta demonstrates through practical examples that sovereignty can be implemented even within current policy frameworks. This suggests different views on whether policy or implementation should lead
Overall Assessment

The speakers show remarkable consensus on the fundamental principles of data sovereignty – that it should involve strategic control rather than isolation, require partnerships rather than complete self-reliance, and serve local needs. The main disagreements center on implementation approaches based on different resource constraints and negotiating positions rather than philosophical differences

Low to moderate disagreement level with high strategic alignment. The disagreements are primarily tactical and reflect different contexts (large vs. small countries, different resource availability) rather than fundamental opposition to sovereignty principles. This suggests that while the concept of data sovereignty has broad support, implementation strategies need to be tailored to specific national circumstances and capabilities

Partial Agreements
Both agree that sovereignty doesn’t require complete self-reliance and that strategic elements should be controlled while collaborating on others, but they differ in implementation – Gupta focuses on physical infrastructure control while Ambastha emphasizes policy frameworks and guardrails
Speakers: Sunil Gupta, Seema Ambastha
Sovereignty doesn’t mean doing everything yourself but controlling what’s fundamental while collaborating on other aspects. Strategic control needs to be sovereign, not necessarily ownership of all supply chain components.
Both support international partnerships while building domestic capacity, but Ambastha focuses on large-scale infrastructure investment and policy stability for private sector confidence, while Ongoma emphasizes grassroots capacity building and ensuring local voices are heard in policy discussions
Speakers: Seema Ambastha, Nasubo Ongoma
Global technology partnerships are important but should maintain sovereign capacity and control. Offering compute to local innovators and engaging with governments to represent use cases in sovereignty discussions.
Both agree on the importance of local solutions for local problems, but Sengupta frames this as a general principle about local ownership, while Ongoma provides specific technical examples of how local contexts require different solutions
Speakers: Arghya Sengupta, Nasubo Ongoma
Local people with skin in the game are best positioned to build solutions for local problems. Africa has unique use cases like healthcare solutions designed for African contexts (breast cancer detection for African women).
Takeaways
Key takeaways
Data sovereignty means controlling who makes the rules and maintaining strategic control, not isolating from global partnerships or doing everything domestically
Compute infrastructure must be within national control, as it is where data is processed and stored and models are built, but it can use foreign technologies within ring-fenced, controlled environments
Local contexts and use cases are critical – countries need AI solutions for their languages, cultures, and specific problems rather than just adopting global frontier models
Trust must be engineered and verified through transparent, traceable systems rather than relying on paper-based agreements
Successful sovereignty requires government-industry partnership, with governments providing guardrails and policy stability while industry focuses on innovation
The ultimate goal is serving real people with real problems, particularly the most disadvantaged populations
Resolutions and action items
Industry should build sovereign AI infrastructure locally while maintaining trusted global technology partnerships
Governments must establish sovereign guardrails and provide long-term policy stability to encourage private investment
Countries should treat digital infrastructure as national assets requiring appropriate policies and protection
Focus on building AI solutions for local languages, offline access, and region-specific use cases
Move from point-in-time security checks to continuous verification processes for sovereignty enforcement
Unresolved issues
How smaller countries with limited resources can practically implement sovereignty while competing with larger nations
Specific mechanisms for ensuring trusted supply chains and technology partnerships
How to balance the need for global AI advancement with national sovereignty requirements
Detailed frameworks for government-industry collaboration and co-accountability
Standards for what constitutes adequate sovereign guardrails and verification processes
Suggested compromises
Use best foreign technologies within controlled, ring-fenced national environments rather than complete isolation
Focus sovereignty efforts on the 95% of national use cases that can be served by smaller models (20-100 billion parameters) rather than pursuing frontier models
Maintain strategic control over critical components while allowing collaboration on non-critical elements
Build sovereign capacity domestically while forging selective global technology partnerships
Treat sovereignty as co-accountability between government, industry, and society rather than government control alone
Thought Provoking Comments
Sovereignty for sure does not mean we become isolated and just try to do everything ourselves. It is a matter of what is the thing we need to control and what is the thing where we need to collaborate. For sure, it definitely means that as a country, we do not allow a single country or a single company to define our digital destiny for future.
This comment reframes the entire sovereignty debate by distinguishing between isolation and strategic control. It moves beyond the binary thinking of ‘build everything ourselves vs. complete dependence’ to introduce a nuanced framework of selective control and collaboration. This fundamentally challenges the common misconception that sovereignty equals autarky.
This comment set the foundational framework for the entire discussion. It shifted the conversation from theoretical definitions to practical implementation strategies, allowing subsequent speakers to build on this nuanced understanding of sovereignty as strategic control rather than complete self-reliance.
Speaker: Sunil Gupta
We always talk about, you know, we are getting the data there, someone else is defining the rules. But we can define the rules by building the tools that actually work for the people in our context and being confident that, you know, once it works for our context, that people are going to use.
This comment introduces a powerful paradigm shift from a deficit mindset to an asset-based approach. Instead of focusing on what Africa lacks (compute capacity), it emphasizes what it has (unique use cases, local context, and people who understand local problems). It challenges the narrative of technological dependency by positioning local knowledge and context as competitive advantages.
This comment fundamentally changed the discussion’s tone from one of disadvantage to one of opportunity. It led Arghya to make the crucial observation that ‘it’s only local people who have skin in the game who will build for local problems,’ which became a recurring theme throughout the rest of the discussion.
Speaker: Nasubo Ongoma
Trust is not paper-based. Trust can only be engineered, and it needs to be verified.
This succinct statement cuts through the abstract discussions about trust in technology partnerships and provides a concrete, actionable framework. It challenges the notion that trust can be established through agreements alone and emphasizes the need for technical verification mechanisms.
This comment elevated the technical sophistication of the discussion and provided a bridge between the strategic concepts discussed earlier and practical implementation. It influenced the subsequent focus on transparency, traceability, and observability as core design principles for sovereign systems.
Speaker: Seema Ambastha
We ended up creating almost I can say 30 or 40 different technologies we developed put it on their virtual machines in their environment in my data center and brought them into their control they were not using PaaS anymore… I’m using their technology stack within my ring-fenced walls, within my GPU and CPU compute infrastructure.
This detailed example transforms abstract sovereignty principles into concrete implementation strategies. It demonstrates how ‘partnership not dependence’ works in practice, showing how organizations can leverage global technologies while maintaining control. The specificity of moving away from Platform-as-a-Service to infrastructure control provides a replicable model.
This practical example grounded the entire discussion in reality and provided a template for implementation. It prompted Arghya to explore how this model might work for smaller countries, leading to Nasubo’s insights about offline capabilities and the need for different approaches based on local constraints.
Speaker: Sunil Gupta
How are we going to use compute that also allows for offline? That is one of the use cases we are looking at, because in as much as digital connectivity is everywhere in Africa, it’s up to like 50%. So how do we also ensure that people are able to use?
This comment introduces a critical technical requirement that challenges conventional AI deployment models. By highlighting the need for offline capabilities, it forces a reconsideration of what sovereign AI means in contexts with limited connectivity. It demonstrates how local constraints can drive innovation rather than limit it.
This comment expanded the technical scope of the sovereignty discussion beyond data storage and compute location to include accessibility and resilience. It resonated with Arghya’s experience with Aadhaar’s offline verification, creating a connection between different global south experiences and establishing offline capability as a sovereignty requirement.
Speaker: Nasubo Ongoma
Overall Assessment

These key comments fundamentally shaped the discussion by moving it from theoretical concepts to practical frameworks and real-world applications. Sunil’s opening reframing established sovereignty as strategic control rather than isolation, creating space for nuanced discussion. Nasubo’s asset-based perspective shifted the narrative from deficit to opportunity, while her offline computing insight expanded the technical requirements for sovereign systems. Seema’s ‘engineered trust’ concept provided a bridge between strategy and implementation, emphasizing verification over agreements. Together, these comments created a progression from conceptual framework to practical implementation, with each speaker building on previous insights to create a comprehensive understanding of how sovereignty can be achieved in practice across different contexts and resource constraints.

Follow-up Questions
How can smaller countries like Malawi, Lesotho, or Eswatini implement sovereign AI solutions when they lack the leverage that larger countries like India have with technology providers?
This question highlights the challenge of achieving AI sovereignty for smaller nations that may not have the economic or political influence to negotiate favorable terms with global technology providers, requiring different strategies than those available to larger countries.
Speaker: Arghya Sengupta
How can AI systems be designed to work effectively in offline environments where digital connectivity is limited?
With only 50% digital connectivity across Africa, there’s a critical need to develop AI solutions that can function without constant internet access, which is essential for inclusive AI deployment in developing regions.
Speaker: Nasubo Ongoma
What specific policies and guardrails should governments establish to ensure AI sovereignty while enabling innovation?
While the need for government guardrails was mentioned multiple times, the specific nature and implementation of these policies requires further development to balance sovereignty with technological progress.
Speaker: Seema Ambastha
How can continuous verification processes for security and regulatory compliance be implemented effectively in sovereign AI systems?
Moving beyond point-in-time security checks to continuous verification is crucial for maintaining sovereignty, but the practical implementation mechanisms need further exploration.
Speaker: Seema Ambastha
What are the optimal models for public-private partnerships in building sovereign AI infrastructure across different country contexts?
The discussion emphasized the importance of government-industry collaboration, but specific partnership models and frameworks need to be developed for different national contexts and capabilities.
Speaker: Seema Ambastha
How can global south countries effectively learn from each other’s experiences in implementing sovereign AI solutions?
The moderator noted similarities in challenges faced by developing nations (like offline verification needs), suggesting a need for systematic knowledge sharing and collaboration frameworks among global south countries.
Speaker: Arghya Sengupta
What are the specific technical and operational requirements for creating truly ring-fenced AI environments that maintain sovereignty while enabling global technology partnerships?
While Sunil provided an example with the Bhashini platform, the broader technical specifications and operational procedures for implementing such sovereign yet connected systems need further documentation and standardization.
Speaker: Sunil Gupta

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit

Session at a glance: Summary, keypoints, and speakers overview

Summary

The discussion centered on how India can leverage AI and digital platforms to deliver affordable, preventive, and personalized healthcare across the country, framed against the nation’s high out-of-pocket spending and large talent pool of AI engineers [1-3][4]. Apollo Hospitals highlighted its legacy of expanding care from a single hospital founded 43 years ago to a nationwide network that now serves over 1,100 towns and cities, emphasizing that its impact is not limited to big urban centers [9][16-18]. Its digital front door, Apollo 24-7, enables users to order medicines, book diagnostics, store health records, and interact with an AI-driven assistant, attracting more than 45 million total users and nearly one million daily interactions [12-15].


The group’s AI ecosystem has processed roughly 3.5 million API calls and is organized into five work streams: a clinical intelligence engine, a doctor-workforce knowledge base, disease-risk scoring, multimodal imaging analysis, and acute-care pathway optimization [19-24][25-35]. Notable applications include an early-warning system that predicts sepsis 24-48 hours before onset in ICU beds, potentially saving thousands of lives if deployed at scale [30-33]. Apollo also introduced the EASE framework, which sets standards for ethical AI adoption, suitability, and explainability to ensure clinicians understand algorithmic outputs [40-43].


Preventive initiatives were described, such as AI-enhanced ultrasound for early detection of non-alcoholic fatty liver disease, a pre-diabetes algorithm used on 450,000 people, and collaborations with Google to detect tuberculosis from chest X-rays [47-48][61-64]. The clinician co-pilot tool automatically synthesizes patient records, saving up to 1.5 hours of physician time per day, while a nurse-pilot extends monitoring to home and rural settings, integrating data from multiple algorithms to improve decision-making [72-74][75-77].


Rural outreach is supported by mobile vans that screen for non-communicable diseases, provide tele-ophthalmology, and share data with ASHA workers and district health authorities to enable faster, cheaper diagnoses [79-81]. Apollo reports regulatory progress with MDSAP approval for 19 solutions and FDA clearance for nine, emphasizing that rigorous validation is essential to move pilots into mainstream practice [38-39][83-84].


The speaker argued that the future of health lies in an interconnected system that links public and private sectors, primary and advanced care, research institutions, and health-tech startups, creating a virtuous flywheel where data fuels new predictive and preventive algorithms, reducing disease burden and costs [85-91][92-94]. The call to action urged removal of skill and regulatory gaps and collaboration across companies, organizations, and individuals to build a predictive, preventive, personalized, participatory, and place-agnostic healthcare ecosystem [95-98]. The discussion concluded by emphasizing that this collaborative effort can make high-quality clinical care accessible to every village, city, or apartment, ultimately fostering a healthier world [97-99][100].


Keypoints

AI-driven, scalable healthcare ecosystem – Apollo leverages India’s large talent pool and high out-of-pocket spending to build a digital front-door (Apollo 24-7) that integrates medicine ordering, diagnostics, health records and AI-powered assistance, reaching over 45 million users with daily active engagement [12-14]. Their AI platform processes ~3.5 million API calls and is organized into five work streams, including a clinical intelligence engine that gives doctors access to cumulative patient data [18-24].


Concrete AI applications across the care continuum – The hospital deploys AI for clinical decision support, disease-risk scoring (cardiac, diabetes, hypertension), multimodal imaging analysis, early-warning sepsis prediction (2,000 ICU beds) and throughput optimization such as automated billing and record auto-population [21-34][35-37]. Additional tools include AI-enhanced radiology (TB detection, brain-bleed alerts) and a clinician-co-pilot that saves 1-1.5 hours of physician time per day [64-66][72-74].


Ethical and governance framework (EASE) – Apollo has published the EASE framework to ensure AI adoption is ethical, suitable for its intended use, and explainable to healthcare workers, positioning it as a baseline for all health-AI deployments [40-44].


Emphasis on preventive care and rural outreach – The speaker highlights AI-enabled early detection of non-communicable diseases (e.g., NAFLD via ultrasound, pre-diabetes algorithm used on 450,000 people) and large-scale screening via mobile vans, tele-ophthalmology and community health workers, aiming to shift focus from curative to preventive interventions [44-50][51-58][80-82].


Call for multi-sector collaboration to build future health systems – Apollo envisions a “health system of the future” that interconnects public and private providers, primary and advanced care, research institutions, startups and regulators, creating a flywheel of data-driven predictive, preventive, personalized, participatory, place-agnostic care [85-93][95-99].


Overall purpose/goal – The discussion is a strategic showcase of Apollo Hospitals’ AI-enabled healthcare model, illustrating current achievements, ethical safeguards, and preventive initiatives, while urging partners across industry, academia, government and technology to collaborate in constructing an integrated, future-ready health system for India and beyond.


Overall tone – The speaker maintains an upbeat, visionary tone throughout, beginning with pride in India’s unique advantages and the scale of digital adoption, moving into detailed, data-rich descriptions of AI implementations, and concluding with an inspirational, rally-calling appeal for collective action. The tone shifts subtly from informative and demonstrative to motivational as the talk progresses.


Speakers

Speaker 1: Sangita Reddy (Joint Managing Director, Apollo Hospitals) – Area of expertise: Healthcare leadership, AI and digital health innovation.


Additional speakers:


– None


Full session report: Comprehensive analysis and detailed insights

Sangita Reddy, Joint Managing Director of Apollo Hospitals, opened by asserting that health-care in India must no longer be dictated by the zip code of a person’s birth, stressing a shift toward sustainable costs, preventive care and early detection as the new paradigm for collaborative health delivery [1-3]. She argued that India’s uniquely high out-of-pocket health spending creates both pressure and opportunity: it forces the development of low-cost innovations while the country simultaneously expands its clinical workforce and boasts a talent pool of more than 600,000 AI engineers [4].


The speaker then traced the origins of Apollo Hospitals, noting that the first Apollo hospital was founded in India 43 years ago by the speaker’s father, with the mission of bringing health-care within reach of the masses [9-10]. Today, Apollo has grown far beyond that single facility, operating in over 1,100 towns and cities and serving thousands of PIN codes across the nation, thereby demonstrating that its impact is not confined to metropolitan centres [16-18].


A cornerstone of this expansion is Apollo 24-7, described as a “digital front door” that lets patients purchase medicines, order diagnostics, store health records and interact with an AI-driven assistant (Apollo Assist) [12-13]. The platform has attracted more than 45 million users and sees close to one million daily interactions, a scale the speaker attributes to the power of the underlying communications infrastructure [14-15] (also highlighted in the summit report [S24]).


The speaker detailed Apollo’s AI ecosystem, which has already handled roughly 3.5 million API calls [19-20]. The ecosystem is organised into five work streams: (i) a clinical intelligence engine that supplies doctors with cumulative patient data [21-24]; (ii) a doctor-workforce knowledge base comprising about 20 million analysed records [22-23]; (iii) disease-prediction and risk-scoring models for conditions such as cardiac disease, diabetes and hypertension, aimed at prioritising interventions for India’s 1.4 billion residents [24-27]; (iv) multimodal imaging and signal analysis that synthesises body-derived data into causal insights for clinicians [27-29]; and (v) acute-care augmented pathways, exemplified by an early-warning system that predicts sepsis 24-48 hours before onset across 2,000 ICU beds [30-33]. The speaker highlighted that scaling this algorithm to 100,000 ICU beds could save countless lives [32-33] and noted that it is just one of perhaps a hundred further algorithms that could be added to the platform [80-82]. Complementary throughput-optimisation tools automate billing, eliminate patient waiting times and auto-populate records, thereby improving operational efficiency [34-37].


To ensure that rapid AI deployment does not compromise safety or trust, the EASE framework addresses Ethics, Adoption, Suitability and Explainability, ensuring that clinicians understand algorithmic outputs in each context [40-44]. The organisation has already secured MDSAP approval for 19 AI solutions and FDA clearance for nine, underscoring a commitment to regulatory rigour [38-39] (further corroborated by Apollo’s validation leadership noted in the summit [S25]). Following these approvals, the speaker emphasized that Apollo is actively seeking partnerships to expand its portfolio, noting that “a thousand flowers can bloom” when collaborators join the effort [68-70].


Preventive-care applications were presented as concrete illustrations of AI’s impact. An embedded AI module in ultrasound machines can detect non-alcoholic fatty liver disease (NAFLD), a condition affecting roughly 40% of Indian adults, enabling early intervention before the need for liver transplantation [47-49]. A lifestyle-risk-scoring system, validated in partnership with Solventum and 3M, quantifies risk groups and guides personalised behavioural changes [55-58]; the same platform underpins a pre-diabetes algorithm already applied to 450,000 individuals, with the ambition to reach the nation’s 85 million diabetics [61-62]. In radiology, collaborations with Google enable AI-based tuberculosis detection from chest X-rays, and other partnerships target early brain-bleed identification, facilitating rapid emergency diagnosis [63-66].


Workflow-enhancing tools were also highlighted. The Clinician Co-pilot automatically synthesises patient records, saving doctors between one and one-and-a-half hours per day [72-74]. A parallel Nurse Pilot extends similar efficiencies to nursing staff, while the integrated Care Console links ICU command stations, home monitoring, remote wards and external nursing homes in small rural areas, collectively saving millions of lives and reducing clinician burnout [75-78].


Rural outreach forms a further pillar of Apollo’s strategy. Mobile vans conduct non-communicable disease, cancer and tele-ophthalmology screening in remote villages, transmitting results to ASHA workers, district health authorities and government hospitals for faster, cheaper and earlier diagnosis [79-82]. The speaker stressed that such AI-enabled screening exemplifies the power of early detection to transform health outcomes in underserved areas [81].


In the realm of AI-driven research, Apollo is leveraging its blood bank and a newly established biobank with genetic testing to advance disease-prediction models and biomarker discovery [70-72].


Throughout the talk, the speaker repeatedly stressed that validation, not merely piloting, is the bottleneck that determines whether innovative tools become mainstream [82-84] (as also noted in the policy discussion on validation [S6]).


Looking ahead, the vision expands from individual hospitals to a “health system of the future” that interconnects public and private providers, primary and advanced care, research institutions, universities and health-tech startups [85-91]. This integrated ecosystem is described as a self-reinforcing flywheel that boosts health productivity, improves economic efficiency and fuels the development of new predictive and preventive algorithms [92-94]. To realise this, the speaker called for the removal of skill gaps and regulatory barriers, urging stakeholders to collaborate in building a predictive, preventive, personalised, participatory and place-agnostic health-care model that reaches every village, city or apartment building [95-98].


The speaker concluded by urging stakeholders to begin building this ecosystem now, so that it reaches every Indian community [99-100].


Session transcript
Complete transcript of the session
Speaker 1

India and that your health care should not be defined by the zip code in which you’re born. It’s about sustainable costs and it’s about preventive care and early detection. It’s a new paradigm in collaborative care where I believe India has an advantage. This advantage is because we not only have one of the highest out-of-pocket payments and therefore we’re creating innovation and keeping our costs low, but also we’re growing more doctors, we’re training more nurses, and we have the largest talent pool of over 600,000 AI engineers. All this coming together to create something truly significant. But I’m not here to talk to you about technology. I’m here to share our story. And this story is about using the passion and the mission of bringing health care within the reach of people and using every tool possible to enable this to happen.

Dr. Prathap Reddy. I’m the art chairman and I’m honored to say my father brought Apollo Hospitals when he returned from the U.S. almost 43 years ago to bring this, to bring healthcare within the reach of people. Today, we’ve tried to embed and imbibe every technology, whether it’s surgical robots, the proton therapy, all kinds of treatment and curative capability. We’ve gone beyond to say we must find a way to not just use these machines, but also to connect with our customer. So Apollo 24-7, our digital front door, is actually, not only can you buy your medicines, order your diagnostics, store your health record, but also on Apollo Assist, ask queries, questions, get these answered, and then find ways.

And our market has rewarded us with the volumes that we see. Over 45 million users have come into this. And now we have close to a million users on a daily basis coming in to interact on this ecosystem. These records, these capabilities are getting enhanced every day because of the power of the communications that we have. But moving on, I think what is most important is that we’re not just in the big cities. We’re serving multiple PIN codes across the country and over 1,100 towns and cities. Moving across divine methodologies, I just wanted to share with you quickly a few of the things that we’re doing in AI because this is the AI summit. And approximately now we have about 3.5 million API calls on our AI platforms.

These platforms we’ve divided into five areas. Number one is really our clinical intelligence engine so that a new doctor can have the knowledge and the capability of the cumulative data that we’re providing to the patient. And number two is the cumulative doctor workforce of about 20 million records analyzed. So this is our clinical decision support and our clinical intelligence engine. The next one is the disease prediction and the risk score, because we need to know in a population of 1.4 billion people, where do we focus? What should we do more? So this is the second work stream, and this goes across cardiac, diabetes, multiple others, including hypertension, but we’re also looking at embedded AI. The next and another critical one is taking images and signals, because the body is an amazing piece of machinery that continues to give us this messaging.

How do we pick this up, synthesize it smarter than any one individual can do, and bring this multimodal signaling into a causal interpretation to thereby enable the doctor. We also have acute care augmented pathways. About 2,000 of our critical care beds are connected with our early warning system, and there we are predicting the onset of sepsis 24 to 48 hours before it happens. Imagine if we could take this AI algorithm and put it into a hundred thousand ICU beds. Imagine the number of lives saved. So here I’m sharing these examples because I believe that the power of AI is directly proportionate to the impact that we can have on lives saved, disease prevented, cost reduction, and therefore talking about cost reduction, the final one is really throughput optimization.

How can you be smarter about billing? How can you ensure that your patient has zero waiting time, that the data capture is using ambient systems, therefore the doctor is able to look at the patient and talk to the patient and you’re doing auto-population of your records. Millions of these capabilities are coming together. We’ve collated them. We’re getting MDSAP approval on almost 19 of them, FDA approval for nine, and we’re looking for partnership to build because I believe in this space a thousand flowers can bloom, and that there is deeper work to be done on the use of our blood bank and our biobank with genetic testing to move further into disease prediction, biomarkers. So these are just new dimensions opening up.

And I’m sharing more of the examples of how we’re working in these areas, but before I go into those, I want to talk about the EASE framework. I’m happy that our EASE framework has been published fairly extensively because it talks about the ethical considerations of the use of AI. It looks at adoption, the suitability of a certain algorithm within the area that it’s being used, and finally the explainability so that every healthcare worker is able to understand what they use in which environment and what the interpretation means. I believe this is a base framework that we need to put into every healthcare environment. Moving on is another area of deep passion, and that is that while we’re doing the highest end of surgeries, curative care, transplants, etc.

How much can we spend our time on health care prevention? Because for every life-saving intervention, for every 1,000 people screened, you will have 11 people where you have averted a major crisis. And therefore, the ability to look at proactive preventive care and get a lot more intuitive on the mechanism of biomarkers and early detection in cancer. We are working with the ultrasound company to do an embedded AI into the ultrasound machine so that we can pick up NAFLD, non-alcoholic fatty liver, of which 40% of the adult population of India is susceptible to. And if you can pick it up early, you can completely prevent a major crisis if you find it late. These are candidates for liver transplant, a lot of pain and suffering, and some of them potential death.

So the interventions at the appropriate time using technology open up an entire… realm of what we can do differently in this world. I’m sharing now this aspect of how lifestyle changes risk reduction. All of you on Instagram are getting thousands of messages a day on what to eat, how to exercise, what to do better. But is it quantified? Is there a risk scoring? Do you understand the difference between a high-risk group and what they need to do to a low-risk group? But every single group, by understanding the risk profiling and the modifiable risk factors of these non-communicable disease can move into a healthy pattern. This has been studied in partnership with Solventum, the company with 3M, with definitive proof on the power of doing something like this.

We also have a significant product on AI prediabetes which we’ve used for a long time. We’ve used this algorithm over 450,000 people. But I would love to see the 85 million diabetics in our country using this to predict and to handle their diabetes better. I also want to move on to the fact that in radiology, because of the years of data and the teleradiology services that we do across the world, we are able to take these images, and here we’ve worked with Google on prediction of tuberculosis in a simple x-ray. We’re working with various other companies, whether it’s an early detection of a brain bleed.

So once somebody goes into the emergency room, you’re quickly able to diagnose these. Each one of these are amazing new factors which are coming in. This is a quick example of the clinician co-pilot. Because I’m running out of time, I’m not going to share this video, but basically… Okay, they are playing the video. Can we have some volume on this? Or I’ll click through, because we’re really running out of time. But basically what the clinician co-pilot does is it’s synthesizing the record so that you’re summarizing. We’re saving approximately one to one and a half hours per day of doctor time in the records. We’re now doing the nurse pilot. I’m moving now to reimagining the way patients are monitored, whether it’s the challenge of a misdiagnosis, the integrated solution, which is looking at Care Console, and the technology stack around this, which is connecting the command station with the ICUs, with home, and with connected wards.

And because of this, we’ve not only saved millions of lives, we’ve saved time for doctors, and this is connected even to external nursing homes in small rural areas. I believe this is a powerful solution where the current AI algorithm has multiple factors from antibiotic usage to early warning symptoms of sepsis, but there are potentially another hundred algorithms that we could add on to this to enhance the quality of decision-making. And share this further, enabling a safer patient care and also less burnout in our staff. I’ve been sharing lots of hospital-based examples, but I do want to say that many of the solutions are applicable in rural India. We’re running mobile vans, we’re doing non-communicable disease screening in small rural environments, we’re finding ways to do cancer screening, tele-ophthalmology screening, and sharing this data and enabling either the ASHA worker or the district health authorities or even the government hospitals to diagnose faster, better, cheaper, and earlier.

And this is really the power of what can be done through early screening. I also do want to say, because for those who are listening from research organizations, from pharmaceuticals, from manufacturing, that we are among the people doing the largest number of validations. So innovation happens from multiple quarters, but validation is what moves a pilot into a mainstream activity. And that is what is critical for our country because you’ve been hearing this over the last two days about the number of pilots happening, but we’re not finding ways to continue this. I believe the hospital of the future is interconnected in multiple ways, from the theatres to the ICUs to using drone delivery. But then as we were drawing and designing this, we actually said, no, our thinking is too small and narrow.

We need to think bigger because the world is more connected. And primary care, preventive care, out there in the market, home care, these are the important redefinition factors of the future of healthcare. And so now I talk not about hospitals of the future, but about health systems of the future. This is what we need to redefine, and we have to do this together. These health systems of the future connect public and private, connect primary care with advanced care, connect research institutions, universities, innovators, health tech startups, all together to build new solutions for the betterment of healthcare. And I believe that this is a flywheel which will drive not just positive health productivity and the economics of the healthcare environment, but this data will enhance into new algorithms.

And these algorithms can be predictive and preventive, and if you find disease earlier, you’re actually saving so many aspects. So let us remove skill gaps. Let us push through regulatory gaps. Let us bring companies, organizations, and people together to build a new healthcare world, which is predictive, preventive, personalized, participatory, and place agnostic. Let every village in any part of the world, or every city, or every apartment building, wherever you are, be able to access good clinical care. Let’s come together to build a healthier world. And definitely, let’s say that this is the time for us to… to dream of finding cures for cancer, of enabling the world to be healthier, and finding a methodology for us to say that we brought our next generation into a healthier world.

Thank you so much, and namaste. Thank you. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (9)
Factual Notes
Claims verified against the Diplo knowledge base (3)
Confirmedhigh

“Her father founded the first Apollo hospital in India 43 years ago with the mission of bringing health‑care within reach of the masses.”

The knowledge base notes that Dr Sangita Reddy’s father returned from the U.S. about 43 years ago, founded Apollo Hospitals and aimed to bring healthcare within reach of people [S6].

Additional Contextmedium

“Apollo Hospitals has grown far beyond that single facility, operating in over 1,100 towns and cities and serving thousands of PIN codes across the nation.”

The knowledge base describes Apollo Hospitals as the largest for-profit hospital in India, indicating a very extensive national footprint, which adds context to the claim of operation in many towns and cities [S12].

Additional Contextmedium

“India’s uniquely high out‑of‑pocket health‑spending creates pressure and opportunity, driving low‑cost innovation and a large AI talent pool.”

The knowledge base highlights that healthcare systems are under immense pressure from rising demand and chronic disease, which accelerates adoption of data and AI-driven innovation [S33]; this provides contextual support for the reported pressure and opportunity.

External Sources (39)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — Evidence:Healthcare should not be defined by zip code of birth. Examples include mobile vans for rural screening, non-co…
S5
Panel Discussion: 01 — Hindi provides specific examples of AI implementation in Egypt’s healthcare and education sectors. The healthcare applic…
S6
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — Healthcare should not be defined by geographic location but should be accessible everywhere through interconnected healt…
S7
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Federico Chacon Loaiza: and vulnerable populations? Thank you for your question. Good afternoon and best regards to all …
S8
The Purpose of Science / DAVOS 2025 — Michael Hengartner: Thank you, Max. I’m looking at the time. I still have two questions. So the short answers and Max…
S9
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ananya-birla-birla-ai-labs — Freeing over 2 ,000 man days a year. The immediate impact is efficiency. Architects and developers are no longer constra…
S10
ISIF Asia 2023 Awards | IGF 2023 Launch / Award Event #8 — The work helps in reducing operating costs through automation
S11
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-sangita-reddy-joint-managing-director-apollo-hospitals-india-ai-impact-summit — How can you be smarter about billing? How can you ensure that your patient has zero waiting time, that the data capture …
S12
Cracking the Code of Digital Health / DAVOS 2025 — – Shobana Kamineni: Executive Chairperson, Apollo Health Company Karen Tso: Roy, thank you. And that dovetails neatly …
S13
WSIS Action Line C2 Information and communication infrastructure — Data quality and governance as fundamental requirements Legal and regulatory | Human rights Regulatory Frameworks and …
S14
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S15
Empowering communities through bottom-up AI: The example of ThutoHealth — In Botswana, a silent epidemic claims nearly half of all lives. Hypertension, diabetes, cancer, and other non-communicab…
S16
Keynote-Rishad Premji — “In healthcare, it can enable earlier disease screening and strengthen rural care, especially where access is limited.”[…
S17
Panel Discussion AI in Healthcare India AI Impact Summit — Dr. Sabine highlights the challenge of implementing screening and preventive care because it requires people to pay for …
S18
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — “These health systems of the future connect public and private, connect primary care with advanced care, connect researc…
S19
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — India and that your health care should not be defined by the zip code in which you’re born. It’s about sustainable costs…
S20
Comprehensive Report: Preventing Jobless Growth in the Age of AI — This observation highlighted the gap between what might be optimal for society (augmentation leading to shared prosperit…
S21
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — India possesses several competitive advantages that position it well for AI innovation and startup growth. These include…
S22
DIGITAL DIVIDENDS — The effi ciency of some government tasks and services can be improved through automation that eliminates routine manual…
S23
POLICY BRIEF — 1. Optimization – maximizing efficiency and reliability in existing processes to reduce the costs of trading. 2. Extens…
S24
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — The success of Apollo’s digital transformation is exemplified by Apollo 24-7, their comprehensive digital healthcare pla…
S25
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — Addressing the critical gap between pilot programs and mainstream implementation, Apollo positions itself as a leader in…
S26
Capacity Building in Digital Health — Dr. Yadav proposes that India has a unique opportunity to create a global healthcare ecosystem by combining its large po…
S27
MedTech and AI Innovations in Public Health Systems — Clinical Decision Support & Care Coordination
S28
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S29
Ethics and AI | Part 2 — 4.An ethic is framework, or guiding principle, and it’s often moral. […]  A social ethic might include “treating people …
S30
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S31
WHO issues first global report on AI in health and guiding principles for its design and use — The World Health Organization (WHO)publisheda report titled, Ethics and governance of artificial intelligence for health…
S32
Empowering communities through bottom-up AI: The example of ThutoHealth — In Botswana, a silent epidemic claims nearly half of all lives. Hypertension, diabetes, cancer, and other non-communicab…
S33
Keynote-Roy Jakobs — Healthcare systems are under immense pressure. Rising demand. chronic disease, stretched workforces, and high expectatio…
S34
THE GREEK NATIONAL DIGITAL DECADE STRATEGIC ROADMAP — The measure will allow citizens to have access to their electronic health records.
S35
DPI+H – health for all through digital public infrastructure — Garrett Mehl:Great, I just wanna thank PATH for helping to organize this session and for also inviting WHO to this impor…
S36
ITU-T Y-SERIES RECOMMENDATIONS — The big data ecosystem includes the following roles:
S37
CONCEPT — To improve the diagnosis of diseases and the selection of the most effective treatment methods, the main priority is the…
S38
Strengthening Worker Autonomy in the Modern Workplace | IGF 2023 WS #494 — Eliza:Sure, yeah, thank you so much. And thank you to Juanita for that really great kind of introduction. I can’t follow…
S39
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: Good afternoon, dear delegates and participants. It’s a great honor for me to have the opportunity to intro…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
22 arguments · 152 words per minute · 2,092 words · 825 seconds
Argument 1
Healthcare must be independent of zip code; focus on sustainability, prevention, early detection
EXPLANATION
The speaker argues that access to quality health care should not depend on the geographic location where a person is born. The emphasis is on creating a system that is financially sustainable, prioritises preventive measures and enables early disease detection.
EVIDENCE
She states that health care should not be defined by the zip code in which you are born and that the model is about sustainable costs, preventive care and early detection [1-2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote by Sangita Reddy stresses that health care should not be defined by zip code and highlights a sustainable, preventive model [S4][S6].
MAJOR DISCUSSION POINT
Equitable access to health care
Argument 2
India’s high out‑of‑pocket spending drives innovation, low costs, expanding doctor/nurse workforce and a 600,000‑strong AI talent pool
EXPLANATION
The speaker highlights that India’s large out‑of‑pocket health expenditures have spurred home‑grown innovation while keeping costs low. She also points to rapid growth in the clinical workforce and a sizable pool of AI engineers supporting health‑tech development.
EVIDENCE
She notes that India has one of the highest out-of-pocket payments, which drives innovation and low costs, and that the country is training more doctors, nurses and has a talent pool of over 600,000 AI engineers [4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reddy notes that high out-of-pocket payments create pressure for cost-effective innovation and that India’s expanding clinical workforce and large AI talent pool support this dynamic [S4][S6].
MAJOR DISCUSSION POINT
Innovation driven by financing model
Argument 3
Serves as a digital front door for medicines, diagnostics, records, AI assistance; 45 M users, ~1 M daily interactions
EXPLANATION
Apollo 24‑7 is presented as an integrated digital platform where patients can order medicines, book diagnostics, store health records and interact with AI‑driven assistance. The platform has attracted massive user adoption.
EVIDENCE
She describes Apollo 24-7 as a digital front door offering medicines, diagnostics, health-record storage and AI queries, and reports over 45 million total users with close to one million daily interactions [12-14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Apollo 24-7 is described as a “digital front door” with over 45 million users and close to a million daily interactions [S4][S6].
MAJOR DISCUSSION POINT
Digital health platform adoption
Argument 4
Extends reach beyond metros to >1,100 towns and PIN codes across the country
EXPLANATION
The speaker stresses that the digital health services are not limited to major cities but are deployed across thousands of postal zones, covering more than a thousand towns and cities, thereby widening geographic coverage.
EVIDENCE
She mentions that the organization serves multiple PIN codes across the country and over 1,100 towns and cities, not just big cities [16-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker cites service across multiple PIN codes and more than 1,100 towns and cities, confirming broad geographic coverage [S4].
MAJOR DISCUSSION POINT
Geographic expansion of services
Argument 5
Clinical intelligence engine delivers cumulative data to doctors; 3.5 M API calls
EXPLANATION
A clinical intelligence engine aggregates large volumes of patient data and makes it available to clinicians, enabling evidence‑based decisions. The platform has already handled millions of API requests, indicating active usage.
EVIDENCE
She explains that the first AI workstream is a clinical intelligence engine that provides cumulative data to doctors, and notes approximately 3.5 million API calls on the AI platforms [19-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Approximately 3.5 million API calls on AI platforms and a clinical intelligence engine that analyses millions of records for decision support are reported in the keynote [S4][S6].
MAJOR DISCUSSION POINT
Data‑driven clinical decision support
Argument 6
Disease‑prediction and risk‑scoring models for cardiac, diabetes, hypertension, etc., to target a 1.4 B population
EXPLANATION
The speaker outlines AI models that calculate disease risk scores for major non‑communicable diseases, helping to prioritize interventions across India’s 1.4 billion residents.
EVIDENCE
She describes a second workstream focused on disease prediction and risk scoring for cardiac, diabetes, hypertension and other conditions to guide population-level focus [24-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Disease-prediction and risk-scoring platforms covering cardiac, diabetes, hypertension across India’s 1.4 billion population are detailed in the presentation [S4][S6].
MAJOR DISCUSSION POINT
Population‑level risk stratification
Argument 7
Multimodal imaging and signal AI synthesize data for causal interpretation, aiding clinicians
EXPLANATION
AI tools are used to combine imaging and physiological signals, processing them faster and more comprehensively than a single clinician could, to produce actionable insights.
EVIDENCE
She highlights a workstream that takes images and signals, synthesizes them smarter than any individual, and turns multimodal data into causal interpretations for doctors [27-29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI that combines imaging and signal data for multimodal causal interpretation is highlighted in the keynote [S4][S6].
MAJOR DISCUSSION POINT
Advanced AI for diagnostic synthesis
Argument 8
Acute‑care pathways predict sepsis 24‑48 h early; scaling could save countless lives
EXPLANATION
AI‑enabled early warning systems monitor ICU beds and can forecast sepsis onset two days before clinical manifestation, offering a potential massive reduction in mortality if widely deployed.
EVIDENCE
She reports that about 2,000 critical-care beds are linked to an early-warning system that predicts sepsis 24-48 hours in advance, and imagines scaling this to hundreds of thousands of ICU beds to save many lives [30-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
An acute-care pathway predicts sepsis 24-48 hours before onset on about 2,000 ICU beds, with plans to scale nationally, as described by the speaker [S4][S6].
MAJOR DISCUSSION POINT
Early warning AI in critical care
Argument 9
Throughput optimisation automates billing, eliminates waiting, auto‑populates records
EXPLANATION
The speaker describes AI‑driven workflow improvements that streamline billing, reduce patient wait times, and automatically fill electronic health records, enhancing efficiency.
EVIDENCE
She discusses throughput optimisation that makes billing smarter, ensures zero waiting time, and uses ambient systems to auto-populate records while doctors focus on patients [34-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Automation of billing, zero waiting time and ambient auto-population of records are discussed as throughput optimisation, with additional references to efficiency gains from automation [S11][S9][S10].
MAJOR DISCUSSION POINT
Operational efficiency via AI
Argument 10
Regulatory milestones: MDSAP approval for 19 tools, FDA clearance for 9
EXPLANATION
The organization has achieved significant regulatory recognitions, obtaining MDSAP approval for nineteen AI tools and FDA clearance for nine, underscoring compliance and safety.
EVIDENCE
He states that they are receiving MDSAP approval for almost 19 tools and FDA approval for nine of them [38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The organization reports MDSAP approval for 19 AI tools and FDA clearance for nine, as stated in the keynote [S6].
MAJOR DISCUSSION POINT
Regulatory validation of AI solutions
Argument 11
EASE addresses ethics, adoption, suitability, and explainability to ensure trustworthy AI deployment in healthcare
EXPLANATION
The EASE framework (Ethics, Adoption, Suitability, Explainability) provides guidelines for ethical AI use, assesses algorithm suitability, and ensures that healthcare workers can understand AI outputs, fostering responsible adoption.
EVIDENCE
He introduces the EASE framework, noting it covers ethics, suitability of algorithms, and explainability for healthcare workers [40-44].
MAJOR DISCUSSION POINT
Ethical AI governance
Argument 12
Embedded AI in ultrasound detects NAFLD early, preventing liver disease progression
EXPLANATION
By integrating AI into ultrasound devices, the system can identify non‑alcoholic fatty liver disease early, allowing timely intervention and averting severe outcomes such as liver transplant.
EVIDENCE
He mentions collaboration with an ultrasound company to embed AI that detects NAFLD, affecting 40 % of Indian adults, and stresses that early detection can prevent major crises [47-49].
MAJOR DISCUSSION POINT
AI‑enhanced early disease detection
Argument 13
AI‑driven risk scoring for lifestyle changes, validated with Solventum/3M partnership
EXPLANATION
A risk‑scoring system powered by AI helps individuals understand their health risk categories and guides lifestyle modifications; its effectiveness has been proven through a partnership with Solventum and 3M.
EVIDENCE
He references a study with Solventum and 3M that provides definitive proof of the power of risk-scoring for lifestyle changes [55-58].
MAJOR DISCUSSION POINT
Validated AI risk‑scoring for behavior change
Argument 14
Prediabetes AI algorithm used on 450 k people; aims to reach 85 M diabetics
EXPLANATION
An AI model for pre‑diabetes screening has already been applied to 450,000 individuals, and the speaker envisions scaling it to serve the 85 million diabetics in India for better disease management.
EVIDENCE
He notes that the algorithm has been used on over 450,000 people and expresses a desire to extend it to 85 million diabetics [61-62].
MAJOR DISCUSSION POINT
Scaling AI for chronic disease management
Argument 15
Partnership with Google for TB detection on X‑ray; other AI tools for brain‑bleed detection
EXPLANATION
Collaborations with Google enable AI‑based tuberculosis detection from chest X‑rays, and additional partnerships target early identification of brain bleeds, expanding AI’s diagnostic repertoire.
EVIDENCE
He cites work with Google on TB prediction from simple X-rays and mentions other collaborations for early brain-bleed detection [63-64].
MAJOR DISCUSSION POINT
AI collaborations for radiology
Argument 16
Clinician co‑pilot summarises records, saving 1–1.5 hours of doctor time per day
EXPLANATION
The clinician co‑pilot tool automatically synthesizes patient records, reducing documentation time by up to one and a half hours daily, thereby freeing clinicians for direct patient care.
EVIDENCE
He explains that the clinician co-pilot synthesizes records, saving one to one-and-a-half hours of doctor time per day, and notes the development of a nurse pilot [72-74].
MAJOR DISCUSSION POINT
AI‑driven documentation efficiency
Argument 17
Mobile vans provide NCD, cancer, and tele‑ophthalmology screening; data shared with ASHA workers and district health authorities
EXPLANATION
Mobile health units conduct screenings for non‑communicable diseases, cancer, and eye conditions in rural areas, transmitting results to community health workers and local health authorities for rapid follow‑up.
EVIDENCE
He describes running mobile vans that screen for NCDs, cancer and provide tele-ophthalmology, sharing data with ASHA workers and district health authorities [80-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mobile vans delivering NCD, cancer and tele-ophthalmology screening and sharing results with ASHA workers and district health authorities are mentioned in the keynote [S4].
MAJOR DISCUSSION POINT
Rural outreach via mobile health units
Argument 18
Rural‑adapted solutions enable faster, cheaper, earlier diagnosis
EXPLANATION
Tailoring technology to rural contexts allows quicker, more affordable, and earlier disease detection, improving health outcomes in underserved areas.
EVIDENCE
He emphasizes that early screening in rural settings delivers faster, cheaper, earlier diagnosis [81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Early screening in rural settings delivering faster, cheaper, earlier diagnosis is highlighted in the presentation [S4].
MAJOR DISCUSSION POINT
Impact of rural‑focused innovations
Argument 19
Extensive validation work is essential to move pilots into mainstream practice
EXPLANATION
The speaker stresses that rigorous validation is the bridge that transforms experimental pilots into widely adopted health solutions, highlighting the organization’s leadership in validation activities.
EVIDENCE
He notes that they conduct a large number of validations, which are critical for scaling pilots into mainstream practice, and that many pilots do not progress without validation [82-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of extensive validation to transition pilots to mainstream practice is emphasized in the keynote [S6].
MAJOR DISCUSSION POINT
Importance of validation for scaling
Argument 20
Interconnect public & private sectors, primary & advanced care, research institutions, startups to build new solutions
EXPLANATION
A future health system should seamlessly link public and private entities, primary and specialized care, and academic and startup innovators to co‑create novel health solutions.
EVIDENCE
He describes health systems of the future that connect public and private, primary and advanced care, research institutions, universities, innovators and health-tech startups [91-92].
MAJOR DISCUSSION POINT
Collaborative health‑system ecosystem
Argument 21
Creates a flywheel driving health productivity, economics, and new predictive/preventive algorithms
EXPLANATION
The integrated ecosystem generates a self‑reinforcing cycle that boosts health productivity, improves economic efficiency, and fuels the development of further predictive and preventive AI tools.
EVIDENCE
He calls the ecosystem a flywheel that drives health productivity, economics, and enables new predictive and preventive algorithms [92-94].
MAJOR DISCUSSION POINT
Synergistic growth model for health innovation
Argument 22
Calls for removing skill and regulatory gaps to achieve a place‑agnostic, participatory health system for every community
EXPLANATION
The speaker urges action to close skill shortages and streamline regulations, enabling a universally accessible, participatory health system that works regardless of location.
EVIDENCE
He urges removal of skill gaps, pushing through regulatory gaps, and bringing together companies, organizations and people to build a place-agnostic health system [94-96].
MAJOR DISCUSSION POINT
Addressing workforce and regulatory barriers
Agreements
Agreement Points
Healthcare must be independent of zip code; focus on sustainability, prevention, early detection
Speakers: Speaker 1
Healthcare must be independent of zip code; focus on sustainability, prevention, early detection
The speaker stresses that health care should not be defined by the zip code where a person is born and that the model emphasizes sustainable costs, preventive care and early detection [1-2].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the vision expressed by Sangita Reddy that healthcare should not be defined by zip code and should emphasize sustainable, preventive care [S19].
India’s high out‑of‑pocket spending drives innovation, low costs, expanding doctor/nurse workforce and a 600,000‑strong AI talent pool
Speakers: Speaker 1
India’s high out‑of‑pocket spending drives innovation, low costs, expanding doctor/nurse workforce and a 600,000‑strong AI talent pool
India’s large out-of-pocket health expenditures have spurred home-grown innovation while keeping costs low, alongside rapid growth in doctors, nurses and a 600,000-strong AI engineering talent pool [4].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects India’s competitive advantages highlighted in policy briefs, noting abundant AI talent, low operational costs, and government support for AI innovation [S21]; also echoes concerns about cost-driven innovation discussed in AI impact analyses [S20].
Apollo 24‑7 serves as a digital front door for medicines, diagnostics, records, AI assistance; 45 M users, ~1 M daily interactions
Speakers: Speaker 1
Serves as a digital front door for medicines, diagnostics, records, AI assistance; 45 M users, ~1 M daily interactions
Apollo 24-7 is presented as an integrated digital platform where patients can order medicines, book diagnostics, store health records and interact with AI assistance, attracting over 45 million total users and close to one million daily interactions [12-14].
Extends reach beyond metros to >1,100 towns and PIN codes across the country
Speakers: Speaker 1
Extends reach beyond metros to >1,100 towns and PIN codes across the country
The organization serves multiple PIN codes across India and operates in more than 1,100 towns and cities, not just major metros [16-18].
POLICY CONTEXT (KNOWLEDGE BASE)
Extending services beyond metropolitan areas supports the zip-code-agnostic health vision highlighted by Sangita Reddy, emphasizing equitable access irrespective of location [S19].
Clinical intelligence engine delivers cumulative data to doctors; 3.5 M API calls
Speakers: Speaker 1
Clinical intelligence engine delivers cumulative data to doctors; 3.5 M API calls
A clinical intelligence engine aggregates large volumes of patient data for clinicians, having handled approximately 3.5 million API calls on the AI platforms [19-24].
Disease‑prediction and risk‑scoring models for cardiac, diabetes, hypertension, etc., to target a 1.4 B population
Speakers: Speaker 1
Disease‑prediction and risk‑scoring models for cardiac, diabetes, hypertension, etc., to target a 1.4 B population
AI models calculate disease risk scores for major non-communicable diseases, helping prioritize interventions across India’s 1.4 billion residents [24-27].
Multimodal imaging and signal AI synthesize data for causal interpretation, aiding clinicians
Speakers: Speaker 1
Multimodal imaging and signal AI synthesize data for causal interpretation, aiding clinicians
AI tools combine imaging and physiological signals, processing them faster than any individual clinician to produce actionable, causal insights [27-29].
Acute‑care pathways predict sepsis 24‑48 h early; scaling could save countless lives
Speakers: Speaker 1
Acute‑care pathways predict sepsis 24‑48 h early; scaling could save countless lives
About 2,000 critical-care beds are linked to an early-warning system that forecasts sepsis 24-48 hours before onset, with potential national scaling to save many lives [30-33].
Throughput optimisation automates billing, eliminates waiting, auto‑populates records
Speakers: Speaker 1
Throughput optimisation automates billing, eliminates waiting, auto‑populates records
AI-driven workflow improvements streamline billing, ensure zero patient waiting time and automatically fill electronic health records, enhancing efficiency [34-37].
POLICY CONTEXT (KNOWLEDGE BASE)
Automation of billing and record-keeping corresponds to the digital dividends of government efficiency through automation and reduced intermediaries described in policy analysis [S22]; it also fits the optimisation pillar of the policy brief on transformation [S23].
Regulatory milestones: MDSAP approval for 19 tools, FDA clearance for 9
Speakers: Speaker 1
Regulatory milestones: MDSAP approval for 19 tools, FDA clearance for 9
The organization has obtained MDSAP approval for almost 19 AI tools and FDA clearance for nine of them, underscoring compliance and safety.
EASE framework addresses ethics, adoption, suitability, and explainability to ensure trustworthy AI deployment in healthcare
Speakers: Speaker 1
EASE framework addresses ethics, adoption, suitability, and explainability to ensure trustworthy AI deployment in healthcare
The EASE framework provides guidelines covering ethics, adoption, algorithm suitability and explainability so healthcare workers can understand AI outputs [40-44].
Embedded AI in ultrasound detects NAFLD early, preventing liver disease progression
Speakers: Speaker 1
Embedded AI in ultrasound detects NAFLD early, preventing liver disease progression
Collaboration with an ultrasound company embeds AI that identifies non-alcoholic fatty liver disease early, allowing timely intervention to avert severe outcomes [47-49].
AI‑driven risk scoring for lifestyle changes, validated with Solventum/3M partnership
Speakers: Speaker 1
AI‑driven risk scoring for lifestyle changes, validated with Solventum/3M partnership
A risk-scoring system powered by AI helps individuals understand health risk categories and guides lifestyle modifications; its effectiveness is proven through a partnership with Solventum and 3M [55-58].
Prediabetes AI algorithm used on 450 k people; aims to reach 85 M diabetics
Speakers: Speaker 1
Prediabetes AI algorithm used on 450 k people; aims to reach 85 M diabetics
The pre-diabetes AI model has been applied to over 450,000 individuals, with a goal to scale to India’s 85 million diabetics for better disease management [61-62].
Partnership with Google for TB detection on X‑ray; other AI tools for brain‑bleed detection
Speakers: Speaker 1
Partnership with Google for TB detection on X‑ray; other AI tools for brain‑bleed detection
Collaborations with Google enable AI-based tuberculosis detection from chest X-rays, and additional partnerships target early identification of brain bleeds [63-64].
Clinician co‑pilot summarises records, saving 1‑1.5 h of doctor time per day
Speakers: Speaker 1
Clinician co‑pilot summarises records, saving 1‑1.5 h of doctor time per day
The clinician co-pilot automatically synthesises patient records, reducing documentation time by up to one and a half hours daily and freeing clinicians for direct care [72-74].
Mobile vans provide NCD, cancer, and tele‑ophthalmology screening; data shared with ASHA workers and district health authorities
Speakers: Speaker 1
Mobile vans provide NCD, cancer, and tele‑ophthalmology screening; data shared with ASHA workers and district health authorities
Mobile health units conduct screenings for non-communicable diseases, cancer and eye conditions in rural areas, transmitting results to community health workers and local authorities for rapid follow-up [80-82].
POLICY CONTEXT (KNOWLEDGE BASE)
Deploying mobile screening vans and sharing data with ASHA workers aligns with the digital dividends framework that promotes automation, data sharing, and audit trails to improve public service delivery [S22] and the extension aspect of the policy brief [S23].
Rural‑adapted solutions enable faster, cheaper, earlier diagnosis
Speakers: Speaker 1
Rural‑adapted solutions enable faster, cheaper, earlier diagnosis
Tailoring technology to rural contexts allows quicker, more affordable, and earlier disease detection, improving outcomes in underserved areas [81].
Extensive validation work is essential to move pilots into mainstream practice
Speakers: Speaker 1
Extensive validation work is essential to move pilots into mainstream practice
Rigorous validation is presented as the bridge that transforms experimental pilots into widely adopted health solutions, with the organization leading validation activities [82-84].
Interconnect public & private sectors, primary & advanced care, research institutions, startups to build new solutions
Speakers: Speaker 1
Interconnect public & private sectors, primary & advanced care, research institutions, startups to build new solutions
A future health system should seamlessly link public and private entities, primary and specialized care, and academic and startup innovators to co-create novel health solutions [91-92].
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes the call to connect public and private sectors, primary and advanced care, and research institutions to co-create solutions, as articulated by Sangita Reddy at the AI Impact Summit [S18].
Creates a flywheel driving health productivity, economics, and new predictive/preventive algorithms
Speakers: Speaker 1
Creates a flywheel driving health productivity, economics, and new predictive/preventive algorithms
The integrated ecosystem generates a self-reinforcing cycle that boosts health productivity, improves economic efficiency, and fuels development of further predictive and preventive AI tools [92-94].
POLICY CONTEXT (KNOWLEDGE BASE)
The ‘flywheel’ metaphor for health productivity mirrors the same language used by Sangita Reddy describing systemic momentum in future health systems [S18].
Calls for removing skill and regulatory gaps to achieve a place‑agnostic, participatory health system for every community
Speakers: Speaker 1
Calls for removing skill and regulatory gaps to achieve a place‑agnostic, participatory health system for every community
The speaker urges closing skill shortages and streamlining regulations to enable a universally accessible, participatory health system that works regardless of location [94-96].
POLICY CONTEXT (KNOWLEDGE BASE)
The call to remove skill and regulatory gaps resonates with the transformation agenda of extending digital services and addressing regulatory barriers in health-tech policy [S23] and the push for audit-able, inclusive digital health systems [S22].
Similar Viewpoints
A consistent emphasis on equitable, inclusive access to health services through digital platforms and rural outreach, ensuring that care is not limited by geography or socioeconomic status [1-2][12-14][16-18][80-82][81].
Speakers: Speaker 1
Healthcare must be independent of zip code; focus on sustainability, prevention, early detection
Serves as a digital front door for medicines, diagnostics, records, AI assistance; 45 M users, ~1 M daily interactions
Extends reach beyond metros to >1,100 towns and PIN codes across the country
Mobile vans provide NCD, cancer, and tele‑ophthalmology screening; data shared with ASHA workers and district health authorities
Rural‑adapted solutions enable faster, cheaper, earlier diagnosis
A strong focus on AI‑enabled preventive, diagnostic and workflow solutions that improve clinical decision‑making, early disease detection and operational efficiency across the health system [19-24][24-27][27-29][30-33][47-49][55-58][61-62][63-64][72-74].
Speakers: Speaker 1
Clinical intelligence engine delivers cumulative data to doctors; 3.5 M API calls
Disease‑prediction and risk‑scoring models for cardiac, diabetes, hypertension, etc., to target a 1.4 B population
Multimodal imaging and signal AI synthesize data for causal interpretation, aiding clinicians
Acute‑care pathways predict sepsis 24‑48 h early; scaling could save countless lives
Embedded AI in ultrasound detects NAFLD early, preventing liver disease progression
AI‑driven risk scoring for lifestyle changes, validated with Solventum/3M partnership
Prediabetes AI algorithm used on 450 k people; aims to reach 85 M diabetics
Partnership with Google for TB detection on X‑ray; other AI tools for brain‑bleed detection
Clinician co‑pilot summarises records, saving 1‑1.5 h of doctor time per day
Commitment to rigorous validation, regulatory compliance and ethical governance to ensure AI tools are safe, trustworthy and scalable [34-37][38][40-44][82-84].
Speakers: Speaker 1
Throughput optimisation automates billing, eliminates waiting, auto‑populates records
Regulatory milestones: MDSAP approval for 19 tools, FDA clearance for 9
EASE framework addresses ethics, adoption, suitability, and explainability to ensure trustworthy AI deployment in healthcare
Extensive validation work is essential to move pilots into mainstream practice
Advocacy for an integrated, ecosystem‑wide health system that bridges public‑private divides, removes systemic barriers and creates a self‑reinforcing innovation cycle [91-94][94-96].
Speakers: Speaker 1
Interconnect public & private sectors, primary & advanced care, research institutions, startups to build new solutions
Creates a flywheel driving health productivity, economics, and new predictive/preventive algorithms
Calls for removing skill and regulatory gaps to achieve a place‑agnostic, participatory health system for every community
Unexpected Consensus
Explicit linking of AI‑driven clinical tools with large‑scale regulatory approvals (MDSAP and FDA) in a single health system narrative
Speakers: Speaker 1
Regulatory milestones: MDSAP approval for 19 tools, FDA clearance for 9
Throughput optimisation automates billing, eliminates waiting, auto‑populates records
EASE framework addresses ethics, adoption, suitability, and explainability to ensure trustworthy AI deployment in healthcare
While many health-tech presentations separate technical innovation from regulatory discussion, the speaker consistently intertwines AI capabilities with concrete regulatory achievements and ethical frameworks, indicating an unexpected depth of alignment between innovation and compliance [38][34-37][40-44].
Overall Assessment

Speaker 1 presents a highly cohesive vision where equitable digital access, AI‑enabled prevention and diagnosis, rigorous validation/ethical governance, and ecosystem integration are repeatedly reinforced across all arguments.

Strong internal consensus – the same speaker repeatedly aligns multiple thematic strands, suggesting a unified strategic direction that could drive coordinated policy and investment actions across health, technology and regulatory domains.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only statements from Speaker 1; no other speakers are present, and therefore no contrasting viewpoints or debates are evident. All presented points are articulated by a single speaker, indicating an absence of disagreement within the provided material.

None – with only one participant present, there are no conflicts or divergent perspectives to shape the discussion of these topics.

Takeaways
Key takeaways
Healthcare in India must be equitable, cost‑effective, and independent of a patient’s zip code, emphasizing sustainability, prevention, and early detection.
India’s high out‑of‑pocket spending drives innovation, low costs, a growing workforce of doctors and nurses, and a large AI talent pool (~600,000 engineers).
Apollo 24‑7 serves as a digital front door for medicines, diagnostics, health records, and AI assistance, reaching 45 M users with ~1 M daily interactions and extending services to >1,100 towns and PIN codes.
AI is deployed across five major streams: clinical intelligence engine, disease‑prediction/risk‑scoring, multimodal imaging and signal interpretation, acute‑care pathways (e.g., early sepsis prediction), and throughput optimisation (billing, wait‑time reduction, auto‑population of records).
Regulatory progress includes MDSAP approval for 19 AI tools and FDA clearance for 9 tools.
The EASE framework (Ethics, Adoption, Suitability, Explainability) is proposed as a baseline for trustworthy AI deployment in healthcare.
Preventive care initiatives use AI for early detection of NAFLD via ultrasound, lifestyle‑risk scoring (validated with Solventum/3M), and a pre‑diabetes algorithm already applied to 450 k people, with a goal to reach 85 M diabetics.
Radiology collaborations (e.g., with Google) enable AI‑based TB detection on X‑rays and brain‑bleed detection; the Clinician Co‑Pilot saves 1–1.5 hours of doctor time per day by summarising records.
Rural health delivery leverages mobile vans for NCD, cancer, and tele‑ophthalmology screening, sharing data with ASHA workers and district health authorities to enable faster, cheaper, earlier diagnosis.
Extensive validation is highlighted as essential to move pilots into mainstream practice.
A future health‑system vision calls for integrated public‑private ecosystems linking primary, advanced, research, and startup sectors to create a predictive, preventive, personalized, participatory, place‑agnostic system.
Resolutions and action items
Apollo seeks partnerships to co‑develop and scale AI tools (clinical intelligence, disease prediction, imaging, acute‑care pathways, throughput optimisation).
Call to remove skill gaps and regulatory barriers to enable broader AI adoption across the health system.
Commitment to continue validation work to transition pilots into mainstream clinical use.
Invitation for stakeholders (research institutions, pharma, manufacturers, startups) to join a collaborative ecosystem for building next‑generation health solutions.
Unresolved issues
How to scale acute‑care AI pathways (e.g., sepsis prediction) from 2,000 ICU beds to potentially 100,000 beds nationwide.
Mechanisms for sustained funding and business models to support widespread preventive‑care AI tools for the 1.4 B population.
Specific regulatory processes needed to accelerate approval of additional AI algorithms beyond the current 19 MDSAP and 9 FDA clearances.
Operational details for integrating AI‑driven risk scoring into everyday lifestyle guidance for diverse risk groups.
Strategies for ensuring data privacy, security, and interoperability across the extensive digital ecosystem (Apollo 24‑7, rural mobile vans, ASHA workers, etc.).
Concrete plans for training and upskilling the expanding clinical workforce to effectively use AI decision‑support tools.
Suggested compromises
None identified
Thought Provoking Comments
India’s health care advantage stems from having one of the highest out‑of‑pocket payments, which forces innovation and low‑cost solutions, combined with a rapidly growing workforce of doctors, nurses, and over 600,000 AI engineers.
It reframes a common perception that high out‑of‑pocket spending is purely a problem, turning it into a catalyst for technological and talent development unique to India.
This observation set the stage for the entire talk, justifying why India can lead in AI‑driven health care and prompting the audience to view cost pressures as an opportunity rather than a barrier.
Speaker: Speaker 1
Apollo 24‑7 is our digital front door – a platform where users can buy medicines, order diagnostics, store health records, ask queries via Apollo Assist, and interact daily; we now have over 45 million users and close to a million daily interactions.
It introduces a concrete, large‑scale example of consumer‑facing health‑tech that bridges the gap between high‑end hospital services and everyday patient needs.
By quantifying user adoption, the comment shifted the conversation from abstract AI concepts to measurable impact, highlighting scalability and encouraging listeners to consider digital front‑ends as a core health‑system component.
Speaker: Speaker 1
Our AI platform is organized into five work streams: clinical intelligence engine, disease‑prediction risk scores, multimodal imaging interpretation, acute‑care augmented pathways (e.g., sepsis prediction 24‑48 hrs early), and throughput optimisation.
It provides a clear taxonomy that structures a complex ecosystem, making the breadth of AI applications understandable and showing how each piece contributes to outcomes.
This categorisation acted as a turning point, moving the narrative from a single success story to a systematic, multi‑layered AI strategy, inviting deeper questions about integration and governance.
Speaker: Speaker 1
We have introduced the EASE framework – Ethical, Adoption, Suitability, Explainability – to ensure every AI algorithm is used responsibly and is understandable by health‑care workers.
It brings ethics to the forefront of a technology‑heavy discussion, acknowledging the critical need for trust and transparency in AI deployment.
The mention of EASE pivoted the tone from pure innovation to responsible innovation, prompting the audience to consider regulatory and trust issues alongside technical capabilities.
Speaker: Speaker 1
For every 1,000 people screened, we avert major crises in 11 individuals – highlighting the massive return on investment of preventive care versus curative interventions.
It quantifies the value of prevention, challenging the prevailing focus on treatment‑centric models and emphasizing population‑health economics.
This statistic redirected attention toward preventive strategies, leading the speaker to discuss AI‑enabled early detection (e.g., NAFLD via ultrasound) and setting up a narrative about shifting resources upstream.
Speaker: Speaker 1
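The 11-per-1,000 figure above implies a simple back-of-envelope calculation. As a purely hypothetical sketch (the talk does not claim the rate generalizes to the full population, and the population figure is the 1.4 B cited elsewhere in the session), the implied scale would be:

```python
# Hypothetical back-of-envelope projection of the screening statistic.
# Assumption (not claimed in the talk): the 11-per-1,000 crisis-aversion
# rate holds unchanged across the entire population screened.
screened_per_cohort = 1_000
crises_averted_per_cohort = 11

# Implied crisis-aversion rate per person screened
aversion_rate = crises_averted_per_cohort / screened_per_cohort  # 0.011, i.e. 1.1%

# India's ~1.4 B residents, as cited in the session
population = 1_400_000_000
projected_crises_averted = round(population * aversion_rate)

print(f"Aversion rate: {aversion_rate:.1%}")                       # 1.1%
print(f"Projected crises averted: {projected_crises_averted:,}")   # 15,400,000
```

This is only an illustration of the order of magnitude the speaker's preventive-care argument rests on, not a validated epidemiological estimate.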
Our AI‑driven ultrasound can detect non‑alcoholic fatty liver disease (NAFLD), which affects 40 % of Indian adults, allowing early intervention before the need for liver transplant.
It showcases a specific, high‑impact use‑case where AI directly changes clinical pathways for a prevalent condition.
By grounding the discussion in a tangible disease, the comment deepened the conversation, illustrating how AI moves from theory to bedside and reinforcing the preventive‑care theme.
Speaker: Speaker 1
The Clinician Co‑Pilot synthesises records and saves one to one‑and‑a‑half hours of doctor time per day, while the Nurse Pilot extends similar efficiencies to nursing staff.
It highlights productivity gains for clinicians, addressing a major pain point—burnout—through AI augmentation rather than replacement.
This point shifted the dialogue toward workforce sustainability, linking AI benefits to human resource challenges and setting up later remarks about reducing staff burnout.
Speaker: Speaker 1
We are extending these solutions to rural India via mobile vans, non‑communicable disease screening, tele‑ophthalmology, and by empowering ASHA workers with AI‑enabled diagnostics.
It expands the scope from urban tertiary hospitals to underserved rural populations, emphasizing equity and scalability.
This transition broadened the conversation from a hospital‑centric view to a national health‑system perspective, reinforcing the earlier claim that health care should not be defined by zip code.
Speaker: Speaker 1
Validation, not just pilots, is the bottleneck that moves innovation into mainstream practice; we are among the leaders in large‑scale validation across research, pharma, and manufacturing partners.
It identifies a systemic obstacle—lack of rigorous validation—and positions the organization as a bridge between experimentation and adoption.
By calling out validation, the speaker steered the discussion toward implementation science and collaboration, paving the way for the final call to collective action.
Speaker: Speaker 1
We must build health systems of the future that connect public and private sectors, primary and advanced care, research institutions, startups, and that are predictive, preventive, personalized, participatory, and place‑agnostic.
It synthesises all previous points into a visionary framework, proposing a holistic, ecosystem‑wide transformation rather than isolated tech projects.
This concluding vision acted as a culminating turning point, tying together ethics, technology, prevention, rural outreach, and collaboration, and leaving the audience with a clear, aspirational roadmap.
Speaker: Speaker 1
Overall Assessment

The discussion was driven by a single, highly detailed presentation, but within it several pivotal comments redirected the narrative and deepened the conversation. Early remarks framed India’s unique market dynamics, establishing credibility for large‑scale AI deployment. Subsequent quantifications of user adoption and the five‑fold AI work‑stream taxonomy gave structure to the technical narrative. Introducing the EASE ethical framework and the concrete preventive‑care statistics shifted the tone from pure innovation to responsible, impact‑focused health care. Highlights of specific AI applications (NAFLD detection, Clinician Co‑Pilot) grounded the vision in real‑world outcomes, while the pivot to rural outreach and the emphasis on validation broadened the scope from hospital‑centric pilots to national system change. The final visionary statement unified these strands into a comprehensive, collaborative health‑system model. Collectively, these key comments transformed the monologue from a product showcase into a strategic call for ecosystem‑wide, ethically guided, preventive, and equitable AI‑enabled health care in India.

Follow-up Questions
How can the early‑warning sepsis AI algorithm be scaled and deployed across 100,000 ICU beds to maximize lives saved?
Scaling this predictive model could dramatically reduce sepsis mortality nationwide.
Speaker: Dr. Pratap Siredi
What partnerships are needed to co‑develop and accelerate AI‑driven healthcare solutions?
Collaborations with technology firms, academia, and industry are essential to bring innovations to scale.
Speaker: Dr. Pratap Siredi
What robust validation frameworks are required to move AI pilots into mainstream clinical practice?
Rigorous validation ensures safety, efficacy, and regulatory acceptance of AI tools.
Speaker: Dr. Pratap Siredi
How can skill gaps and regulatory barriers be removed to enable rapid AI adoption in healthcare?
Addressing workforce training and policy hurdles is critical for wide-scale implementation.
Speaker: Dr. Pratap Siredi
How effective is embedded AI in ultrasound machines for early detection of non‑alcoholic fatty liver disease (NAFLD) in the Indian population?
Early NAFLD detection can prevent progression to severe liver disease and reduce transplant needs.
Speaker: Dr. Pratap Siredi
Can lifestyle‑risk scoring be quantified and differentiated between high‑risk and low‑risk groups to guide personalized interventions?
A validated risk score would allow targeted preventive measures for non‑communicable diseases.
Speaker: Dr. Pratap Siredi
What strategies are needed to scale the AI‑based pre‑diabetes tool from 450,000 users to the estimated 85 million diabetics in India?
Broad adoption could improve diabetes management and reduce complications at a population level.
Speaker: Dr. Pratap Siredi
How can AI models for radiology (e.g., TB detection, brain‑bleed identification) be further refined and validated across diverse settings?
Enhanced accuracy and generalizability are vital for rapid, reliable emergency diagnostics.
Speaker: Dr. Pratap Siredi
What is the impact of the Clinician Co‑Pilot on physician workflow efficiency and patient outcomes?
Measuring time saved and care quality will justify wider deployment of the tool.
Speaker: Dr. Pratap Siredi
How can the Nurse Pilot and integrated Care Console be expanded to improve monitoring in ICUs, homes, and rural wards?
Extending these solutions could reduce burnout, improve early detection, and standardize care.
Speaker: Dr. Pratap Siredi
What biomarkers and genetic tests from the biobank can be leveraged to enhance disease‑prediction algorithms?
Integrating genomics could increase predictive power for personalized preventive care.
Speaker: Dr. Pratap Siredi
Which population‑level risk‑prediction models (cardiac, diabetes, hypertension) should be prioritized for resource allocation in a 1.4 billion‑person country?
Targeted focus helps optimize public‑health interventions and cost‑effectiveness.
Speaker: Dr. Pratap Siredi
How can AI‑driven throughput optimization (billing, zero‑waiting‑time, ambient data capture) be implemented to reduce healthcare costs?
Operational efficiencies directly affect affordability and patient satisfaction.
Speaker: Dr. Pratap Siredi
What is the effectiveness of mobile‑van AI screening programs for non‑communicable diseases, cancer, and tele‑ophthalmology in rural India?
Assessing outcomes will guide scaling of outreach services to underserved areas.
Speaker: Dr. Pratap Siredi
How can a future health‑system architecture be built to seamlessly connect public and private sectors, primary and advanced care, research institutions, and startups?
A unified ecosystem is essential for sustainable, equitable, and innovative healthcare delivery.
Speaker: Dr. Pratap Siredi
How can the EASE framework (Ethical, Adoption, Suitability, Explainability) be operationalized across all AI deployments in Indian healthcare?
Ensuring ethical and transparent AI use builds trust and regulatory compliance.
Speaker: Dr. Pratap Siredi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Leveraging AI4All: Pathways to Inclusion

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion centered on the launch of a report examining AI and inclusion, featuring insights from technology leaders and practitioners working across various sectors. Nirmal Bhansali presented key findings highlighting that good technology alone doesn’t automatically create inclusion, and that AI deployment must address multi-layered access problems including connectivity, skills, and user interfaces. The report identified three interconnected pillars for inclusive AI: design, access, and investment, emphasizing the need for participatory design that involves end users from the beginning.


Several panelists shared practical examples of inclusive AI implementation. Arghya Bhattacharya from Adalat AI described how their legal technology addresses India’s justice system challenges by creating multilingual tools for courts while avoiding potentially harmful AI applications like legal advice. Olivier from Rwanda’s AI Scaling Hub explained their approach of “building the plane as we fly it,” developing AI solutions in Kinyarwanda while simultaneously building the necessary digital infrastructure. Archana Joshi highlighted the business case for inclusion, noting that companies increasingly recognize that inclusive design isn’t just charitable work but smart business strategy.


The discussion revealed that successful inclusive AI requires addressing real-world constraints like limited internet connectivity, diverse language needs, and varying technological literacy levels. Participants emphasized that accessibility-first design benefits everyone, not just marginalized communities, and that the “purple economy” of people with disabilities represents a significant market opportunity worth $150 billion in India alone. The conversation concluded with recognition that while AI can expand access and opportunity, success depends on building durable, equitable, and sustainable systems that prioritize inclusion from the outset rather than treating it as an afterthought.


Keypoints

Major Discussion Points:

Multi-layered Access Challenges in AI Implementation: The discussion emphasized that good technology alone doesn’t automatically include people. Key barriers include connectivity issues, skills gaps, interface design problems, and the need to consider diverse community needs. The “last mile gap” remains a significant obstacle to AI adoption.


Design-First Approach to Inclusion: Panelists stressed the importance of embedding inclusion from the very beginning of AI development through participatory design. This includes involving end users (like ASHA workers, judges, farmers) directly in the design process, understanding real-world constraints like low bandwidth environments, and ensuring products work offline when necessary.


Investment and Procurement Policy Reform: The conversation highlighted how traditional procurement processes are too slow for rapidly evolving AI technology. Rwanda’s “public procurement for innovation” approach and the use of non-profit vehicles (like Adalat AI) were presented as creative solutions to navigate bureaucratic challenges and align incentives better.


Language as a Foundation for Inclusion: Multiple speakers emphasized that local language support is crucial for AI adoption, particularly in countries like India and Rwanda. The discussion covered challenges with low-resource languages like Kinyarwanda and the business imperative of supporting local languages rather than defaulting to English-only solutions.


Business Case for Inclusive AI: The panel demonstrated a shift from viewing inclusion as a CSR initiative to recognizing it as sound business strategy. Examples included the “purple economy” (assistive tech market worth $150 billion in India alone) and how accessible design benefits everyone, not just people with disabilities.


Overall Purpose:

The discussion aimed to present findings from a research report on AI and inclusion while showcasing real-world examples of how organizations are successfully implementing inclusive AI solutions. The session served as both a report launch and a practical guide for building, scaling, and investing in AI systems that work for diverse populations.


Overall Tone:

The discussion maintained a consistently optimistic and solution-oriented tone throughout. Speakers acknowledged significant challenges but focused on practical approaches and success stories. The tone was collaborative and educational, with panelists building on each other’s insights. There was a notable shift from discussing problems in the beginning to emphasizing actionable solutions and business opportunities by the end, reflecting the summit’s goal of moving from awareness to implementation.


Speakers

Speakers from the provided list:


Nirmal Bhansali – Presented findings on AI and inclusion, focusing on healthcare, finance, education, and urban planning sectors


Moderator – Facilitated the event and panel discussions


Rutuja Pol – Partner at Ikigai Law, moderated the panel discussion on AI inclusion and access


Arghya Bhattacharya – Founder/representative of Adalat AI, working on AI solutions for courts and justice system efficiency


Speaker 1 – Representative from Rwanda AI Scaling Hub, working on AI implementation aligned with national priorities for socioeconomic development


Archana Joshi – Works with businesses across healthcare, BFSI, and education sectors for digital transformation, served as jury member for AI by Her


Agustya Mehta – Works at Meta, involved in design and development of AI-powered hardware including Ray-Ban Meta glasses, focuses on accessibility-first innovation


Additional speakers:


None identified – all speakers in the transcript correspond to the provided speaker names list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive discussion centred on the launch of a research report examining AI and inclusion, featuring insights from technology leaders and practitioners working across justice systems, national AI strategy, enterprise consulting, and product development. The session served both as a report launch and a practical guide for building, scaling, and investing in AI systems that work for diverse populations.


The Multi-Layered Challenge of AI Inclusion


Nirmal Bhansali opened the discussion by challenging fundamental assumptions about AI and accessibility, arguing that “good technology by itself does not bring in or include people” and that “by adding AI, you’re automatically not going to include more.” This counterintuitive insight established that AI might actually create additional barriers rather than removing them, setting the stage for a nuanced examination of inclusion challenges.


The report identified three interconnected pillars essential for inclusive AI: design, access, and investment. The design pillar emphasises embedding inclusion from the start through participatory approaches that involve end users directly in the development process. The access pillar focuses on ensuring AI systems work in real-world conditions, acknowledging that significant portions of the global population still lack reliable internet access. The investment pillar calls for aligning procurement policies, capital allocation, and market incentives to reward accessibility and open standards.


Bhansali highlighted successful examples of inclusive AI tools, including Shishumapin from Badbani AI for ASHA workers, the Be My Eyes feature in Ray-Ban glasses, and the YesSense access app, demonstrating that practical solutions already exist across different sectors.


Practical Implementation in Justice Systems


Arghya Bhattacharya from Adalat AI provided compelling insights into AI implementation within India’s justice system, offering the profound observation that “justice in these settings is really not a question of law. It’s become a question of logistics.” This systems-thinking approach identified operational efficiency as the root challenge rather than legal knowledge or jurisprudence.


Adalat AI’s approach illustrates both direct and indirect pathways to access. The direct approach addresses the “information darkness problem” through a multilingual WhatsApp chatbot that allows citizens to check case status and hearing dates without navigating multiple layers of intermediaries. The indirect approach involves making judicial institutions more efficient through tools like multilingual legal transcription that understands Indian accents, dialects, and legal terminology.


Particularly innovative was Adalat AI’s use of a non-profit model to overcome procurement and trust barriers. This approach automatically addressed concerns about data usage and judge profiling whilst aligning incentives with court needs. Within two years, the organisation expanded to nine Indian states, with Kerala mandating its use for all witness depositions. The non-profit pathway also enabled courts to develop technical expertise and better draft future RFPs for AI procurement.


A crucial implementation insight emerged around basic digital literacy challenges. Bhattacharya noted that judges couldn’t update Chrome browsers before learning AI tools, highlighting how fundamental digital skills gaps can derail sophisticated AI implementations. This led to the integration of AI training into official judicial curricula through the Adalat AI Academy.


National AI Strategy and Infrastructure Development


Olivier from Rwanda’s AI Scaling Hub introduced the compelling metaphor of “building the plane as we fly it,” describing Rwanda’s approach to simultaneous AI implementation and digital infrastructure development. Rwanda’s AI Scaling Hub focuses explicitly on scaling rather than piloting, with a mission to drive AI implementation aligned with national socioeconomic development priorities.


The language challenge proved particularly acute for Rwanda, where Kinyarwanda is spoken by the entire population but represents a low-resource language for AI applications. The country is simultaneously building datasets for text and voice whilst implementing AI solutions.


Rwanda’s procurement innovation addresses the fundamental mismatch between traditional government purchasing processes and rapidly evolving technology. Their “public procurement for innovation” approach brings together potential solution providers for competitive development rather than lengthy traditional processes that often result in outdated technology by implementation time.


Enterprise Adoption and Business Case Evolution


Archana Joshi’s enterprise consulting perspective revealed the evolving corporate approach to AI inclusion through three distinct scenarios. A humanitarian agency required AI systems that function offline during emergencies when connectivity fails. A global bank sought to make financial literacy videos accessible to hearing-impaired users through AI-generated sign language. Most challenging was an insurance company initially resistant to multilingual implementation due to ROI pressure, preferring English-only deployment despite serving primarily Hindi-speaking customers.


Joshi strongly cautioned against positioning inclusion as a Corporate Social Responsibility (CSR) initiative, arguing that “if you position inclusion as a CSR initiative, you are also going to get budgets which match the CSR initiatives, which don’t necessarily translate to good products or make good economic sense.”


The economic dynamics of inclusive AI are shifting favourably due to decreasing dataset costs through government initiatives like India’s AI Kosh, which provides diverse, locally-relevant datasets. This addresses the traditional challenge where inclusive AI required significantly higher investment in data acquisition and cleaning.


Accessible Design as Innovation Driver


Agustya Mehta from Meta provided philosophical grounding for inclusive design, emphasising that “accessible design is good design” and “universal design is good design.” This perspective reframes accessibility from a constraint to an innovation catalyst, supported by historical examples where mainstream technologies originated from accessibility efforts.


Meta’s Ray-Ban smart glasses development illustrated how product evolution can diverge from initial plans, with user behaviour revealing greater demand for music capabilities than originally anticipated. The AI functionality that now defines the product wasn’t part of the original product plan, demonstrating the importance of remaining responsive to user feedback.


The “nothing about us without us” principle proved central to Meta’s approach, emphasising diverse team composition and direct user involvement from target communities rather than token consultation.


Language Localisation as Foundation


Multiple speakers emphasised language localisation as fundamental for AI inclusion, moving beyond English-default approaches that exclude significant user populations. The business imperative for multilingual AI became clear through examples like the insurance company case, where English-only deployment would alienate the majority of customers in Hindi-speaking regions. However, corporate resistance often stems from ROI pressure rather than technical limitations.


Scaling Challenges and Investment Innovation


The discussion revealed that many AI products remain stuck in pilot stage due to surrounding system challenges rather than core technology limitations. Key barriers include last-mile diffusion problems, inadequate funding mechanisms, and limited institutional support for scaling.


Successful scaling requires understanding real-world deployment constraints including connectivity limitations, device capabilities, and user digital literacy levels. The integration of AI training into official professional curricula provides a model for sustainable adoption that builds institutional capacity over time.


Government initiatives like India’s AI Kosh demonstrate how public sector intervention can reduce barriers to inclusive AI development by providing accessible, diverse datasets. Rwanda’s innovation-friendly procurement allows competitive selection and agile development cycles, whilst non-profit pathways provide alternative routes that build institutional trust and technical expertise.


Future Directions


Despite significant progress, several challenges remain unresolved. The fundamental scaling problem persists across sectors and geographies, requiring continued research on effective mechanisms for moving beyond pilot implementations. The tension between demonstrating quick ROI to stakeholders whilst implementing truly inclusive design from the start remains difficult to navigate.


The global challenge of populations lacking internet access requires continued innovation in offline-capable AI systems and alternative connectivity solutions. Legal intelligence applications require careful research to identify safe use cases whilst avoiding potentially harmful applications.


Conclusion


The discussion demonstrated remarkable consensus around core principles of inclusive AI development, with speakers from diverse sectors arriving at similar conclusions about design requirements, implementation challenges, and business approaches. The conversation successfully shifted from viewing inclusion as a cost centre to recognising it as a revenue opportunity, and from treating accessibility as an add-on to understanding it as an innovation driver.


Most significantly, the discussion revealed that whilst AI can expand access and opportunity, success depends on building durable, equitable, and sustainable systems that prioritise inclusion from the outset rather than treating it as an afterthought. The three-pillar framework of design, access, and investment provides a practical roadmap for organisations seeking to implement AI systems that truly serve diverse populations.


Session transcript: Complete transcript of the session
Nirmal Bhansali

healthcare, finance, education, urban planning, but I’m going to only focus on a few for this particular evening. First, access is a multi-layered problem. Good technology by itself does not bring in or include people. By adding AI, you’re not automatically going to include more. The last mile gap is still a problem. You need to be able to focus on connectivity, on skilling, and on the interfaces that people use. You must take into account the needs and wants of multiple communities. One of the other key observations that was important for this was understanding the power of the purple economy: the market of assistive tech products for persons with disabilities and people with special needs. These are often perceived to be on the margins of our reality, but they are not.

With one of the largest populations of persons with disabilities in the world, India alone represents a potential market of $150 billion in this space. These are people who can purchase. These are people who can access these products. We need to be building for them. It’s not a charitable cause. It’s a simple business proposition. Second, a lot of AI products are stuck in the pilot stage. You often have a great idea, but you’re not able to execute it. This happens for a lot of reasons, but fundamentally, they’re usually around the surrounding system. Like I mentioned, last-mile diffusion, funding, or limited support to be able to scale them up. Third, and this is something you have seen across the summit, language is foundational for enabling inclusion.

Whether it is a banking system which is using voice AI for credit facilities or an educational AI tutor which you made for a rural village in India, all of them need to be understood in the local context where they are operating. And this is something you would have seen across the summit in the various exhibition halls over the past few days. And the last one is institutional capacity. This can be a make-or-break variable as well. What you’re going to see is that a lot of governments need to build technical expertise in the space of AI; we need departments to understand this further. This is already happening, and once you see this, you will see it reflected in the procurement standards and technical specifications that these departments are making, and this will lead to increasing adoption. As a result of these findings, what do we have to suggest in the report? There are three interconnected pillars, like I mentioned in the beginning: design, access and investment. Anything around AI and inclusion needs to take this into account. First, looking at design, you need to ensure that you’re embedding inclusion from the start. A lot of AI systems are shaped very early, and at that stage our recommendation is to have participatory design: involve the people as you’re building it out. If you’re making something for ASHA workers and you don’t involve them, that product is bound to fail. Second, access.

This is where you have to make sure AI is usable in real-world conditions. I know we’re at the AI Impact Summit, but something which you need to know is at least 33% of the world, that’s 2.6 billion people, still don’t have access to the internet. So when you’re thinking about building AI tools, you need to take into account those real-world contexts, low bandwidth environments; not everyone has high speed internet or a full-fledged smartphone. The third is investment. We need to align procurement, capital and incentives. Governments here can play a crucial role by acting as anchor buyers for these kinds of products. By embedding standards which reward accessibility and open standards, you will be able to shape market incentives.

Creating these incentives, we believe, is very important to be able to scale inclusion through AI deployments. The last part of our report, and this is something which is my favorite, are these use cases. And our report documents a bunch of them. Over the past few days, you would have seen a lot more than we could even account for. I’m just going to focus on two or three of them, which I really like. One is Shishumapin from Badbani AI. This is a very small tool which allows ASHA workers, frontline community healthcare workers, to take a photo or a video of a newborn baby and get accurate measurements. And this is very important, and this is a very simple tool.

It can be used with low internet and can be used offline as well. Second, and you will hear from Agustya soon, I really like the Ray-Ban glasses. I even tried them out at the Meta stall here. The Be My Eyes feature of that is something which a lot of people with visual impairment are using across the world, something which helps them navigate the world around them. This is something which Meta has built by involving these people in their design process, involving them as they took decisions. And lastly, this is a shout out to the YesSense access app; you may have seen them in the stalls here. This is a very interesting tool where you go around, take photos of buildings and physical spaces and understand whether they can be accessed by people with disabilities or not, creating a database which then allows for better policymaking in the future.

The crucial thing to note in all of these use cases is that all of these products and tools follow the principles which I talked to you about. They look at design, they have been supported by different government departments and finally they are looking at low resource context environments to be deployed. I am sure at the end of these five days we know that AI is going to expand access and opportunity. The question or doubt really isn’t that. It’s whether ecosystems will now choose to build systems that are required to make this expansion durable, equitable and sustainable. Our report will be out online soon. Thanks. Thanks so much.

Moderator

Thank you so much, Nirmal, for those insightful findings. May I now request everyone on the panel to please come up for a photograph? This is the launch of the report as well, so we’ll just take a quick photograph. If you could come ahead with the report up front, Nirmal, please, along with the project team who worked on it. Yes. Thank you very much. We’re now going to move to a very interesting part of the event, which is hearing from the people who actually build these products. To take us through that we have Rutuja Pol, who’s a partner at Ikigai Law, on the panel. Rutuja, over to you.

Rutuja Pol

Thanks, Rahil, and thank you, Nirmal, for that wonderful presentation, and to the audience for staying back for so long on a Friday evening. So thank you so much. Panelists, incredibly grateful for your time. I know it’s been a very hectic week for all of you. So thank you for taking out the time. And I think Nirmal set up a really good context about the three things that we thought were important from our findings: design, access, and investment. And how do we sort of, you know, use them together to ensure that inclusion is not just a concept but really becomes common in the conversations and in all of our products. So I’ll start with Arghya, actually.

Help us understand your product. Tell us first about your product and how you went about designing it, but also how it has enabled access to justice in a country as big as India, with all of the issues that it has in the justice system.

Arghya Bhattacharya

Yeah, sure. Firstly, thank you so much for having me here. I’ll probably start by painting a picture of a district court. A lot of you, I’m sure, have been to a district court by virtue of your profession, but there are towers and towers of paper everywhere. I’m not a lawyer; the first time I went there, that was the most surprising thing for me. I saw more people writing with typewriters than computers. And then there were people spending a lot more time looking for the right files than actually going through them and understanding what’s written in them, right? And so when you look at all of these things, it becomes quite clear that justice in these settings is really not a question of law.

It’s become a question of logistics. And that’s where Adalat AI comes in. We build AI and technology to make courts more efficient at a daily and weekly level. And the hope is that when you do this at scale, you can affect the case pendency problem in a rather positive manner. Now, coming to your question of how does AI actually enable access, I think what we are seeing is that there are two tracks. One is the more direct track, and then there is the indirect track as well. When it comes to the direct track, which is how does it enable communities to access justice better, I think there is a huge information darkness problem in the country.

It’s very hard to access judicial information about your cases. If you are in one, what’s going on with it? When is your next date of hearing? And there’s always multiple layers of middlemen that you need to sort of go through to access justice. I think the one use case of AI which we feel is quite safe now is to access information easily. And to that extent, at Adalat AI, we’ve built a WhatsApp chatbot which any citizen can access. They can talk to it in any language that they want. You can just give your name and your PIN code and it’s going to tell you if you have a case. And if you do have a case, what’s going on with it?

When is your next date of hearing? What happened in the previous order? This is not suggestive in any manner. In fact, we discourage any sort of legal advice using AI models at this point; I don’t think that’s the right use. This is more around: given the information that already exists in the systems, behind rather broken, you know, sort of websites, can we kind of bridge the last-mile access? The more indirect sort of opportunity is by making the institutions of justice more efficient, which is what we do with our core judicial product. We try to make courts more productive. You know, writing everything down by hand in a courtroom is a big pain point.

Ninety percent of India’s courts don’t have stenographers. So we built a legal transcription tool which is multilingual. It can understand the legal jargon that lawyers love to use, like res judicata and whatnot; I’m not exactly a lawyer. It understands Indian accents and dialects. And what we are seeing is that courts that do use technology like this are able to improve judicial productivity two to three times. So if someone was recording two witness depositions per day, now they’re able to record four to six. Now, when you do this at scale, you can get a lot more done at a daily, weekly level, and then hopefully that helps the case pendency problem. We’re also sort of tackling a lot of other different judicial tasks, like going through thousands and thousands of pages.

Can we help them navigate it? Can we digitize the entire workflow so that you don’t have to go through a lot of bundles of paper? What we are steering away from at this point is anything that involves legal intelligence. For example, even something as simple as summarization: we don’t think it’s safe enough right now, because the summary for a citizen looks very different from the summary that you need for a judge versus a summary for a lawyer. And so that’s something that I would advise everyone to tread with caution on.

Rutuja Pol

Alright, that’s interesting. Thanks. I’m going to come back to you on the aspect of what has been safest, access to information. But, Olivier, I wanted to come to you next. One, I’m very curious to know about Rwanda’s AI Scaling Hub. And second, Kinyarwanda, if I’m pronouncing it rightly, is your go-to language, right? But it’s also a very low resource language. So when you look at using an AI tool based on that language, how has it been? Has it been incredibly difficult? What have been your learnings? And just everything about the hub, please.

Olivier

Thank you. I hope everyone can hear me. And thank you, first, for having me here. I'm happy to share. So, as she said, I come from the Rwanda AI Scaling Hub. And you wonder; she asked me a question when we were out there: why the Scaling Hub and not just the AI Hub? The whole idea is that we, as Rwanda, took the approach of working on solutions that can be scaled, so that we do not end up just having pilots and staying in pilot mode, if I can say. So the AI Scaling Hub has one main mission with two key pillars. The mission is to drive AI implementation while ensuring that those implementations are aligned with the national priorities for socioeconomic development.

We focus mainly on AI solutions. And then we have two pillars. One is to encourage or accelerate adoption by scanning the world for use cases and solutions that have succeeded elsewhere, seeing which ones should be brought to Rwanda, adapting them to the context of the country, and then implementing them to be scaled and deliver impact in society. That's one pillar. The other pillar is to build the ecosystem around it to make sure that, one, those implementations can be scaled and sustained, and two, they open up the door of possibility to create much more than this: the ecosystem of innovators and all the other institutions and key stakeholders needed to make sure this movement does not stop.

So that's because we look at AI as, you know, Rwanda as a country has taken the direction of becoming an African hub for AI research and innovation. That requires us to really go at this, and we are the Scaling Hub because we are also empowered to move as fast as possible in order to show impact. So that's, in summary, what we do. We have three key sectors that we focus on, but we are not limited to them. Since we talk about the ecosystem, we really drive this whole thing in a very agile way. We are a startup-ish type of institution, if I can say it like this; we find a way to make things happen.

So that's why. And now, talking about Kinyarwanda when it comes to AI solutions: there is something that many people in India may take for granted, but which is not the case everywhere. When the AI revolution started, India had mature DPI, which means the focus has been on implementing AI on top of existing, mature, and trusted DPI that is already in place. That's not the scenario in many places. The Rwandan approach is actually building the plane as we fly it. There is a lot of advancement in DPI; from a technical standpoint, everything is at least at 80%, but not necessarily at 100%. It's more a matter of plugging into things as we go: the DPI stack is being completed, but the AI also needs to take off alongside it. That's why looking at it holistically is key. And when it comes to Kinyarwanda: Rwanda is a small country compared to India in terms of size and population, but it is also a country with a high population density, and the entire population speaks one language, Kinyarwanda, alongside the other languages we speak. Which means that for a solution to be adopted, it needs to speak Kinyarwanda. AI did not originate in Rwanda, so AI does not speak Kinyarwanda originally. So as we build our plane, there is also the work of building the models and building the data sets for the language, be it text, be it voice, in order to get to perfection. We are doing this as we go, and there is improvement every day.

I think that a couple of years from now, we will have, I would say, a full-stack data set of the Kinyarwanda language that can operate all of this. But even right now, we are doing things. That's the approach.

Rutuja Pol

That's very fascinating. I think "building the plane as you fly it" is going to stay with me. Thank you for that. I'm going to come to Archana next and pivot a little to a B2B conversation. You help businesses across the spectrum, be it healthcare, BFSI, or education, scale up and transform digitally. What do access and inclusion mean in those rooms? How do you convince your clients that inclusion, and even access, needs to be embedded from the very first thought of their transformation journey?

Archana Joshi

Thanks for that question, Rutuja, and thanks for having me here on this panel. I'm going to take three examples, recent ones. The first example: we were working with a humanitarian agency which deals with refugee crises. They had approached us to develop an AI solution for the workers who operate on the field when a refugee crisis is happening, to see in real time where the aid should go. Because when a refugee crisis happens, assume a blast happens, something happens, there's a lot of aid that flows in. But is it reaching the right places? For that, you need to process real-time information. You need to look at what is happening on the ground, which you could be getting in bits and pieces from the representatives who are there.

You need to be able to access information that's flowing around the media. So there's a lot of data-crunching intelligence that needs to be baked in. Before AI, a lot of this relied on telephone calls and was done manually. AI helps here, but in this kind of situation, most of the time your internet doesn't work and the connectivity is down, because in these situations the connections go away. And your AI still has to work. You cannot say, "I don't know where to send the aid because my cloud connection went down, or my network didn't work, or the connection was shut down by the government at that point in time." So when you design an AI system like this, you need to figure out what needs to work offline, what should work online, and how to architect it. That becomes crucial. So that's the first example, where AI needs to be accessible and inclusive by design. I'll take a second example: a global bank, one of the largest banks in the world, approached us, and their request was, "Hey, I have a lot of financial literacy videos on my website."

Typically, those are in English, and from an accessibility standpoint there are some captions in English, but those don't necessarily serve the hearing impaired, because their first language is sign language, not English. What can AI do here? So the question was: can we use, to get a little technical, vision LLMs and some of the processes and technology out there to take videos which were probably not accessible to a large set of the population and make them accessible? So here, something existed, but you are using AI to add a wrapper on top of it. You are not accessible by design in this case, but you are trying to use AI to make it accessible.

Whereas in the first case it was accessible by design. And let me take a third case, where I was getting into quite a heated conversation with the CTO of an insurance company. They did a small POC with AI, a conversational thing: somebody calls the insurance help desk and the AI responds to whatever queries the person has called in with. And of course, as all demos and POCs do, it worked beautifully. The next question was, hey, let's scale it up. And immediately the CTO we were working with said, you know what, let's do it in English for phase one and look at other languages later.

Now, my argument was that if you do it this way, most of the folks who are calling you are ones who speak Hindi, because you are operating in that region. If you don't support that, you are alienating 70% of your customers. Then what are you putting this bot out for? Why are you even attempting it, right? And their answer was, you know what, I have to show ROI from AI, and I have to show it quickly. And hence, please still do the English one first; let's look at Hindi in phase two. Right? And you can imagine the kind of heated conversations I had trying to explain to them that that's not the right approach.

You need to be thinking of Hindi right from the start. Because if you do it this way, it will work beautifully in the demo, because it was all English and a sample data set you were working with. It may still work a little in your phase one, but in phase two it's going to fail miserably, and it will bite you even worse when it fails at that point. It was a hard conversation; we finally convinced them, but to get there, a lot of education was needed. So what I'm saying is, if you look at these three examples: in the first case, due to the nature of the business, that humanitarian agency had to be accessible by design; in the second case, because it made good business sense, the company said make whatever financial solutions we have accessible; whereas in the third case, it was a very difficult conversation about accessibility, because somebody wanted to prove to their management that AI delivers ROI. Now, if I look at where most corporates are today, the businesses actually dealing with this economy and responsible for bringing AI out there, most of them are still hovering in bucket three, the last one, where it is still not inclusive by design. They still feel, "I have a POC, I can scale it up without being inclusive with the data, with languages, with other things, and I can do that in later phases."

So this was the story through all of last year. This year, thanks to the summit and more and more forums like these, businesses are appreciating the fact that if they don't do inclusive by design, they are leaving money on the table; it's just plain, smart, good business. So I think the conversations in boardrooms and in corporates are now shifting, where the ask is not necessarily "get me the ROI and prove that AI works," but "make AI sustainable and working for me for the long term," which means it has to be inclusive. So that's what I would say.

Rutuja Pol

That's wonderful. Kudos to the summit: it certainly made the conversation about inclusion common and finally brought it into the boardroom. So I think that's a good takeaway. Moving on from the third bucket: Agastya, I wanted to come to you. What we've seen in the research findings of our report is that in many ways AI is a force multiplier; it is going to enable things at a much faster pace and a much larger scale, right? So tell us a little more about the back end of the design team at Meta. When you look at designing a particular device, what are the instructions you give your team, the A, B, C, D they need to follow, so that the device you're creating is definitely inclusive?

So that it respects the needs of the people it's going to be useful for.

Agustya Mehta

Take curb cuts, the divots on sidewalks that allow wheeled devices to transition from a sidewalk to a street to cross the street. They are ubiquitous in the United States due to regulatory pressure to protect the rights of people with disabilities who use wheelchairs. But anyone who's encountered them while using a pram or stroller, a trolley or shopping cart, or luggage has benefited. They just make cities better. And so taking the extra step and thinking holistically, rather than just responding to regulatory pressure, which of course is still an important component, is critical to making the end result good. I don't think anyone's perfect, but I'm doing my best to instill this mindset within Meta.

Rutuja Pol

All right. I mean, yes, I don't think anyone's perfect, and we're all trying our best. That's a good takeaway from the summit and everything we've learned here. I wanted to pivot to the conversation around investments: how do you create sustainable pathways for inclusive AI, in the context of India or even globally for that matter? And I first wanted to come to Olivier again. Could you give us some idea of how you went about making the procurement policy for the national AI strategy, which I understand is very innovation-friendly? What were the considerations behind it, and how have you seen it pan out on the ground so far?

Olivier

That's a good question. So let me paint a picture a little bit, so you see the whole journey of how we got there. Procurement is mostly seen in the public sector. And, you know, we are in a country where accountability is expected from everyone, and when it comes to public funds, it's at another level altogether. Which means that classic procurement, if I can say, takes a lot of time, in order to avoid any conflict of interest in the process. But when it comes to the ICT space, most products are innovation products. Look at the journey: he was talking about graphical user interfaces and touch screens; look at social media, he's from Meta, you know, Facebook before it became Meta. If you look at the journey, you will see that in this space there is a change, a new thing, every three years. In 2023 we were talking more about DPI and DPGs, and people were even having a hard time differentiating the two. And now, three years later, we are talking about AI as if it's a new thing, but it's basically the large language models that are new, born of the social media revolution that produced a lot of data sets and created something we can interact with.

If you go through old-style procurement, you can try to buy 10 phones and it takes you three years, which means that by the time you follow the process, things have changed. You may have the right process, but not the right product, because things have changed. That's how the idea of the public procurement for innovation concept came about, applied in some spaces, to some categories. Let's consider a way where, instead of going through the classical timeline, we bring together key players, potential institutions that can deliver the solution we see is needed, and then give them a chance. They compete to see who can best deliver on this, and then they are empowered to do it.

So we go into more of an agile mode, with small-step development along the way that can adapt to change, instead of waiting through that long process and ending up with a product that is no longer relevant to the market or to what we need to respond to. Or maybe it's relevant, but way too old. So imagine: now we are at iPhone, what, 17? How many times have we seen these evolutions? Think about a process that started five years ago. That works for building roads, but not necessarily for technology projects. So that's a bit of the picture of how we ended up here.

Rutuja Pol

That's interesting. Even for us in India, it's often been that the law and the policies are playing catch-up with the tech. So you really need to find a creative way of regulating emerging tech smartly. Arghya, I know you have a lot of thoughts on this one, especially around the procurement rules and how courts adopt your product. Please do come in. We'd love to hear more: how do you think the existing procurement rules have shaped the way you've been able to access the courts and deploy your product there? And what do you think needs to change so that it's faster and more usable for the courts?

Arghya Bhattacharya

Yeah, I think I'll take a more solution-oriented approach. We could talk a lot about the problems of policy playing catch-up with tech, but I'll take a rather solution-oriented approach to how we've worked with the courts at Adalat AI. When we started Adalat AI, which is about two years back, AI was very new. Courts were still working to adopt generic software technology, so AI was extremely new, right? I think a couple of things worked really well for us. Number one was to build painkillers before vitamins when it comes to solutions. We went after a very big pain in courts, which is that judges have to write everything down by hand.

And so when we say, hey, there is this new technology, but it solves a really big pain point of yours, this is not a vitamin, this is something that you are all struggling with, they are a lot more open to adopting technology. But in terms of the creative solution around procurement, I want to emphasize that nonprofits as a vehicle for creating impact are highly underrated, specifically in the space of justice and law. You know, there are all these nonprofits that work in education and healthcare to support doctors and teachers, but not enough nonprofits doing this to support our court staff and justices in the country. And so Adalat AI is exactly that.

Now, what do I mean by nonprofit as a vehicle? Being a nonprofit helped us align incentives with the courts better. It automatically took away a lot of the stress around, oh, what are they going to do with my data? Are they going to profile the judges? It took away a lot of stress around, okay, are they going to charge me? How am I going to evaluate the new technology? So this helped us get into courts initially. And within two years, we are now in nine Indian states. We are in one out of every five courts in the country. And through a historic mandate by Kerala, it actually became mandatory to use Adalat AI in every courtroom in the state to record witness depositions.

It's absolutely not allowed to do this by hand anymore. And I do think that this impact vehicle really helped us do that. On the other side, at the end of the day, courts and all institutions eventually need RFPs. They need to sanction budgets, and they need to make sure they pick the right player. Being a nonprofit, one of the ways we are able to influence this process is that, now that they've been able to work with us, they have a lot more experience of what it means to scale these products. Their tech teams have a lot more experience of working with us and knowing what they actually need out of these products.

And so they have a lot more ideas of how to draft the RFPs. And I think that's the other big benefit of being a nonprofit: all these nonprofits in the ecosystem are able to help these institutions design better RFPs when they actually do go and procure solutions.

Rutuja Pol

Right. That's interesting. I love the Kerala example. I wish to see that happening across all states in the country sooner. But I wanted to move to Agastya now. I know the Meta Ray-Ban glasses represent a significant investment for Meta in AI-powered hardware, right? From the inside, help us understand: how do investment priorities shape the design journey for that product?

Agustya Mehta

That's a good question. And in reality, sometimes the plan or the intent doesn't necessarily match where things land. For example, the Ray-Ban Stories, the first iteration of smart glasses we shipped, were great. They had some really cool features. When we built them, we initially thought the use cases would be around taking pictures, and audio would just be used for making phone calls. Meanwhile, I and a couple of other engineers were doing hackathon projects combining multimodal AI to help blind and low-vision people; this was before the AI hype had caught the zeitgeist of the industry. Then in the next iteration, the big focus came from finding that people were using the glasses for music much more than we expected.

And so we thought the biggest use case, the biggest investment, would be making the speakers better for Ray-Ban Meta version 1. We did that, and the music and audio quality was much better. But you'll notice something missing from the product plan I mentioned for both of those products: AI, which is now not only front and center, it's literally how we market these glasses. They're AI glasses. I say this not to drive cynicism, but nobody has a crystal ball. So I think the key thing is learning to be nimble, understanding the direction things are going, and being able to jump on trends, versus being too fixated on the original plan and maybe giving in to the sunk cost fallacy.

I love the painkiller versus vitamin analogy. And maybe adding to that: the really important thing is to avoid the temptation of eating the candy before either of those two. That's my take on it.

Rutuja Pol

That's interesting. Thanks so much. Archana, I wanted to come to you with the same question, and I think you touched on it in your earlier remarks about the executive who wanted to show ROI. So really the question is: in these routine boardroom discussions with your enterprise clients, did you start by positioning inclusion as a CSR initiative or just a good-to-have in the strategy, or has that shifted significantly? Of course, barring the summit and the last two months of change in thinking, how did the pitch start for you in the past, and what was the reaction from the leaders really like?

Archana Joshi

This is my personal view, based on what I've seen and my experience: if you position inclusion as a CSR initiative, you are also going to get budgets which match CSR initiatives, which don't necessarily translate to good products or make good economic sense. So that never works. Don't do it. That's first. The second is that when you are positioning these kinds of conversations, remember that in the corporate world, or any business for that matter, it's always a trade-off: how much you are willing to spend versus the returns you are getting. Now, if you want to make something more and more inclusive, especially in an AI context, you can do that, if you have more and more diverse data sets feeding into it.

Do those exist today, at a cost which is palatable to all enterprises? The answer is no. So the first thing enterprises say is, great, I want to be inclusive; nobody wants to say no. But they may not have those data sets, and the cost of getting or cleaning those data sets to make the system more inclusive can be much higher. Typically in AI we say that for every $1 spent on AI, you have to spend $3 on data. So if that's the kind of economics you are dealing with, there is definitely going to be a point where the company says inclusion is going to come later, because economically it stops being viable for them.

Now, look at the inflection point AI is at today: it's hyped up, and it's yet to show tangible outcomes across all sectors. Yes, it's shown great promise and results in some, but has it universally shown those? No, we are yet to see that. So when you are dealing with clients in areas where they are yet to see those outcomes, you will see inclusion taking a back seat, not because of intent, but because of cost in certain cases. Everybody realizes that inclusion is plain good business, but those trade-offs are what they look at. Increasingly, though, data is being made more accessible and governments are taking the initiative. In fact, in India we have AI Kosh, which the Government of India has put up, where you get diverse data sets of India.

And you can use those data sets to make your AI systems more inclusive, more tuned to local customs and traditions, and you will see the cost of implementation come down. So economies of scale kick in. The moment that happens, you will automatically see corporates and companies adopting this, because while there always was intent, now that intent is also becoming financially viable for them. So I would say it's a combination of these different facets which play together when certain decisions get made.

Rutuja Pol

That’s helpful to know that CSR is not the go -to route to see, but a bunch of things that determine the decision -making. I think in the interest of time, I’m going to move to the last segment of our panel discussion and my favorite, which is design. So I think I’m going to first come to Agastya again. Tell us about how can AI devices really drive accessibility first innovation? And I remember reading this at the Metastore, as well, earlier in the… weeks. So just help us understand the company thoughts behind it and how have you gone about executing it across different devices, including the glasses?

Agustya Mehta

Sure, thank you. Accessible design is good design. Universal design is good design. Opening with that mindset, that if you build things in an inclusive way, you make the product better for everyone, people with and without disabilities, is the critical factor. The second thing, tied into that, is the notion of "nothing about us without us." On this panel, we discussed that a model is only as good as its data set. The same is true for a development team, for an organization. So I think it's critical to hire people from all sorts of different backgrounds and not be stuck in your own bubble, because you're building products for people with all sorts of backgrounds.

It's not just good karma. It's not just charity. It's good business. So those are the two philosophies I'd push on. Hammer home that innovation is actually seeded by accessibility. So many innovations started from accessibility efforts: the flatbed scanner, text-to-speech synthesis, OCR. These began as efforts to read books to blind people, not as industry-wide things. And yet here we are. So work with your leadership teams to call those examples out, show concrete examples of how things get better, and ensure that you are building with everyone.

Rutuja Pol

That's incredible. Thanks. In the interest of time, I'm going to quickly come to all three of our panelists, each with your own case study. Olivier, give us one case study from your country where inclusion has been visible in the design from the very beginning, and where that has helped in many ways. Just one example. And the same for you, Archana, perhaps from the jury you served on for AI by Her. So, Olivier first, then Arghya, then Archana.

Olivier

All right, a quick one. We don't have that many AI-powered solutions out there yet, but as an example, we are working on an AI-powered advisory solution for agriculture. Right from the beginning, we need to think about the end user before we even think about the technology, because what AI does is actually make the tech easy: a chatbot, a robot, even a code bot can write the code. But the end user, in this case, is a smallholder farmer who does not use software, doesn't use a smartphone but a feature phone, and may be in a place where the connectivity is shaky,

but who only speaks Kinyarwanda. Coming from that angle, inclusivity is right there in the design stage: if we can deliver for this user, then the technology can work. That's one example I can give, and a couple of months from now I should have more success stories to tell, because we are now beginning to scale those solutions.

Rutuja Pol

That's good. I look forward to a couple more months and then some more case studies from your country. Arghya, do you want to go next?

Arghya Bhattacharya

Yeah, I'll talk about two things. Number one is design of the product, and the second, and I want to contrast this, is design of the intervention itself, the entire solution, with respect to the problem you're trying to solve. At Adalat AI, with respect to design of the product, there's one thing we've done from the start that has helped us: we force our engineers, designers, everyone, to go to court, sit with judges, show them the designs, and get in-person approval from them before any piece of code is written, before they come back and touch their laptops, right? And that's one thing that has helped us tremendously in making sure the design is extremely inclusive.

The second is, when it comes to the design of the intervention itself, it's not enough to build technology. We build transcription solutions, but if the judge doesn't understand that they need to turn on the mic at the podium when they're dictating, then the mic just becomes a very expensive paperweight, right? It's of no use. So we do extensive trainings. In fact, we have something called the Adalat AI Academy. As part of that, we go to courts and teach them how to use the technology, and we had a very interesting insight. We were trying to teach them AI, but what we learned was that a lot of judges don't know how to update their Chrome browser.

And so that helped us understand what exactly is needed to drive the intervention forward and make sure the impact is actually realized on the ground. And the Adalat AI Academy has now become part of the official curriculum for becoming a judge in a lot of states. So that's helped a lot in terms of design.

Rutuja Pol

That's great. Moving into the curriculum always helps; you're planting the seeds for the training early on. Archana, the last word.

Archana Joshi

I'll be real quick. As part of the jury for AI by Her, I came across several startups which were, of course, led by women and conceptualized and supported with AI. One of the startups which stuck with me is in fashion tech. The interesting piece was that the startup helps designers show and envision how the finished product could look. And what it does is not just show it so that you can reduce the time it would take to develop certain samples and then discard them, so it's not just sustainable fashion and sustainable designing; it also shows the design in different shapes and sizes. That makes it even better and more inclusive.

So these are the kinds of things I found in the solutions at AI by Her, which make you think that, yes, these are truly sustainable and inclusive by design.

Rutuja Pol

That's wonderful. All right, do we have time for questions? No? All right, cool. Sorry about that, audience. But you can probably catch all of the panelists once we're done with the last segment. Thank you so much for a very insightful panel. For everyone who stayed back, at least the last hour has been informative, and we were each left with something from all of you. So thank you so much. Please do catch the panelists. Thank you, everyone, for staying here. I know it's been a long week. This is the last session at the AI Impact Summit, so just thank you all for being here.

And a big shout-out to Meta, who has partnered with us for this project. Thank you for your continued support, and we look forward to further engagement. Thank you all. We do have some mementos from the India AI Summit for all the participants. So Rutuja, if you would please give them out. Yes, there is. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Nirmal Bhansali
5 arguments · 178 words per minute · 1041 words · 350 seconds
Argument 1
Access is multi-layered requiring connectivity, skills, and community-focused interfaces – Multi-layered Access Problem
EXPLANATION
Bhansali argues that good technology alone does not automatically include people, and adding AI doesn’t solve inclusion issues. He emphasizes that addressing access requires focusing on multiple layers including connectivity infrastructure, skills development, user-friendly interfaces, and understanding the diverse needs of different communities.
EVIDENCE
He notes that 33% of the world (2.6 billion people) still don’t have access to the internet, highlighting the need to consider real-world contexts like low bandwidth environments and limited smartphone access.
MAJOR DISCUSSION POINT
Multi-layered nature of digital access challenges
Argument 2
Purple economy represents $150 billion market opportunity for assistive tech in India – Purple Economy Opportunity
EXPLANATION
Bhansali highlights the significant economic potential of the assistive technology market for people with disabilities and special needs. He argues this population is often perceived as marginal but represents a substantial market opportunity that should be viewed as a business proposition rather than charity.
EVIDENCE
India has one of the largest populations of people with disabilities globally, representing a potential $150 billion market of people who can purchase and access these products.
MAJOR DISCUSSION POINT
Economic opportunity in assistive technology market
Argument 3
Many AI products remain stuck in pilot stage due to scaling challenges – Pilot Stage Problem
EXPLANATION
Bhansali identifies a common problem where AI products with great ideas fail to move beyond the pilot stage to full implementation. He attributes this to systemic issues rather than the technology itself.
EVIDENCE
He cites reasons including last-mile diffusion problems, funding constraints, and limited support systems for scaling up solutions.
MAJOR DISCUSSION POINT
Challenges in scaling AI solutions from pilots to implementation
Argument 4
Language localization is foundational for enabling AI inclusion – Language Foundation
EXPLANATION
Bhansali emphasizes that language support is fundamental for AI systems to be truly inclusive. He argues that whether it’s banking systems using voice AI or educational AI tutors, all require understanding of local linguistic contexts to be effective.
EVIDENCE
He references examples seen across the summit including banking systems using voice AI for credit facilities and educational AI tutors for rural villages in India, all requiring local language support.
MAJOR DISCUSSION POINT
Importance of language localization in AI systems
Argument 5
Three interconnected pillars needed: design, access, and investment – Three Pillars Framework
EXPLANATION
Bhansali presents a comprehensive framework for AI inclusion based on three interconnected elements. He argues that successful AI inclusion requires embedding inclusion from the start (design), ensuring usability in real-world conditions (access), and aligning procurement and incentives (investment).
EVIDENCE
He provides specific recommendations for each pillar: participatory design involving end users, consideration of low bandwidth environments, and government procurement standards that reward accessibility.
MAJOR DISCUSSION POINT
Comprehensive framework for inclusive AI development
Arghya Bhattacharya
8 arguments · 176 words per minute · 1634 words · 554 seconds
Argument 1
Justice has become a logistics problem rather than a law problem due to inefficient court systems – Justice as Logistics
EXPLANATION
Bhattacharya argues that the fundamental challenge in India’s justice system is not legal complexity but operational inefficiency. He describes courts overwhelmed by paper-based processes where staff spend more time searching for files than reviewing their contents.
EVIDENCE
He describes district courts with ‘towers and towers of paper everywhere,’ people using typewriters instead of computers, and staff spending more time looking for files than understanding their contents.
MAJOR DISCUSSION POINT
Operational inefficiencies in justice systems
Argument 2
AI enables direct access through information tools like WhatsApp chatbots for case status – Direct Access Through Information
EXPLANATION
Bhattacharya explains how AI can address the ‘information darkness problem’ by providing citizens direct access to case information without intermediaries. Their WhatsApp chatbot allows people to check case status in any language using just their name and PIN code.
EVIDENCE
Adalat AI built a multilingual WhatsApp chatbot that tells citizens if they have cases, case status, hearing dates, and previous orders, while explicitly avoiding legal advice.
MAJOR DISCUSSION POINT
Using AI to improve access to judicial information
Argument 3
Courts using AI transcription tools can improve productivity 2-3X in witness depositions – Productivity Improvement
EXPLANATION
Bhattacharya demonstrates how AI transcription technology can significantly increase court efficiency. Since 90% of Indian courts lack stenographers, their multilingual legal transcription tool addresses a critical bottleneck in judicial processes.
EVIDENCE
Courts using their transcription technology improved from recording 2 witness depositions per day to 4-6 depositions. The tool understands legal jargon, Indian accents and dialects, and works in multiple languages.
MAJOR DISCUSSION POINT
AI-driven productivity improvements in judicial processes
Argument 4
Legal intelligence applications like summarization require caution due to different user needs – Caution on Legal Intelligence
EXPLANATION
Bhattacharya advocates for careful consideration when applying AI to complex legal tasks like document summarization. He argues that different stakeholders (citizens, judges, lawyers) require different types of summaries, making automated summarization potentially problematic.
EVIDENCE
He specifically mentions avoiding summarization because ‘the summary for a citizen looks very different from the summary that you need for a judge versus a summary for a lawyer.’
MAJOR DISCUSSION POINT
Need for caution in applying AI to complex legal tasks
DISAGREED WITH
Archana Joshi
Argument 5
Non-profit model helps align incentives and build trust with courts for technology adoption – Non-profit Pathway
EXPLANATION
Bhattacharya argues that operating as a non-profit provides significant advantages in the justice sector by addressing concerns about data privacy, cost, and technology evaluation. This model helped them achieve rapid adoption across Indian courts.
EVIDENCE
Within two years, Adalat AI operates in nine Indian states and one out of every five courts in the country. Kerala made their technology mandatory for all witness depositions, prohibiting manual recording.
MAJOR DISCUSSION POINT
Non-profit model as pathway for technology adoption in justice systems
Argument 6
Engineers and designers must interact directly with end users before writing code – Direct User Interaction
EXPLANATION
Bhattacharya emphasizes the importance of direct user engagement in the design process. At Adalat AI, they require all team members to visit courts, sit with judges, and get in-person approval of designs before any development begins.
EVIDENCE
They force engineers and designers to go to court, sit with judges, show them designs, and get in-person approval before any piece of code is written or before they touch their laptops.
MAJOR DISCUSSION POINT
Importance of direct user engagement in design process
Argument 7
Technology training must address basic digital literacy gaps before advanced AI features – Basic Literacy First
EXPLANATION
Bhattacharya discovered that successful AI implementation requires addressing fundamental digital literacy gaps. Their experience revealed that many judges lack basic computer skills, which must be addressed before introducing AI capabilities.
EVIDENCE
They found that many judges don’t know how to update their Chrome browser, leading them to restructure their training approach through the Adalat AI Academy.
MAJOR DISCUSSION POINT
Need to address basic digital literacy before AI implementation
DISAGREED WITH
Agustya Mehta
Argument 8
Formal training programs integrated into professional curricula ensure sustainable adoption – Curriculum Integration
EXPLANATION
Bhattacharya explains how integrating AI training into official judicial education ensures long-term sustainability and widespread adoption. Their academy has become part of the official curriculum for becoming a judge in multiple states.
EVIDENCE
The Adalat AI Academy has become part of the official curriculum of becoming a judge in a lot of states, ensuring systematic training for new judicial officers.
MAJOR DISCUSSION POINT
Integration of AI training into professional education systems
Speaker 1
5 arguments · 132 words per minute · 1438 words · 651 seconds
Argument 1
Rwanda focuses on scaling rather than just piloting AI solutions for national development – Scaling Focus Strategy
EXPLANATION
Speaker 1 explains Rwanda’s strategic approach of emphasizing scalable AI implementations rather than remaining in pilot phases. The AI Scaling Hub is designed to drive AI implementation aligned with national socioeconomic development priorities through two key pillars: accelerating adoption and building supporting ecosystems.
EVIDENCE
The hub scans globally for successful AI solutions to adapt and implement in Rwanda, while building an ecosystem of innovators and institutions to sustain the AI movement and make Rwanda an African hub for AI research and innovation.
MAJOR DISCUSSION POINT
National strategy focused on scaling AI solutions rather than piloting
Argument 2
Building AI capabilities while simultaneously developing digital infrastructure requires agile approach – Building While Flying
EXPLANATION
Speaker 1 describes Rwanda’s unique challenge of implementing AI while still completing their digital public infrastructure (DPI). Unlike India which had mature DPI when AI revolution started, Rwanda must develop both simultaneously, requiring a flexible ‘building the plane as we fly it’ approach.
EVIDENCE
Rwanda’s DPI stack is about 80% complete but not 100%, requiring them to plug into systems as they develop, while India had the advantage of mature DPI when implementing AI solutions.
MAJOR DISCUSSION POINT
Simultaneous development of AI capabilities and digital infrastructure
Argument 3
Traditional procurement processes are too slow for rapidly evolving technology sectors – Procurement Speed Problem
EXPLANATION
Speaker 1 argues that conventional procurement processes, designed for accountability and avoiding conflicts of interest, are inadequate for technology sectors where innovation cycles are much faster. He illustrates how a three-year procurement process for phones results in obsolete technology by completion.
EVIDENCE
He notes that ICT space sees major changes every three years (from DPI/DPG discussions to AI/LLM focus), and traditional procurement taking three years to buy 10 phones results in outdated products.
MAJOR DISCUSSION POINT
Mismatch between procurement timelines and technology innovation cycles
Argument 4
Innovation-friendly procurement allows competitive selection and agile development cycles – Innovation Procurement Solution
EXPLANATION
Speaker 1 describes Rwanda’s solution of implementing public procurement for innovation, which brings together potential solution providers to compete and develop in agile, iterative cycles. This approach adapts to changes rather than following rigid long-term processes.
EVIDENCE
The new approach involves competitive selection of key players who can deliver needed solutions, followed by agile development with small iterative steps that can adapt to market changes.
MAJOR DISCUSSION POINT
Innovative procurement approaches for technology solutions
Argument 5
Successful AI solutions require understanding end users’ real constraints and environments – End User Constraints
EXPLANATION
Speaker 1 emphasizes the importance of designing AI solutions with deep understanding of end user limitations. Using the example of an AI agricultural advisory system, he highlights how technical capabilities must align with user realities like limited connectivity and language barriers.
EVIDENCE
Their AI-powered agricultural advisory targets smallholder farmers who don’t use smartphones, have shaky connectivity, and only speak Kinyarwanda, requiring design decisions that prioritize accessibility over technical sophistication.
MAJOR DISCUSSION POINT
Importance of understanding real-world user constraints in AI design
Archana Joshi
5 arguments · 148 words per minute · 1765 words · 712 seconds
Argument 1
AI systems must work in real-world conditions including offline capabilities for crisis situations – Real-world Conditions Requirement
EXPLANATION
Joshi argues that AI systems must be designed to function in challenging real-world environments, particularly during crises when connectivity is unreliable. She uses the example of humanitarian aid distribution during refugee crises where internet connections are often down but AI systems still need to operate.
EVIDENCE
She describes working with a humanitarian agency on an AI solution for refugee crisis aid distribution that must process real-time information and function offline when internet connections are down during emergencies.
MAJOR DISCUSSION POINT
Need for AI systems to function in challenging real-world conditions
DISAGREED WITH
Arghya Bhattacharya
Argument 2
Accessibility features can be added to existing products using AI technology – Accessibility Enhancement
EXPLANATION
Joshi demonstrates how AI can retrofit existing products to make them more accessible. She describes a project with a global bank to make financial literacy videos accessible to hearing-impaired users by using AI to convert English captions to sign language.
EVIDENCE
Working with a global bank, they used vision LLMs and AI processes to create sign language versions of financial literacy videos, making content accessible to hearing-impaired users whose first language is sign language, not English.
MAJOR DISCUSSION POINT
Using AI to enhance accessibility of existing products
Argument 3
Corporate resistance to multilingual AI often stems from ROI pressure rather than technical limitations – ROI vs Inclusion Tension
EXPLANATION
Joshi describes the tension between business pressure to demonstrate quick AI returns and the need for inclusive design. She recounts a heated discussion with an insurance company CTO who wanted to launch in English first despite most customers speaking Hindi, prioritizing ROI demonstration over user needs.
EVIDENCE
An insurance company CTO insisted on English-first AI deployment despite 70% of customers speaking Hindi, arguing the need to show quick ROI from AI, requiring extensive convincing to include Hindi from the start.
MAJOR DISCUSSION POINT
Tension between business ROI pressure and inclusive AI design
Argument 4
Positioning inclusion as CSR initiative leads to inadequate budgets and poor outcomes – CSR Positioning Problem
EXPLANATION
Joshi strongly advises against framing AI inclusion as a corporate social responsibility initiative, arguing this approach results in insufficient funding that doesn’t support quality product development or make economic sense for sustainable solutions.
EVIDENCE
She states directly: ‘If you position inclusion as a CSR initiative, you are also going to get budgets which match the CSR initiatives, which don’t necessarily translate to good products or make good economic sense. So that never works. Don’t do it.’
MAJOR DISCUSSION POINT
Problems with CSR framing for AI inclusion initiatives
DISAGREED WITH
Agustya Mehta
Argument 5
Cost of diverse datasets is decreasing through government initiatives like AI Kosh – Dataset Cost Reduction
EXPLANATION
Joshi explains how the high cost of diverse datasets has been a barrier to inclusive AI, but government initiatives are making these resources more accessible. She notes the economic principle that AI requires $3 in data investment for every $1 spent on AI, but this is improving.
EVIDENCE
She mentions India’s AI Kosh initiative by the government, which provides diverse datasets that companies can use to make their AI systems more inclusive and tuned to local traditions, reducing implementation costs.
MAJOR DISCUSSION POINT
Government initiatives reducing barriers to inclusive AI through dataset accessibility
Agustya Mehta
4 arguments · 180 words per minute · 642 words · 213 seconds
Argument 1
Accessible design principles benefit all users, not just people with disabilities – Universal Design Benefits
EXPLANATION
Mehta argues that designing for accessibility creates better products for everyone, not just people with disabilities. He uses the analogy of curb cuts, which were designed for wheelchair users but benefit anyone using wheeled devices like strollers or luggage.
EVIDENCE
He cites curb cuts in the US, which were mandated for wheelchair accessibility but benefit anyone using prams, strollers, shopping carts, or luggage, demonstrating how accessibility improvements make cities better for everyone.
MAJOR DISCUSSION POINT
Universal benefits of accessible design principles
DISAGREED WITH
Archana Joshi
Argument 2
Product development requires diverse teams and direct user involvement from target communities – Diverse Team Necessity
EXPLANATION
Mehta emphasizes the principle of ‘nothing about us without us,’ arguing that development teams must include people from diverse backgrounds and directly involve target user communities. He argues this isn’t just ethical but essential for good business outcomes.
EVIDENCE
He states that ‘a model is only as good as its data set. The same is true for a development team, for an organization,’ emphasizing the need to hire people from diverse backgrounds and avoid building in bubbles.
MAJOR DISCUSSION POINT
Importance of diverse teams and user involvement in product development
Argument 3
Many mainstream innovations originated from accessibility efforts for disabled users – Accessibility Innovation Origins
EXPLANATION
Mehta argues that accessibility drives innovation by highlighting examples of mainstream technologies that originated from efforts to serve people with disabilities. This demonstrates that accessibility considerations often lead to breakthrough innovations with broad applications.
EVIDENCE
He lists specific examples: flatbed scanners, text-to-speech synthesis, and OCR technology all started as efforts to read books for blind people before becoming industry-wide technologies.
MAJOR DISCUSSION POINT
Historical role of accessibility in driving mainstream innovation
Argument 4
Investment priorities and market feedback can shift product focus unexpectedly during development – Flexible Investment Approach
EXPLANATION
Mehta explains how Meta’s Ray-Ban glasses evolved differently than planned, with AI becoming central despite not being in original product plans. He emphasizes the importance of being nimble and responsive to market feedback rather than rigidly following initial investment priorities.
EVIDENCE
Ray-Ban Stories was initially focused on photos with audio for calls, but users preferred music. Ray-Ban Meta improved audio quality, but AI features became the main selling point despite not being in the original plan.
MAJOR DISCUSSION POINT
Need for flexibility in product development and investment priorities
Moderator
1 argument · 103 words per minute · 110 words · 63 seconds
Argument 1
AI Impact Summit serves as a platform for launching inclusive AI research and facilitating knowledge sharing
EXPLANATION
The moderator emphasizes the summit’s role in launching important research reports on AI inclusion and bringing together practitioners to share insights. The summit creates opportunities for both formal presentations and informal networking to advance inclusive AI development.
EVIDENCE
The moderator facilitates the launch of Nirmal’s report on AI inclusion and organizes panel discussions with practitioners who build AI products, while encouraging audience interaction with panelists.
MAJOR DISCUSSION POINT
Role of conferences and summits in advancing inclusive AI
Rutuja Pol
8 arguments · 181 words per minute · 1411 words · 465 seconds
Argument 1
Three interconnected pillars of design, access, and investment must work together to ensure AI inclusion becomes common practice
EXPLANATION
Rutuja Pol frames the panel discussion around the key finding that AI inclusion requires coordinated attention to how products are designed, how they enable access, and how investment decisions shape development priorities. She argues these elements must be used interchangeably and together rather than in isolation.
EVIDENCE
She structures the entire panel discussion around these three pillars, asking each panelist to address how their work relates to design, access, and investment considerations.
MAJOR DISCUSSION POINT
Integrated approach to AI inclusion through design, access, and investment
Argument 2
Access to justice in India requires addressing both technological and systemic barriers in the judicial system
EXPLANATION
Rutuja Pol highlights the complexity of enabling access to justice in a country as large as India, recognizing that technology solutions must address the broader systemic issues within the justice system. She frames this as both a technological and institutional challenge.
EVIDENCE
She asks Arghya to explain how his product enables access to justice ‘in a country as big as India and all of the issues that it has in the justice system.’
MAJOR DISCUSSION POINT
Complexity of enabling access to justice through technology
Argument 3
Low-resource languages present significant challenges for AI implementation that require dedicated attention and resources
EXPLANATION
Rutuja Pol recognizes that implementing AI tools in low-resource languages like Kinyarwanda presents unique difficulties that go beyond simple translation. She frames this as a critical challenge for inclusive AI development in diverse linguistic contexts.
EVIDENCE
She specifically asks Olivier about the challenges of using AI tools based on Kinyarwanda, noting it as ‘a very low resource language’ and asking about the difficulties and learnings from this experience.
MAJOR DISCUSSION POINT
Challenges of AI implementation in low-resource languages
Argument 4
B2B AI transformation requires convincing business leaders that inclusion should be embedded from the start rather than added later
EXPLANATION
Rutuja Pol identifies a key challenge in enterprise AI adoption: the need to convince business clients that inclusion and access considerations must be integrated into the initial transformation strategy rather than treated as afterthoughts. She recognizes this as a fundamental shift in how businesses approach AI implementation.
EVIDENCE
She asks Archana how to ‘convince your clients that inclusion and even access needs to be really embedded in the first thought of your transformation journey.’
MAJOR DISCUSSION POINT
Integrating inclusion into enterprise AI transformation strategies
Argument 5
Investment priorities significantly shape AI product design decisions and outcomes
EXPLANATION
Rutuja Pol recognizes that investment decisions and priorities have a direct impact on how AI products are designed and what features are prioritized. She seeks to understand the relationship between financial considerations and inclusive design choices in product development.
EVIDENCE
She asks Agastya to explain ‘how do investment priorities shape the design journey’ for Meta’s Ray-Ban Glasses, recognizing this as a significant investment in AI-powered hardware.
MAJOR DISCUSSION POINT
Relationship between investment priorities and AI product design
Argument 6
CSR positioning for AI inclusion initiatives leads to inadequate funding and poor business outcomes
EXPLANATION
Rutuja Pol explores how businesses initially positioned inclusion as a corporate social responsibility initiative versus a core business strategy. She investigates whether this framing has shifted and how it affects the success of inclusive AI projects.
EVIDENCE
She asks Archana whether she ‘started with positioning inclusion as a CSR initiative or just a good-to-have thing’ and how executive reactions have changed over time.
MAJOR DISCUSSION POINT
Evolution of business positioning for AI inclusion initiatives
Argument 7
Accessibility-first innovation in AI devices requires systematic design principles and team instructions
EXPLANATION
Rutuja Pol recognizes that creating truly accessible AI devices requires deliberate design methodologies and clear guidance for development teams. She seeks to understand the systematic approaches that can drive accessibility-first innovation rather than retrofitted accessibility.
EVIDENCE
She asks Agastya about ‘how can AI devices really drive accessibility first innovation’ and what instructions are given to design teams to ensure inclusive device development.
MAJOR DISCUSSION POINT
Systematic approaches to accessibility-first AI device design
Argument 8
Successful inclusive AI requires visible design considerations from initial development stages
EXPLANATION
Rutuja Pol emphasizes that truly inclusive AI solutions demonstrate their commitment to inclusion through visible design choices made from the very beginning of the development process. She seeks concrete examples of how this principle translates into practice across different contexts.
EVIDENCE
She asks panelists for case studies where ‘from the very initial you’ve looked at, and inclusion has just been visible, and that’s helped in many ways.’
MAJOR DISCUSSION POINT
Importance of visible inclusion in initial AI design stages
Agreements
Agreement Points
User-centered design requires direct engagement with end users from the beginning
Speakers: Arghya Bhattacharya, Agustya Mehta, Speaker 1
Engineers and designers must interact directly with end users before writing code
Product development requires diverse teams and direct user involvement from target communities
Successful AI solutions require understanding end users’ real constraints and environments
All three speakers emphasize that successful AI product development requires direct, early engagement with actual end users rather than assumptions about their needs. They advocate for participatory design processes that involve target communities from the start.
AI systems must be designed for real-world constraints and low-resource environments
Speakers: Nirmal Bhansali, Archana Joshi, Speaker 1
Three interconnected pillars needed: design, access, and investment
AI systems must work in real-world conditions including offline capabilities for crisis situations
Successful AI solutions require understanding end users’ real constraints and environments
These speakers agree that AI solutions must account for real-world limitations including poor connectivity, low bandwidth, limited device capabilities, and offline scenarios. They emphasize designing for actual deployment conditions rather than ideal laboratory settings.
Language localization is fundamental for inclusive AI deployment
Speakers: Nirmal Bhansali, Arghya Bhattacharya, Speaker 1
Language localization is foundational for enabling AI inclusion
AI enables direct access through information tools like WhatsApp chatbots for case status
Successful AI solutions require understanding end users’ real constraints and environments
All three speakers recognize that AI systems must support local languages to be truly accessible and useful. They highlight the importance of multilingual capabilities and understanding linguistic contexts for successful AI adoption.
Accessibility benefits all users, not just target populations
Speakers: Nirmal Bhansali, Agustya Mehta
Purple economy represents $150 billion market opportunity for assistive tech in India
Accessible design principles benefit all users, not just people with disabilities
Both speakers argue that designing for accessibility and inclusion creates better products for everyone, not just marginalized groups. They frame accessibility as good business practice that improves user experience universally.
Traditional procurement processes are inadequate for rapidly evolving AI technology
Speakers: Speaker 1, Arghya Bhattacharya
Traditional procurement processes are too slow for rapidly evolving technology sectors
Non-profit model helps align incentives and build trust with courts for technology adoption
Both speakers identify procurement as a major barrier to AI adoption in public sector contexts. They advocate for alternative approaches that can move faster and build trust with institutions while maintaining accountability.
Similar Viewpoints
Both speakers frame AI inclusion as requiring coordinated attention to design, access, and investment rather than treating these as separate concerns. They emphasize the interconnected nature of these elements.
Speakers: Nirmal Bhansali, Rutuja Pol
Three interconnected pillars needed: design, access, and investment
Three interconnected pillars of design, access, and investment must work together to ensure AI inclusion becomes common practice
Both speakers strongly argue against framing AI inclusion as corporate social responsibility, emphasizing that this approach results in insufficient resources and poor outcomes. They advocate for business-case driven inclusion strategies.
Speakers: Archana Joshi, Rutuja Pol
Positioning inclusion as CSR initiative leads to inadequate budgets and poor outcomes
CSR positioning for AI inclusion initiatives leads to inadequate funding and poor business outcomes
Both speakers emphasize the importance of being cautious and adaptive in AI implementation, recognizing that initial plans may not match final outcomes and that complex applications require careful consideration of different user needs.
Speakers: Arghya Bhattacharya, Agustya Mehta
Caution on Legal Intelligence
Flexible Investment Approach
Unexpected Consensus
Non-profit models as effective pathways for AI adoption in public sector
Speakers: Arghya Bhattacharya
Non-profit model helps align incentives and build trust with courts for technology adoption
While other speakers focus on business models and ROI, Arghya’s success with a non-profit approach in the justice sector demonstrates an unexpected pathway that addresses trust and incentive alignment issues that traditional commercial models struggle with in public sector contexts.
Building infrastructure while implementing AI simultaneously
Speakers: Speaker 1
Building AI capabilities while simultaneously developing digital infrastructure requires agile approach
Speaker 1’s ‘building the plane as we fly it’ approach represents an unexpected consensus around the feasibility of simultaneous infrastructure development and AI implementation, challenging the assumption that mature digital infrastructure must precede AI deployment.
Basic digital literacy as prerequisite for AI adoption
Speakers: Arghya Bhattacharya
Technology training must address basic digital literacy gaps before advanced AI features
The discovery that judges couldn’t update Chrome browsers before learning AI tools represents unexpected consensus around the need to address fundamental digital literacy gaps, which wasn’t initially anticipated as a major barrier to AI adoption in professional contexts.
Overall Assessment

The speakers demonstrate strong consensus around user-centered design principles, the need for AI systems to work in real-world constraints, the importance of language localization, and the inadequacy of traditional procurement processes for AI technology. There is also agreement that accessibility benefits all users and that inclusion should be treated as good business practice rather than charity.

High level of consensus on fundamental principles of inclusive AI development, with speakers from different sectors (justice, technology, development, consulting) arriving at similar conclusions about design requirements, implementation challenges, and business approaches. This suggests these principles are robust across different application domains and organizational contexts.

Differences
Different Viewpoints
Approach to addressing basic digital literacy versus advanced AI implementation
Speakers: Arghya Bhattacharya, Agustya Mehta
Technology training must address basic digital literacy gaps before advanced AI features – Basic Literacy First
Accessible design principles benefit all users, not just people with disabilities – Universal Design Benefits
Arghya emphasizes the need to address fundamental digital literacy gaps (like updating Chrome browsers) before introducing AI, while Agustya advocates for universal design principles that make products accessible from the start rather than addressing gaps sequentially
CSR versus business case framing for AI inclusion
Speakers: Archana Joshi, Agustya Mehta
Positioning inclusion as CSR initiative leads to inadequate budgets and poor outcomes – CSR Positioning Problem
Accessible design principles benefit all users, not just people with disabilities – Universal Design Benefits
Archana strongly argues against CSR positioning due to budget constraints, while Agustya frames accessibility as inherently good business and innovation driver without explicitly rejecting CSR approaches
Caution level for AI applications in sensitive domains
Speakers: Arghya Bhattacharya, Archana Joshi
Legal intelligence applications like summarization require caution due to different user needs – Caution on Legal Intelligence
AI systems must work in real-world conditions including offline capabilities for crisis situations – Real-world Conditions Requirement
Arghya advocates extreme caution and avoiding AI for complex legal tasks like summarization, while Archana pushes for robust AI implementation in crisis situations where reliability is critical
Unexpected Differences
Role of non-profit model in technology adoption
Speakers: Arghya Bhattacharya, Archana Joshi
Non-profit model helps align incentives and build trust with courts for technology adoption – Non-profit Pathway
Positioning inclusion as CSR initiative leads to inadequate budgets and poor outcomes – CSR Positioning Problem
Unexpected because both work on inclusive technology, yet Arghya advocates for non-profit models as trust-building mechanisms while Archana warns against charitable framing. This reveals different perspectives on how to position inclusive technology initiatives for sustainability.
Overall Assessment

The discussion revealed surprisingly few fundamental disagreements among speakers, with most tensions arising around implementation approaches rather than core principles. Main areas of disagreement centered on: sequencing of digital literacy versus advanced AI features, business versus charitable framing of inclusion initiatives, and appropriate caution levels for AI in sensitive domains

Low to moderate disagreement level with high consensus on goals but different tactical approaches. This suggests a maturing field where practitioners agree on inclusive AI principles but are still developing best practices for implementation. The disagreements are constructive and reflect different contextual experiences rather than fundamental philosophical divides, which is positive for advancing inclusive AI development

Partial Agreements
All agree that moving from pilots to scaled implementation is crucial, but disagree on primary barriers – Nirmal focuses on systemic issues like funding and last-mile diffusion, Speaker 1 emphasizes infrastructure readiness and agile procurement, while Archana highlights corporate ROI pressures
Speakers: Nirmal Bhansali, Speaker 1, Archana Joshi
Many AI products remain stuck in pilot stage due to scaling challenges – Pilot Stage Problem
Rwanda focuses on scaling rather than just piloting AI solutions for national development – Scaling Focus Strategy
Corporate resistance to multilingual AI often stems from ROI pressure rather than technical limitations – ROI vs Inclusion Tension
All agree on the importance of user-centered design, but differ in implementation approaches – Nirmal advocates for systematic participatory design frameworks, Arghya requires mandatory court visits before coding, while Agustya emphasizes diverse hiring and team composition
Speakers: Nirmal Bhansali, Arghya Bhattacharya, Agustya Mehta
Three interconnected pillars needed: design, access, and investment – Three Pillars Framework
Engineers and designers must interact directly with end users before writing code – Direct User Interaction
Product development requires diverse teams and direct user involvement from target communities – Diverse Team Necessity
Both recognize government intervention is needed to enable AI adoption, but focus on different mechanisms – Speaker 1 emphasizes procurement process reform for speed and agility, while Archana highlights data accessibility initiatives to reduce costs
Speakers: Speaker 1, Archana Joshi
Traditional procurement processes are too slow for rapidly evolving technology sectors – Procurement Speed Problem
Cost of diverse datasets is decreasing through government initiatives like AI Kosh – Dataset Cost Reduction
Takeaways
Key takeaways
AI inclusion requires a three-pillar framework: design (embedding inclusion from the start with participatory approaches), access (ensuring usability in real-world conditions including offline capabilities), and investment (aligning procurement, capital, and incentives)
The purple economy represents a $150 billion market opportunity in India for assistive technology, demonstrating that inclusion is a business proposition rather than charity
Language localization is foundational for AI inclusion – systems must operate in local languages and contexts to be truly accessible
Many AI products fail to scale beyond pilot stage due to surrounding system challenges like last-mile diffusion, funding limitations, and inadequate support infrastructure
Accessible design principles benefit all users universally, not just people with disabilities, and many mainstream innovations originated from accessibility efforts
Corporate adoption of inclusive AI is shifting from CSR positioning to recognizing it as smart business practice, especially as diverse dataset costs decrease through government initiatives
Real-world deployment requires understanding end-user constraints including connectivity issues, device limitations, and basic digital literacy gaps
Non-profit models can effectively bridge the gap between innovative AI solutions and institutional adoption by aligning incentives and building trust
Resolutions and action items
The AI inclusion report will be published online soon with documented use cases and recommendations
Continued development of innovation-friendly procurement policies that allow agile development cycles for rapidly evolving technology
Integration of AI training programs into official professional curricula (as demonstrated with Adalat AI Academy becoming part of judicial training)
Emphasis on direct user interaction requirements – engineers and designers must engage with end users before code development
Focus on building ‘painkillers before vitamins’ – addressing urgent user pain points rather than nice-to-have features
Unresolved issues
How to effectively scale AI solutions beyond pilot stage across different sectors and geographies
Balancing the tension between demonstrating quick ROI to stakeholders and implementing truly inclusive design from the start
Addressing the 33% of the global population (2.6 billion people) who still lack internet access when designing AI tools
Developing comprehensive frameworks for legal intelligence applications in AI while maintaining safety and accuracy
Creating sustainable funding mechanisms for inclusive AI development that don’t rely on traditional CSR budget limitations
Establishing standardized approaches for building AI capabilities while simultaneously developing digital infrastructure in emerging markets
Suggested compromises
Using non-profit pathways as an intermediate step to build institutional trust and experience before transitioning to commercial procurement
Implementing phased approaches that address basic digital literacy before introducing advanced AI features
Leveraging government initiatives like AI Kosh to reduce dataset costs while building inclusive AI systems
Adopting ‘building the plane while flying’ approaches that allow simultaneous development of infrastructure and AI capabilities
Positioning inclusion as an economic opportunity rather than CSR to secure adequate budgets while maintaining social impact goals
Thought Provoking Comments
Good technology by itself does not bring in or include people. By adding AI, you’re automatically not going to include more. The last mile gap is still a problem.
This comment challenges the common assumption that AI inherently democratizes access to technology. It’s counterintuitive and forces the audience to reconsider the relationship between technological advancement and inclusion, highlighting that AI might actually create additional barriers rather than removing them.
This opening statement set the entire tone for the discussion, establishing that the panel would challenge conventional wisdom about AI and inclusion. It created a foundation for all subsequent speakers to address real-world barriers rather than theoretical benefits of AI.
Speaker: Nirmal Bhansali
Understanding the power of the purple economy. The market of assistive tech products for persons with disabilities and people with special needs… India alone has the potential of $150 billion just in this space… It’s not a charitable cause. It’s a simple business proposition.
This reframes disability inclusion from a moral imperative to an economic opportunity, which is particularly powerful in business contexts. The specific $150 billion figure for India alone makes the argument concrete and compelling.
This comment shifted the entire framing of the discussion from viewing inclusion as a cost center to viewing it as a revenue opportunity. It influenced later speakers like Archana to emphasize business viability over CSR positioning.
Speaker: Nirmal Bhansali
Justice in these settings is really not a question of law. It’s become a question of logistics.
This profound observation reframes the entire justice system challenge, suggesting that the fundamental problem isn’t legal knowledge or jurisprudence, but operational efficiency. It’s a systems-thinking approach that identifies the root cause rather than symptoms.
This insight redirected the conversation toward practical, operational solutions rather than theoretical legal tech applications. It demonstrated how AI can address fundamental systemic issues rather than just automating existing processes.
Speaker: Arghya Bhattacharya
Building the plane as we fly it… When the AI revolution started, India had mature DPI, which means the focus has been more on implementing AI on existing, mature, and trusted DPI already in place. That’s not the scenario in many places.
This metaphor captures the reality of AI implementation in developing contexts where infrastructure and AI development must happen simultaneously. It also provides crucial context about why India’s AI adoption differs from other countries due to existing Digital Public Infrastructure.
This comment introduced a critical perspective on the different starting points countries have for AI implementation, adding nuance to the discussion about scalability and the importance of foundational infrastructure.
Speaker: Speaker 1 (Olivier)
If you position inclusion as a CSR initiative, you are also going to get budgets which match the CSR initiatives, which don’t necessarily translate to good products or make good economic sense. So that never works. Don’t do it.
This is a direct, actionable insight that challenges how many organizations approach inclusion. It’s based on practical experience and provides clear guidance on positioning strategy that affects resource allocation and project success.
This comment provided a concrete strategic framework that other panelists and audience members could immediately apply. It reinforced the business case theme established earlier while providing tactical guidance.
Speaker: Archana Joshi
Being non-profit helped us align incentive with the courts better. It automatically took away a lot of the stress around, oh, what are they going to do with my data? Are they going to profile the judges?
This insight reveals how organizational structure can be a strategic tool for building trust and overcoming adoption barriers, particularly in sensitive sectors like justice. It’s a creative solution to the procurement and trust challenges discussed earlier.
This comment introduced an alternative pathway for AI implementation that other speakers hadn’t considered, showing how organizational design can solve technical and policy challenges. It added a new dimension to the investment and scaling discussion.
Speaker: Arghya Bhattacharya
Accessible design is good design. Universal design is good design… There’s so many innovations that started from accessibility efforts. The flatbed scanner, text-to-speech synthesis, OCR, these started as efforts to read books for blind people.
This comment provides historical context that reframes accessibility from a constraint to a driver of innovation. The specific examples make the argument concrete and demonstrate how accessibility-first design benefits everyone.
This shifted the conversation from viewing accessibility as an additional requirement to seeing it as a catalyst for better design overall. It provided a philosophical framework that connected all the practical examples shared by other panelists.
Speaker: Agustya Mehta
Overall Assessment

These key comments fundamentally shaped the discussion by challenging conventional assumptions about AI and inclusion. The conversation moved from theoretical benefits of AI to practical barriers, from viewing inclusion as a cost to seeing it as an opportunity, and from treating accessibility as an add-on to recognizing it as a driver of innovation. The speakers built on each other’s insights, creating a comprehensive framework that addressed design, access, and investment from multiple perspectives. The discussion was particularly powerful because it combined high-level strategic insights with concrete, actionable examples, making the case for inclusive AI both philosophically compelling and practically achievable.

Follow-up Questions
How to effectively scale AI solutions beyond the pilot stage, particularly addressing last-mile diffusion, funding, and limited support systems
This was identified as a fundamental problem where many AI products get stuck in pilot stage due to surrounding system issues, requiring further research on scaling mechanisms
Speaker: Nirmal Bhansali
How to build technical expertise in AI within government departments to improve procurement standards and technical specifications
Institutional capacity building was identified as a critical variable that can make or break AI adoption, requiring research on effective capacity building approaches
Speaker: Nirmal Bhansali
What are the safest applications of AI in legal contexts, particularly around legal intelligence tasks like summarization
The speaker explicitly stated they steer away from legal intelligence applications and advised caution, indicating need for research on safe AI applications in justice systems
Speaker: Arghya Bhattacharya
How to develop comprehensive datasets for low-resource languages like Kinyarwanda to achieve full AI functionality
The speaker mentioned they are building datasets for the Kinyarwanda language as they go, with text and voice capabilities expected to need a couple more years of improvement to reach full functionality
Speaker: Speaker 1 (Olivier)
How to balance the cost of inclusive AI implementation with business viability, particularly regarding diverse dataset acquisition and cleaning
The speaker highlighted the economic trade-offs where inclusion may take a backseat due to higher costs of diverse datasets, requiring research on cost-effective inclusive AI approaches
Speaker: Archana Joshi
How to design effective RFPs for AI procurement that account for rapid technological change in the ICT space
The speaker noted that traditional procurement processes take too long for technology projects where changes occur every three years, requiring research on agile procurement methods
Speaker: Speaker 1 (Olivier)
What are the concrete examples and case studies of innovations that started from accessibility efforts and became mainstream
The speaker mentioned examples like flatbed scanners and text-to-speech but suggested need for more concrete examples to demonstrate to leadership teams how accessibility drives innovation
Speaker: Agustya Mehta

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.