MedTech and AI Innovations in Public Health Systems

Session at a glance: summary, key points, and speakers overview

Summary

This discussion explored the integration of AI and MedTech innovations in public health systems, focusing on three key pillars: cost-effectiveness, care coordination, and operational efficiency. The panel included government officials, healthcare professionals, technology providers, and social impact organization representatives discussing population-scale deployment of AI in healthcare.


Shri Saurabh Jain from the Government of India outlined the SAHI (Strategy for Artificial Intelligence in Public Health) initiative, emphasizing how AI can address specialist shortages in rural areas through tools like X-ray image analysis and diabetic retinopathy screening. The eSanjeevani teleconsultation platform was highlighted as enabling primary health center doctors to consult with tertiary care specialists, ultimately reducing out-of-pocket expenditure and building public trust in government healthcare systems.


The discussion revealed a critical challenge: most AI solutions are looking for problems rather than addressing specific healthcare needs. Panelists emphasized the importance of evidence-based implementation, with states like Andhra Pradesh setting clear problem statements for innovators to address. TataMD’s representative described their approach to assisting medical officers with longitudinal patient data, clinical decision support, and operational efficiency improvements, while ensuring doctors remain the ultimate decision-makers.


Key barriers to scaling AI solutions included data quality issues, the need for integration within existing workflows rather than as additional layers, and change management challenges. The importance of building trust in public health systems through improved primary care quality was emphasized as crucial for reducing the burden on tertiary care facilities.


The panel concluded that successful AI implementation requires collaboration between clinical insights, engineering capabilities, and policy support, with preventive healthcare identified as offering the highest return on investment for population health outcomes.


Key points

Overall Purpose/Goal

This discussion explored the integration of AI and MedTech innovations in public health systems, focusing on three key pillars: cost-effectiveness of healthcare delivery, care coordination through longitudinal health records, and operational efficiency in patient treatment. The session brought together government officials, private sector representatives, and healthcare practitioners to examine how AI can strengthen public healthcare delivery at population scale.


Major Discussion Points

Government AI Strategy and Digital Infrastructure: Discussion of India’s SAHI (Strategy for Artificial Intelligence in Public Health) initiative and the development of digital public infrastructure similar to UPI, including telemedicine platforms like eSanjeevani and efforts to reduce out-of-pocket healthcare expenditure through improved public health systems.


Innovation-to-Implementation Framework: Examination of how healthcare innovations move from development to real-world deployment, emphasizing the need for problem-driven rather than solution-seeking approaches, evidence-based validation, and structured integration of startups with public health systems through initiatives like Andhra Pradesh’s Center for Applied Technology.


AI-Enabled Clinical and Operational Support: Detailed exploration of how AI can assist medical officers with longitudinal patient data, clinical decision support, automated documentation, and help frontline workers like ASHA workers prioritize high-risk cases, while emphasizing that AI should augment rather than replace human decision-making.


Preventive Healthcare and Program Implementation: Focus on AI’s potential in large-scale preventive health programs, particularly in identifying implementation failures before they occur, supporting behavior change initiatives, and improving program effectiveness through predictive analytics and personalized interventions.


Implementation Challenges and Solutions: Discussion of key barriers including data quality issues, change management resistance, digital literacy challenges, connectivity problems, and the need for workflow integration, along with proposed solutions like public-private partnerships, data cooperatives, and improved work culture around evidence-based decision making.


Overall Tone

The discussion maintained a collaborative and constructive tone throughout, with participants openly sharing both successes and challenges. While generally optimistic about AI’s potential in healthcare, speakers were realistic about implementation barriers and willing to be critical of existing systems when asked. The conversation evolved from high-level policy discussions to specific technical challenges and practical solutions, maintaining a problem-solving orientation focused on real-world applications rather than theoretical possibilities.


Speakers

Speakers from the provided list:


Shri Saurabh Gaur – Government official, Andhra Pradesh government, moderator of the session on MedTech and AI innovations in public health systems


Shri Saurabh Jain – Government of India official, involved in healthcare strategy and AI implementation in public health (SAHI – Strategy for Artificial Intelligence in Public Health)


Mr. Shiv Kumar – Works with innovation ecosystem, heads Committee on Advanced Technologies, focuses on institutionalization of healthcare innovations


Ms. Saraswathi Padmanabhan – Representative of TataMD, works on AI-enabled public healthcare systems and public-private partnerships


Mr. Sanjay Seth – Social impact organization representative, works on tobacco control and preventive healthcare programs


Dr. Rakesh Kalapala – Gastroenterologist from AIG Hospital, represents tertiary care and private healthcare sector, involved with AIM Foundation


Audience – Multiple audience members who asked questions during the session


Additional speakers:


None identified beyond the provided speaker list.


Full session report: comprehensive analysis and detailed insights

This comprehensive discussion explored the integration of artificial intelligence and medical technology innovations in public health systems, bringing together government officials, healthcare professionals, technology providers, and social impact organisation representatives to examine how AI can strengthen healthcare delivery at population scale. The session was structured around three fundamental pillars of public healthcare: cost-effectiveness of delivery, care coordination through longitudinal health records, and operational efficiency in patient treatment.


Government Strategy and Digital Infrastructure Development

Shri Saurabh Jain from the Government of India outlined the SAHI (Strategy for Artificial Intelligence in Public Health) initiative, which targets critical healthcare challenges including the acute shortage of medical specialists in rural areas through AI-enabled solutions such as X-ray image analysis and diabetic retinopathy screening. The eSanjeevani teleconsultation platform exemplifies this approach, enabling primary health centre doctors to consult with specialists at tertiary care hospitals.


The strategy aims to reduce out-of-pocket healthcare expenditure by building public trust in government healthcare systems through improved service quality and accessibility. The digitisation efforts are generating substantial health data that can be leveraged to improve hospital workflows and supply chain management. The moderator noted parallels between this emerging healthcare digital infrastructure and India’s successful UPI system, suggesting potential for a Universal Health Interface movement.


Problem-Driven Innovation Framework

A critical insight emerged from Mr. Shiv Kumar’s opening remarks: healthcare solutions are predominantly seeking problems rather than addressing clearly defined needs. He emphasised that successful institutionalisation requires states to set clear agendas and priorities before seeking technological solutions. Andhra Pradesh’s Center for Applied Technology exemplifies this problem-first approach by articulating specific challenges for frontline workers and inviting innovators to develop targeted solutions.


The institutionalisation framework involves establishing who sets priorities, creating bridges between problems and solutions, conducting rigorous ground-level testing to validate health outcomes and cost savings, and building comprehensive use case libraries. The framework also requires robust AI policies that establish guardrails for data sharing whilst ensuring communities benefit from the monetisation of their health data.


AI-Enabled Clinical and Operational Support Systems

Ms. Saraswathi Padmanabhan from TataMD described their implementation in Andhra Pradesh, focusing on four stakeholder groups: medical officers, frontline workers, citizens, and health departments. For medical officers, AI systems structure longitudinal patient data to support clinical decision-making, moving beyond episodic care to provide comprehensive patient histories. The system includes clinical decision support that prompts doctors about necessary investigations, such as foot examinations for diabetic patients.


Importantly, the AI serves as an assistant rather than a decision-maker, with doctors retaining ultimate authority. Operational improvements include automated documentation and conversation summaries, addressing the reality that PHC doctors who are supposed to see 40 patients daily often end up seeing 60.


For frontline workers like ASHA workers, AI systems help prioritise tasks by identifying high-risk patients requiring immediate attention—particularly valuable when individual workers monitor 50 or more pregnant mothers simultaneously. At the health department level, AI provides analytical capabilities that identify trends and support proactive, preventive care strategies.
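The prioritisation described here is, at its core, a risk-scoring and ranking step. A minimal sketch of the idea in Python follows; the risk factors, thresholds and weights are illustrative assumptions for exposition, not TataMD's actual model:

```python
from dataclasses import dataclass

@dataclass
class Mother:
    name: str
    age: int
    systolic_bp: int           # mmHg, from the last antenatal visit
    hemoglobin: float          # g/dL
    previous_complications: bool

def risk_score(m: Mother) -> int:
    """Illustrative rule-based score: higher means the ASHA should visit sooner."""
    score = 0
    if m.age < 18 or m.age > 35:
        score += 2             # age-related obstetric risk
    if m.systolic_bp >= 140:
        score += 3             # possible hypertensive disorder
    if m.hemoglobin < 9.0:
        score += 2             # moderate-to-severe anaemia
    if m.previous_complications:
        score += 3             # history of complications
    return score

def prioritise(caseload: list[Mother], top_k: int = 5) -> list[Mother]:
    """Rank an ASHA worker's caseload so the highest-risk mothers come first."""
    return sorted(caseload, key=risk_score, reverse=True)[:top_k]
```

A deployed system would replace the hand-written rules with a model trained on programme data, but the interface is the point: the worker receives a ranked worklist rather than a diagnosis, and she, not the software, decides what to do at each visit.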


Preventive Healthcare and Large-Scale Implementation

Mr. Sanjay Seth highlighted preventive healthcare as offering the highest return on investment, despite receiving limited political support because such programs are “not glamorous” compared to curative interventions. His experience with tobacco control programmes across 20,000 schools in Andhra Pradesh demonstrates AI’s potential in large-scale preventive initiatives, addressing the 48,000 annual tobacco-related deaths in the state.


AI systems can analyse implementation data to predict where failures are likely to occur before they happen, enabling proactive interventions. The tobacco control programme utilises image recognition technology achieving 98% accuracy in determining whether educational activities meet quality standards. The system generates personalised messages for teachers in their preferred languages, significantly improving programme effectiveness.
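Framed as code, "predicting failure before it occurs" often begins as systematic screening of routine reporting data. The sketch below is a hypothetical illustration of that pattern; the field names and thresholds are invented for the example, not drawn from the tobacco control programme's actual system:

```python
def flag_at_risk(units: list[dict],
                 lag_days_threshold: int = 30,
                 min_quality: float = 0.7) -> list[str]:
    """Flag programme units (e.g. schools) whose reporting lag or quality-score
    trend suggests implementation is slipping, before a formal review catches it."""
    at_risk = []
    for u in units:
        scores = u["quality_scores"]                     # most recent score last
        stale = u["days_since_last_report"] > lag_days_threshold
        slipping = bool(scores) and scores[-1] < min_quality
        declining = len(scores) >= 3 and scores[-1] < scores[-2] < scores[-3]
        if stale or slipping or declining:
            at_risk.append(u["unit_id"])
    return at_risk
```

Even this crude screen turns a dashboard that reports past failures into a worklist of units to contact now; a production system would add a trained predictive model on top of the same reporting pipeline.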


Implementation Challenges and Barriers

Despite promising potential, significant challenges emerged. Data quality and infrastructure represent fundamental prerequisites many states currently lack. More critically, Mr. Shiv Kumar identified work culture as the biggest challenge—not technology itself. Healthcare workers often resist systems that appear to add complexity to already overwhelming workloads managing approximately 25 different public health programmes.


Ms. Padmanabhan identified three primary integration challenges: workflow integration, change management resistance, and the need for appropriate incentive structures. Digital literacy challenges, connectivity issues, and power availability in rural areas create additional technical barriers. Success depends on early adopters demonstrating value to resistant users and ensuring both personal and systemic incentives align.


Cost Reduction and Public-Private Integration

Dr. Rakesh Kalapala from AIM Foundation provided compelling examples of cost reduction through need-based innovations: an AI algorithm for fatty liver detection costs ₹500 compared to ₹5,000 for traditional diagnosis, whilst the diagnostic machine costs ₹1.2 crore. In private healthcare settings, AI has reduced discharge summary preparation time from 8-10 hours to 30 minutes maximum.


The AIM Foundation’s collaboration with institutions including ISB and IIT Delhi exemplifies effective public-private integration, providing a neutral platform for innovators to validate solutions before public system transfer. Dr. Kalapala argued that public-private integration represents the main strength for scaling healthcare innovations in India, with private sector validation accelerating public implementation.


Data Governance and Community Ownership

Mr. Shiv Kumar proposed an innovative approach where people’s health data should be owned by data cooperatives, with reverse tokenisation systems ensuring communities receive compensation when their data trains AI systems. This represents a fundamental shift towards community-owned data governance structures that could provide sustainable healthcare funding whilst respecting data sovereignty principles.


Mental Health and Specialised Applications

The discussion revealed significant gaps in mental health AI applications. Saurabh Gaur, representing Andhra Pradesh government, shared their experience with QPR (Question, Persuade, Refer) methodology for identifying students at risk of suicide among one million intermediate examination candidates. An audience member from AIIMS Bhopal highlighted challenges in developing AI systems for mental health assessment, which rely on voice and video analysis rather than medical imaging, requiring additional considerations for safety and sensitivity.


Future Directions and Remaining Challenges

The discussion concluded with recognition that diagnostics, particularly medical imaging for tuberculosis and diabetic retinopathy, represent the most immediate and impactful AI applications. These can address specialist doctor shortages by enabling primary healthcare workers to provide more accurate diagnoses with AI support.


Critical unresolved issues include the lack of uniform data collection systems across states, funding challenges for MedTech startups with limited venture capital investment, and the need for government support mechanisms like the ICMR sandbox currently being developed for innovation testing. Speakers emphasised the need for a national platform where validated AI solutions can be shared across states to avoid repetitive piloting processes.


Conclusion

The discussion revealed that successful AI implementation in public healthcare requires a fundamental shift from technology-driven to problem-driven approaches, with robust data infrastructure, effective change management, and appropriate incentive structures as critical success factors. The emphasis on preventive healthcare, public-private partnerships, and community data ownership represents innovative thinking that could reshape AI development and deployment in healthcare settings.


Ultimately, AI’s value in public healthcare lies not in replacing human decision-making but in augmenting healthcare workers’ capabilities, improving operational efficiency, and enabling more effective resource allocation to serve India’s population more effectively and equitably.


Session transcript: complete transcript of the session
Shri Saurabh Gaur

Thank you so much. Thank you, ma'am. Welcome to all the ladies and gentlemen who have found time to be present here today as we explore the topic of MedTech and AI innovations in public health systems. There's a lot of AI being bandied about here. What we aim to explore during this session is AI in public healthcare, anchored in the three pillars traditionally associated with public health as a public good. The first is the cost of delivery: the cost at which public healthcare scales from the government side, the cost of public healthcare for the individual, and what AI can bring in terms of cost-effectiveness.

The second is care coordination: how do we ensure that longitudinal health records get built and that clinicians are better equipped to utilize emerging technologies and AI for better care. The third is operational efficiency: how do we ensure that the patient standing in line is treated in the best possible manner, in the lowest possible time, with quality assured. So with these three anchors to public healthcare, I welcome the panel, and let me start with you, Saurabh, my counterpart in the Government of India. When we talk about population-scale deployment of AI systems in healthcare, how do we ensure that the population is

Shri Saurabh Jain

Thank you, Saurabh. I would like to inform all of you: most of you must have learned about the recent healthcare strategy that has been launched by the Government of India. It is called SAHI, the Strategy for Artificial Intelligence in Public Health. As part of that, lots of activities in the field of AI are already happening. We know there is a real lack of specialists, especially in rural areas. Through AI techniques, services are being provided, whether it is scanning X-ray images or screening for diabetic retinopathy through AI tools. So even in resource-constrained settings, we are able to provide good-quality healthcare services to citizens.

We have the eSanjeevani platform, the teleconsultation service, where a doctor who is there in the PHC can take expert opinions from tertiary care hospitals. I also see artificial intelligence contributing to an overall reduction of out-of-pocket expenditure, because that is one of the main goals of ensuring universal health coverage. By building more and more such systems, and by bringing trust and safety considerations into the public health system, we are creating public trust in the public health system, so that people actually come towards it, rely less on private healthcare, and thereby reduce out-of-pocket expenditure.

Similarly, a lot of digitisation is happening, and a lot of records now exist as digital data. Through this digital data we can improve the overall workflows in hospitals, and, in our resource-constrained settings, supply chain management as well. So these are the kinds of innovations the health ministry is looking for, to ensure universal health coverage: a person even in the remotest area should get the best-quality coverage, and at the least cost. That is the Government of India's strategy as far as the adoption of artificial intelligence in healthcare is concerned. Thank you.

Shri Saurabh Gaur

So you have talked about innovation emerging as a centerpiece in public health. With the strategy of AI adoption in healthcare, the SAHI strategy, we may look at a UHI movement, just like there was a UPI movement: the digital public infrastructure in health being set up and all the interface layers getting established. But that also means bringing a large innovation ecosystem to healthcare. So with the work that you have done over a long time, Shiv ji, how do you look at the innovation-to-institutionalization framework? In the sense that, while every other day there is a health tech startup coming in, how does it get integrated in a structured manner with the public health system?

Mr. Shiv Kumar

Thank you, and good afternoon everyone. One of the important things we need to recognize, at least in AI, is that currently solutions are looking for problems, not the other way around. Therefore it is important to marry what is the problem which is important for the state and then bring the solution together. So the first step of institutionalization is about how you apply technologies, who sets the agenda, and who is setting the priority. Like the way the Andhra government has set up the Center for Applied Technology, which has put out a call to say: these are the problems we want to solve for our frontline workers. I think that's the first step of institutionalization. Second, although I said solutions looking for problems and problems looking for solutions, these are not either/or.

I think both are important. We never knew we all needed a smartphone; but at the same time the smartphone has become a problem now, in some sense. So continuous bridging is needed, and a very critical element is taking it to the ground and actually sharing the real evidence, because every innovator will want to say their technology is fantastic. Have you come across any innovator who says their technology is not good? All of them will say it's fantastic, it works the best, it is the best. That's okay; that's what an innovator is supposed to do. Whereas I think the state has the responsibility to test it on the ground, to look at the feasibility, to see whether it actually changes health outcomes.

Does it actually save cost, as sir was mentioning? The third element of institutionalization, sir, is also the use case library that we need to build. There is a lot of discussion around this can do that, that can do this. Where's the evidence? Where's the use case library? Where has it worked? Has it worked with tribal communities? Does it work for the last poor woman, tribal leader or ordinary person? That becomes very, very critical. And the last part is around AI policy: policy and processes where the guardrails are built, so that the state has a very clear policy on how we share data and how we ensure that all the data shared by the community is actually monetized for them.

Shri Saurabh Gaur

Thank you. You started with a great point: that most of the time innovators come with solutions and are looking for problem statements. But in Andhra Pradesh, we have articulated the problem statement clearly, in terms of how you drive an AI-enabled public healthcare system at population scale. And that's where one of our partners is TataMD, which is represented by Saraswathi Padmanabhan here. I believe you have also set up a fantastic stall on the digital system that you have built for healthcare delivery. So would you want to talk about your experience, and how do you bring a private sector ecosystem into public health and enable a public healthcare system?

Ms. Saraswathi Padmanabhan

Thank you, sir. As sir mentioned, I represent TataMD; please visit our stall in hall number 5, where we have showcased what we are doing, but I will explain it in simple terms. In the public health system, we are looking at AI as assisting the entire system, and I will divide that into three or four aspects. One is the medical officer: how can the medical officer gain from the assistance of AI? Normally, when you go to a PHC, the doctor would ask for the vitals to be taken and ask what complaint the citizen has come with, but it tends to be episodic, not longitudinal. So we are looking at how, with the help of AI, we can share with the medical officer, in a structured manner, the entire longitudinal data of the citizen, so that the doctor knows this is not just episodic care; we are talking about a continuum. So how can we ensure that we understand the citizens?

If an NCD patient comes and mentions that he has an HbA1c of 8, has it been the same? Has it come down, or is it increasing? That trend will help the doctor decide the medication. Or normally they would ask what medicine the patient is consuming and say either continue or stop. But with longitudinal data, they will be able to say: is this medicine actually working or not? How do I ensure that the patient is taken care of better? So it helps to structure the data in a manner the doctor can use. Secondly, there is the workload: I don't know how many of you have actually visited a PHC and seen the workload that a medical officer faces.

Many times they are just rushing through the citizens. They do not always have the time, and sometimes they may miss an investigation which is required for a particular case. So AI can give that prompt, saying: this is the history, this is the data; maybe we should get a foot examination done for this diabetic patient, since he has not done it for the last few days. So basically we are looking at AI as assisting the medical officer with a clinical decision support system, so that nothing is lost and there is no oversight. Plus there is an evidence-based treatment guideline which can be shared with the doctor. And finally, the decision maker is the doctor.

We are not here to say that the AI will decide. The decision maker is the doctor; the AI is to assist. So this would be more on the clinical side. Similarly, on the operational side, all of us know the time that is spent in detailing out what the conversation with the patient is. We are looking at how that can be done in a meaningful manner in a rural public health system. We all know that in a closed room the listening can be better and ambient listening can be more easily managed, but here we are looking at different dialects and different contexts: how can we make that better? So that's the second part, for the clinician.

Looking at the frontline workers, with some of the AI bots we are looking at how we can help them with their tasks. If an ASHA is looking at 50 pregnant mothers, how can she prioritize who is the one she needs to look at, who is the high-risk mother she needs to prioritize? All of them are loaded with work, but AI can help them schedule their tasks and do them in a better manner. And lastly, if I look at it from the side of the public health system, the government, the health department: we are looking at how AI can provide analysis in a manner that makes better sense for the government to see the trends.

How can the data show them that this is the key problem in this particular area? We have been talking with the Andhra Pradesh government about creating a composite wellness score, which looks at patients and their environment and creates a score that can tell them where the problems are and provide solutions. Basically this is going to strengthen the health department in identifying risks, predicting risks, and looking at ways to do proactive, preventive care. So this is the way we are looking at ensuring that AI provides support to all the stakeholders, using the data that is being provided. And there is a lot of deeper work: while what I say is on the surface, the deeper work is how you understand the data and how you capture data across different geographies.

So that it’s more meaningful and it’s

Shri Saurabh Gaur

I'll come back to you on the challenges you face while working with government, especially given that the government is probably implementing 25-odd public healthcare programs, ranging from preventive healthcare to maternal and child health to geriatric care, and so on. But bringing a different stakeholder into the conversation: Mr. Seth, you represent a social impact organization and have been working on tobacco control. From your perspective, where is the maximum value of AI in healthcare, while you've heard the others talk about digital platforms, enablement and innovation? What, in your mind, will be the biggest AI value generation for public health?

Mr. Sanjay Seth

Thank you. Thank you, Mr. Gaur. You know, for large public health programs (TB programs, prevention, tobacco control, adolescent health), the real question is where AI can actually help them day to day rather than in theory. And if you look at most of these programs across states, the failure is not because of the design, nor because they are not reviewed, but because of variable implementation across areas. Now, the data exists, reviews are done, dashboards exist, but we very often find out what is going wrong after the event, when the failure has already occurred. I have heard so many senior-level IAS officers lament: the dashboards only tell me what I have not done; they don't tell me what I am supposed to do.

Now, that is where I think AI can come in and add a huge amount of value: by telling you where a failure is likely to occur, identifying where it is happening, and identifying who has to take action on it, so that the person can be informed and the action can be pushed. But for this, AI has to exist inside the delivery system, not on top of it. So, in my mind, that is where AI fits into the delivery system.

Shri Saurabh Gaur

Typically in a PHC, a Primary Health Care Centre, a doctor who is supposed to see around 40 patients ends up seeing 60; at least that is the statistic for Andhra Pradesh. You have all been to institutions like AIIMS where, because there is no care coordination, everybody seems to end up in a tertiary healthcare setup. So, representing a tertiary care unit, Dr. Rakesh, coming from AIG Hospital: how do you do clinical augmentation for doctors, and in private healthcare, what are the lessons that have been learned and can be adopted in public healthcare as well?

Dr. Rakesh Kalapala

Thanks, Saurabh. In fact, I would start by saying AI is going to reduce the cost, both in public and private health, not by replacing doctors but through early diagnosis and intelligent triage. For example, as my co-speaker said, this is a 1.4 billion country, and as of now I think the market is going to be around $20 billion; it's growing day by day. So any hospital, whether a primary, secondary, or tertiary care hospital, has a huge volume of input, and it's very difficult for anybody, even a robot, to match that human scale.

So these need-based innovations are something we really have to look at. Suppose I, in my hospital, have something I find difficult to do, and another hospital has something else; we have to catch the need-based innovation and then try to solve it. For example, in a private setup, there are some use cases where we have had personal experience over the last three or four years. I'll tell you a little bit of the economics. There is an algorithm which we developed with a pure AI model, costing 500 rupees to pick up a fatty liver, versus getting a machine which costs 1.2 crore and charges 5,000 rupees per test. So this is a need-based innovation for me as a gastroenterologist.

I'm a hardcore clinician, and I look at metabolic disorders, which are the crux of the entire metabolism. If you have tools like this, they give you a lot of value in terms of fast diagnosis as well as scaling down the economics at your level. Then there are other use cases with the EMR and EHR. In fact, Saurabh ji, there is a lot of chaos when you have admissions in a hospital. So we have a use case where patients stand there and the discharge summaries take 8 to 10 hours before they can come out of the hospital. We now have an AI-enabled system where the discharge summary is ready in half an hour at most, once I say my patient has been discharged.

Vis-a-vis when you have electronic medical records and you want patient bed management done: there's a huge line where you start. It's all personal experience; everybody in this hall goes and stands in that queue. And it's not that you have to blame the hospital authorities, but those are the areas where you need these AI-enabled systems, whether digital health or a clinically oriented AI system. That's where we have to concentrate.

Shri Saurabh Gaur

So while you do that, my question is again to you, and we'll go around the panel in reverse order now. Looking at the efficiencies coming out of the private sector: given that you will be early deployers of MedTech solutions, and that you will have built their use into your cost economics, how do you think you can collaborate with state governments, or governments at large, to accelerate their adoption in the government ecosystem as well?

Dr. Rakesh Kalapala

It's a very valid point. See, as private sector people we have earlier adaptation and adoption compared to the public sector. But on that note, I would bring up the AIM Foundation, which is working closely with the Government of Andhra Pradesh and other governments. What we did is form a platform with IIIT Hyderabad, the Indian School of Business, IIT Delhi, and the FIT. It is a neutral platform where anybody can come and pitch their idea; we handhold them, nurture them, and then validate it at our clinical level. Once we have the products, for example Journey Mitra, which was launched by the Government of Andhra Pradesh as my co-speaker said, the ASHA workers can pick them up: an AI-enabled system for identifying high-risk pregnant mothers and improving nutrition to decrease IMR and MMR. Tools like this we can build at our level, and once we validate them and feel confident, we can give them to the public systems. So there should be public-private integration, which is the main strength for this country.

And only then will this scale fast, because time is running fast and nobody waits for us. We have to keep up and get to the solutions, because we cannot adopt the Western world's solutions as-is. Ours is an entirely different system. We can never take a Western AI algorithm and simply try to adapt it. We have to have our own algorithms, and we can build them fast because of the population we have, the volumes we have, and of course the zeal we have.

Shri Saurabh Gaur

That's great. In fact, we are working with the AIM Foundation and looking at setting up a biodesign lab in Andhra Pradesh with the AIM Foundation and all the other institutes you've talked about. And I see a lot of facilitation of deployment of MedTech solutions happening through the CAT, the Committee on Advanced Technology, and the biodesign lab. But while we talk about all these MedTech solutions, the core is something that we also believe as a state: preventive health care has to be strengthened. With prevention as an entry point, where do you see AI playing a role in strengthening preventive health care, Sanjay ji?

Mr. Sanjay Seth

So I think, as we all agree, prevention is better than cure, and preventive programs will have the highest ROI. Unfortunately, preventive programs are not politically supported. And that is where AI comes in. Take adolescent health, student health, nutrition; take non-communicable diseases, where we are talking about behavior change across entire populations. That has become the most important area: today the maximum number of deaths are taking place because of NCDs. That's where preventive health comes in. Now, why AI really fits into this: because these programs operate at scale, and they require continuous and repetitive activities to be done.

And they also show very predictable gaps during implementation: where the drop-offs are taking place, where the failures are taking place in program implementation. The number of variables is also very large, because as soon as you talk of behavior change you are dealing with a huge range of cultures, and different cultures respond differently. If you take that mass of data, this is where AI can really support the programs and bring not just the cost down, but the effectiveness of the programs up. But as I said earlier, AI needs to be within the delivery itself, not as a layer on top.

And if we then focus on how these schemes or programs result in outcomes, this is where I feel AI can give very fast feedback. All these different entities, facilities, and units are feeding in a huge amount of program data. AI can analyze where the likely failure points are, escalate them to the appropriate level, and bring them to the attention of senior people, and that will result in far, far better delivery outcomes.
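The escalation logic described, flagging likely failure points in program data and raising the worst to senior attention, can be sketched as a toy rule. This is an illustration only; the facility names and thresholds are invented, not any actual program's system.

```python
# Hypothetical sketch: tier facilities by reported activity completion rate
# into a watchlist (drop-off) and an escalation list (likely failure).

def flag_at_risk(facilities, expected_rate=0.8, escalate_rate=0.5):
    """facilities: (name, activities_done, activities_planned) tuples.
    Returns (watchlist, escalations)."""
    watchlist, escalations = [], []
    for name, done, planned in facilities:
        rate = done / planned if planned else 0.0
        if rate < escalate_rate:
            escalations.append(name)   # likely failure: bring to senior attention
        elif rate < expected_rate:
            watchlist.append(name)     # drop-off: monitor and support
    return watchlist, escalations

facilities = [
    ("PHC-A", 9, 10),   # 90% complete: on track
    ("PHC-B", 6, 10),   # 60%: drop-off, watchlist
    ("PHC-C", 2, 10),   # 20%: escalate
]
watch, esc = flag_at_risk(facilities)
print(watch, esc)
```

A real deployment would of course learn these thresholds from historical outcomes rather than hard-code them.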

Shri Saurabh Gaur

So can you be more specific? For example, in the tobacco control program that you run, Sanjay ji, across, say, 20,000 schools, and I do not know exactly how many schools you are working with: are you able to make those kinds of predictions about where the program is probably bound to fail, or is heading toward a failure condition, and the actions that need to be taken?

Mr. Sanjay Seth

Oh, yes. We are running the tobacco control program in Andhra, as you know, in more than 20,000 schools. Each school is supposed to do a standard set of nine activities. So very early, we are able to see which schools are not doing certain activities. If we started analyzing manually across 20,000 schools, there's no way we could do it. So AI helps us analyze the data and say: this block, this district, this area, there is a failure taking place; these schools are not taking action. And then, after the analysis, the schools which are acting upload their activities as they do them.

Now we do image recognition and decide whether the activity has been done correctly or not. With very high accuracy, 98%, we are able to see whether the activity has been done correctly, and that enables us to give feedback immediately: as soon as someone uploads, shortly afterwards they get feedback saying you haven't done this properly, please repeat it. Then, when it comes to informing people: in Andhra Pradesh, for instance, 40,000 teachers now get personalized messages from us, for each person in the language and the tone they prefer, and that makes their motivation, and the way they act, much faster than before. So we are seeing orders-of-magnitude improvement in the effectiveness of the program. And not only in Andhra; in other states we are seeing the same thing happening across other programs we have been working on.
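The gap analysis described above, checking each school against the standard set of nine activities and aggregating failures by district, could be sketched as follows. The school and district names, and the activity identifiers, are made up for illustration.

```python
# Illustrative sketch: find, per district, which schools are missing which
# of a standard set of nine required activities.

REQUIRED = {f"activity_{i}" for i in range(1, 10)}  # the standard nine

def find_gaps(reports):
    """reports: {(district, school): set of completed activity ids}.
    Returns {district: {school: [missing activity ids]}}."""
    gaps = {}
    for (district, school), done in reports.items():
        missing = REQUIRED - done
        if missing:
            gaps.setdefault(district, {})[school] = sorted(missing)
    return gaps

reports = {
    ("Guntur", "ZPHS-1"): {f"activity_{i}" for i in range(1, 10)},  # complete
    ("Guntur", "ZPHS-2"): {f"activity_{i}" for i in range(1, 8)},   # two missing
}
gaps = find_gaps(reports)
print(gaps)
```

The image-recognition step the speaker mentions would sit upstream of this, deciding whether an uploaded activity counts as done before it enters `reports`.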

Shri Saurabh Gaur

This is very heartening to see. While, for example, you may be doing a tobacco control program with us in Andhra Pradesh, there is a cancer care program happening in Tamil Nadu that we got exposed to in one of the workshops. Other states are doing fantastic work; I saw Odisha's stall today, for example. So I put it to you, Shiv Kumar ji: while we have all these islands of excellence and innovation, what prevents them from scaling up? What are the structural barriers that government is not able to address? While we talk about ease of doing business, what is the ease of doing governance, of public governance and the public health care system, that will make them scale up?

Mr. Shiv Kumar

Unless robust processes are in place and the data quality actually improves, AI models really can't work on top of it. There are exceptions, in terms of surveys and various other data points that exist, but most states don't have that. And therefore, unless the processes throw data out, I think we can all dream about AI, but really getting the kind of value that we are talking about is going to be very, very difficult.

Shri Saurabh Gaur

Thank you. That's a great point. And while we are at a certain maturity level in the state government of Andhra Pradesh, there are other states which do equally well, and there are states which are probably lagging. But with the national framework being put in place, and with the Ayushman Bharat Digital Mission, I bring it to Saurabh, my colleague. In the Government of India, how do you facilitate all state governments to at least come on par, and how do you see AI within the national health systems becoming gold standards, or at least standards, for all the other states to follow?

Shri Saurabh Jain

So, as we know, health is a state subject, so ultimately the Government of India works in collaboration with the state governments. And we understand that for the kinds of AI systems, algorithms, and applications that have been developed, the quality of output ultimately depends upon the data on which they are trained. That is why it is very, very important that the data be representative. It should come from every region, because every region has a different disease profile and different demographic profiles. So it is very, very important that the data is representative and, as was mentioned, that the data quality is very good.

And in fact, through the Ayushman Bharat Digital Mission we have more and more digitization, and every person is now being provided with the ABHA ID, an ID linked with their health records so that the records can move with the person. With all this digitization and all the data being generated, we can make a lot of use of AI for disease surveillance. We can use it for modeling of various diseases. We can use it for imaging, such as MRI, because, as I mentioned earlier, there is still a big issue with the availability of specialist doctors, especially in rural settings.

So if AI solutions are available, at least 90% of basic imaging can be taken care of by the AI, so that only the most suspect cases are referred to tertiary care hospitals and basic healthcare is managed at the facility level. And as my colleagues have also mentioned, one of the issues in public health delivery is that the workforce is totally preoccupied with administrative work: lots of data entry, lots of portals into which they have to enter data. That takes a lot of time beyond what is expected of them, which is their clinical duties. With more AI applications and more systems getting digitized, we can have a system where data fed into one portal is automatically populated across all the portals, and the administrative burden on these frontline healthcare workers is substantially reduced, so that they can focus more and more on the actual clinical work.
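The "enter once, populate everywhere" idea can be pictured as mapping a single canonical record onto each portal's differing field names. The portal names and schemas below are invented for illustration; a real ABDM-style integration would work against published APIs rather than a hand-written mapping.

```python
# Hypothetical sketch: fan one health-worker entry out to several reporting
# portals that expect different field names for the same data.

PORTAL_SCHEMAS = {
    # portal name -> {portal field: canonical field}
    "portal_nutrition": {"patient_id": "abha_id", "wt_kg": "weight"},
    "portal_ncd":       {"id": "abha_id", "weight_kg": "weight"},
}

def fan_out(record, schemas=PORTAL_SCHEMAS):
    """Map one canonical record into each portal's expected payload."""
    return {
        portal: {field: record[src] for field, src in mapping.items()}
        for portal, mapping in schemas.items()
    }

payloads = fan_out({"abha_id": "XX-1234", "weight": 52.5})
print(payloads)
```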

So AI is ultimately about improving efficiencies and improving workflows. We have supply chain management also; it is about optimizing supply chains. And in this entire journey of AI adoption, we take the states as our partners, because only when the Government of India and the states work together can we have a very robust AI system which actually delivers quality care to our people.

Shri Saurabh Gaur

We are here to work with the Government of India very closely and establish those models. But on the point you made: I actually cannot imagine working as an ANM myself, despite heading a state health department. The sheer fact is that a poor ANM, or an MPHA (male), the multi-purpose health assistant, or the nurse in the field, has to work with 25 programs. And while this national architecture is coming up, there is a real digital literacy challenge, and not just literacy but adoption and using all these apps. This is a challenge we face at the state level too. And with TATA, as we build the digital backbone through Project Sanjeevani, which we are doing together in a collaborative approach, an example of public-private partnership, I would want you, Saraswati, to play the devil's advocate and tell us the three key technology integration challenges that you see.

And please be critical of the system. Tell us what you would want to see: what are the challenges that you face day in, day out, when you look at building this care-coordination-oriented digital backbone for public health in Andhra Pradesh?

Ms. Saraswathi Padmanabhan

Thank you, sir. A tough question to answer, especially in a public forum, but I will do my best. So, one of the things, as we have spoken about: in a PHC there is a lot to be done, and as you mentioned, sir, the staff have a lot of activities, a lot of programs, a lot of reporting that they are doing. Introducing AI, or bringing in technology, as something additional or something from outside, as Sanjay ji said, is definitely a challenge. So our aim, and what we have realized, is that if it is not integrated into their workflow, if it is not something they find value in, adoption is going to be a challenge. As Shiv Kumar ji mentioned, people are collecting data and just sending data, but is that data really helping them?

Is it helping the people who are collecting the data? Is it helping them do their work better? Are they able to benefit from the work they are doing? If the answer is no, they definitely will not take it up. So one thing is how we integrate into the workflow so that they find value in what is being introduced. The moment we reach that sweet spot, I think they will start utilizing it. So one is how it's integrated into the workflow. Second, I think, is that it's less about technology management and more about change management. Whenever something is introduced, people look at it; there's a cycle of adoption, similar to what we all face whenever we are introduced to anything new.

There will be a set of people who are ready to adopt it and are forthcoming. Then there will be a lot of people who resist. Slowly the pattern changes, and people start seeing benefits. So what we are trying to get at is: who are the people who see value in it, who are the early adopters, and whom can they bring in? And probably with them we train the model. As Shiv Kumar ji rightly said, if we do not train the model correctly, you are not going to get the good benefits. So who are the people who can be engaged to train the model and will not resist? Then you give the trained model to the people who are resisting, so that they can see value. So I think it is a lot of change-management-related resistance which we need to address. And lastly, while Andhra is a very progressive state and we do not see this as a challenge here, generally connectivity and power availability tend to be among the other challenges in making a system-wide change. As I said, in Andhra those are thankfully not issues we have seen. But finally, I think it's about the incentives, right?

What are the incentives for people to adopt? If there are incentives for them to adopt both from the state side and from their personal side, the adoption tends to be easy. So, it’s a lot of work that we need to do to make sure that this is taken at scale, sir. Thank you.

Shri Saurabh Gaur

So, I think we have time for one more quick round, and I want to keep it short. Thinking aloud, the question is to all of you on the panel: what is the one maximum-impact zone, or maximum-impact innovation, that you feel, based on your engagement with public health or with healthcare, should be happening? And I start with Dr. Rakesh. What do you think will be the one most impactful thing we can do?

Dr. Rakesh Kalapala

That's a very difficult question. There are many things to do. But what I would say in a nutshell is that, in the current scenario, it is the clinical insights from the doctors, the engineering capability from the bioengineers or AI engineers, and the policy support from people like you that will take AI-related or MedTech-related innovation from the lab to lives. It has to be a collective, holistic approach; we have to join hands together. That is the need of the hour. Thank you.

Shri Saurabh Gaur

I'll make it simpler for you, Sanjay ji. In preventive healthcare, what is the one most impactful innovation that should happen?

Mr. Sanjay Seth

Since I'm working in that area, that is obviously where I will point, but look at it: in Andhra Pradesh, 48,000 deaths every year because of tobacco usage. And if you take adolescent health, the future of our youth depends on how well our adolescents grow. As I said, preventive health has the highest return on investment, and it is not glamorous. It is very dull. It requires an enormous amount of day-in, day-out discipline. But as a state, if you are looking at what can really give you the maximum benefit, I would argue for preventive health. Thank you so much.

Shri Saurabh Gaur

To you, Saraswati ji: in terms of engagement, and in the public-private partnership mode, what is the most impactful thing that can be done?

Ms. Saraswathi Padmanabhan

I would probably respond slightly differently. I think bringing back trust in the public health system would be the focus, and that hinges on the quality of care we are able to provide in primary care. That is what will ensure that the need for tertiary and secondary care, and the disease burden we are envisaging, can be managed: strengthening primary care, with trust in public primary care. Thank you.

Shri Saurabh Gaur

And Shiv ji, with you heading our Committee on Advanced Technologies and doing all the work with the innovation ecosystem, what do you think is the most standout innovation you have seen that can be impactful for public health?

Mr. Shiv Kumar

Sir, I'm going to be a little controversial on this. I think technology is just an enabler. I think our single biggest problem is going to be work culture. Work culture, incentives; today every officer feels that they need to see a dashboard and tell their team what to do. If we really have to make AI help everybody decide, then the work culture around evidence, the work culture around data, is going to be the biggest factor. But I will answer your question. The biggest innovation should be that people's data is owned by data cooperatives. Nellore is a district in Andhra; the people of Nellore should own their data through a data cooperative, and we should have reverse tokens where people are paid for their data.

We are feeding the AI engines, and I think our people should gain from that. When we reverse that, sir, when we reverse the incentives and the work culture around the use of data, I think you will automatically find people coming and telling you: this is how I am using it. Thank you.

Shri Saurabh Gaur

That's very interesting. And what about you? What do you think can be the most impactful thing, at a national scale as well, for health innovation?

Shri Saurabh Jain

I would also like to address the work culture issue that you mentioned. In fact, we need to sensitize our doctors and health workers, and make them confident that the outcomes from AI systems are predictable and good, and that by adopting AI systems their productivity improves: the same work can be done in fewer hours, and they can do better in terms of their clinical approach and their productivity. Our health workers have adapted very swiftly to technology. If we can show them the reliability, the certainty of the outcomes, and the overall improvement in their productivity, the workforce will definitely adapt to this technology.

And as far as your question goes, I think diagnostics will play a very, very important role in the adoption of AI. We are seeing it in the area of tuberculosis and also in diabetic retinopathy, where, through the scanning of these images, doctors can make a very evidence-based decision in much less time. In the same time in which they were seeing 10 patients, with the support of AI they can see 20 or 30 patients with much more accuracy. Given the shortage of doctors we have and the patient load we have, especially in tertiary care, I think diagnostics will play a huge role in the field of AI.
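The triage pattern described here, where an AI model scores each image and only high-suspicion cases are referred onward, reduces to a threshold rule once scores exist. The case identifiers, scores, and threshold below are invented; in practice the threshold would be set from clinical validation, not chosen arbitrarily.

```python
# Illustrative triage rule: refer only cases scoring above a suspicion
# threshold to the tertiary centre; handle the rest at the facility level.

def triage(scores, refer_above=0.7):
    """scores: {case_id: model suspicion score in [0, 1]}.
    Returns (refer, handle_locally)."""
    refer = [case for case, s in scores.items() if s >= refer_above]
    local = [case for case, s in scores.items() if s < refer_above]
    return refer, local

refer, local = triage({"case1": 0.92, "case2": 0.15, "case3": 0.40})
print(refer, local)
```

The clinically important design choice is the threshold: for screening programs it is usually set low enough that false negatives (missed referrals) are rare, accepting more false positives in exchange.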

Thank you.

Shri Saurabh Gaur

I think that's almost all the time we have, but we still have time for one or two questions from the audience. The gentleman at the back raised his hand first, or at least I spotted him first. There's a mic behind you.

Audience

From a mental health perspective: that requires additional safety and security, as well as sensitivity. But I have not seen anyone touch on it, either yesterday or today. Mostly we talk about medical imaging, because radiology takes all the innovation. There are developments, but I was hoping to get some insight here, and so far I have not.

Shri Saurabh Gaur

Dr. Rakesh you want to take it?

Dr. Rakesh Kalapala

I have a point on that; it's a very nice question. On mental health, there are people in the Western world who have apps and are doing this, but in India, unfortunately, there is no robust system to collect the data. If you are working in a private hospital and you have a robust EMR and EHR, then you may have a questionnaire on which you can build these things. But for that uniformity to come, it will take a little more evolution. There are people working on it in the Indian sector; it will probably take a little more time. In fact, you could be the one to start that at your level.

Audience

No, we are working on it, actually; we are struggling. I am at AIIMS Bhopal, and what we have is audio recording. We don't have medical imaging; we have either mental status examinations or video recordings, and based on those voice recordings we work on things like detection of suicidal ideation, detection of depression, anxiety, and so on. So I was asking whether some assistance or guidance could be given.

Shri Saurabh Gaur

So I will just respond to this. What we have done in Andhra Pradesh is work with psychiatrists using a methodology called QPR: question, persuade, refer. It is proven, in the sense that it is patented. We worked with them especially on students who go through high pressure, those in intermediate education, 11th and 12th, where there is pressure to perform in examinations and parents are pushing them. Out of the 10 lakh students appearing for the examination, our estimate is that around 15% need special focus. So we took all of them through this QPR methodology, working with an organization called the Suicide Prevention Foundation of India, SPFI. With them, we have been able to at least identify the students who need specific focus, who have those kinds of ideations or vulnerabilities, and what kind of messaging needs to go to them. It is a challenge, and I am not saying there is a lot of AI in it, because there are a lot of privacy issues associated with this. But the other point you made, about a scribe: since people are talking, you can actually get into the behavioral insights, understand what kind of ideation is happening, and get answers out of that. I think that is a great point, and we would love to work with any innovator who would want to do it as a sandbox with us.

Thank you. One more question. Yeah. No, no, we will go to. You are an in -house person. Yeah, please go ahead.

Audience

So first of all, thank you, Saurabh Gaur sir. About that MedTech challenge: I think this is the first time we have seen a state government opening up and saying, why don't you innovate and come with your solutions, including small startups like us, and we will give you a platform to pilot them and then finally help us in scaling up. My question is more to Saurabh Jain sir, because we need to replicate something like this at the central government level. As a startup, we definitely cannot go to all the states and keep doing pilots, while MedTech is a segment with almost zero VC or private investment.

So we largely run on government grants, our own saved money, or loans. Can the central government create a platform from which Saurabh Gaur sir or other state governments can take validated solutions and scale them up, so that we don't just keep repeating the same thing?

Shri Saurabh Jain

I think yes. ICMR is developing this kind of sandbox in which startups can come with their innovations and test them. So ICMR is actually developing this kind of mechanism to test the models. And ultimately, it's about replication, as you have mentioned. Once a solution is tested across various settings, then depending upon the outputs it can definitely be scaled up.

Shri Saurabh Gaur

Thank you so much. Audience, you deserve a round of applause for being a very patient audience. And I thank all the panelists for their very valuable insights. Thank you, everyone. There is a memento to give as well; we can quickly hand over the memento to Dursu. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Shri Saurabh Jain
5 arguments · 159 words per minute · 1216 words · 456 seconds
Argument 1
Government has launched SAHI (Strategy for Artificial Intelligence in Public Health) to address specialist shortages and improve healthcare delivery in rural areas
EXPLANATION
The Indian government has implemented a comprehensive AI strategy for public health that focuses on leveraging AI technologies to overcome the shortage of medical specialists, particularly in rural areas. This strategy aims to provide quality healthcare services through AI-enabled tools and platforms.
EVIDENCE
Examples include AI tools for scanning X-ray images, diabetic retinopathy screening, and the eSanjevani teleconsultation platform that connects doctors in Primary Health Centers with specialists in tertiary care hospitals
MAJOR DISCUSSION POINT
AI Implementation Strategy and Framework in Public Healthcare
DISAGREED WITH
Mr. Shiv Kumar
Argument 2
AI can reduce out-of-pocket expenditure and build public trust in healthcare systems, supporting universal health coverage goals
EXPLANATION
By implementing AI systems that improve the quality and efficiency of public healthcare, the government aims to increase public confidence in public health systems. This would encourage people to rely more on public healthcare rather than expensive private healthcare, thereby reducing their out-of-pocket expenses.
EVIDENCE
The strategy focuses on building trust, safety considerations, and digitization of records to improve workflows in resource-constrained hospital settings and supply chain management
MAJOR DISCUSSION POINT
Cost Reduction and Efficiency in Healthcare Delivery
AGREED WITH
Dr. Rakesh Kalapala, Ms. Saraswathi Padmanabhan
Argument 3
Representative data from all regions is essential for training AI algorithms due to varying disease and demographic profiles
EXPLANATION
The quality and effectiveness of AI healthcare systems depend heavily on the data used to train them. Since different regions have different disease patterns and demographic characteristics, the training data must be representative of all areas to ensure the AI systems work effectively across diverse populations.
EVIDENCE
Through Ayushman Bharat Digital Mission and ABHA ID system, more digitization is happening to generate representative data for disease surveillance, modeling, and imaging applications
MAJOR DISCUSSION POINT
Data Quality and Infrastructure Challenges
AGREED WITH
Mr. Shiv Kumar, Ms. Saraswathi Padmanabhan
DISAGREED WITH
Mr. Shiv Kumar
Argument 4
Diagnostics, particularly medical imaging for TB and diabetic retinopathy, will play a crucial role in AI adoption
EXPLANATION
AI-powered diagnostic tools, especially for medical imaging, can significantly improve healthcare delivery by enabling faster and more accurate diagnoses. This is particularly important given the shortage of specialist doctors, as AI can handle basic diagnostic tasks and refer only complex cases to specialists.
EVIDENCE
AI can handle 90% of imaging cases, allowing doctors to see 20-30 patients with AI support compared to 10 patients without it, while only referring the most suspected cases to tertiary care hospitals
MAJOR DISCUSSION POINT
Clinical Decision Support and Care Coordination
DISAGREED WITH
Dr. Rakesh Kalapala, Mr. Sanjay Seth
Argument 5
ICMR is developing sandbox mechanisms for startups to test innovations before scaling up
EXPLANATION
The Indian Council of Medical Research is creating a testing environment where healthcare startups can validate their AI and medical technology innovations in controlled settings. Once tested and proven effective across various settings, these solutions can be scaled up and replicated.
MAJOR DISCUSSION POINT
Innovation Ecosystem and Scaling Solutions
AGREED WITH
Dr. Rakesh Kalapala, Ms. Saraswathi Padmanabhan
M
Mr. Shiv Kumar
5 arguments · 185 words per minute · 670 words · 216 seconds
Argument 1
Solutions are currently looking for problems rather than the reverse; states need to set clear agendas and priorities for AI implementation
EXPLANATION
The current AI landscape in healthcare is characterized by technology developers creating solutions and then searching for problems to solve, rather than identifying specific healthcare challenges first and then developing appropriate solutions. States need to take the lead in defining their priority problems and then seek appropriate technological solutions.
EVIDENCE
Andhra Pradesh government has set up the Center for Applied Technology which has put out a call specifying the problems they want to solve for their frontline workers
MAJOR DISCUSSION POINT
AI Implementation Strategy and Framework in Public Healthcare
AGREED WITH
Dr. Rakesh Kalapala, Mr. Sanjay Seth
DISAGREED WITH
Shri Saurabh Jain
Argument 2
Need for use case libraries and evidence-based validation to demonstrate where AI solutions actually work in real-world settings
EXPLANATION
There is a critical need to build comprehensive documentation of where and how AI solutions have been successfully implemented in healthcare. This includes testing solutions on the ground to verify if they actually improve health outcomes, save costs, and work effectively with diverse populations including tribal and marginalized communities.
EVIDENCE
Questions raised about whether solutions work for tribal communities, the poorest woman, a tribal leader, or an ordinary person, and the need for evidence of actual impact on health outcomes and cost savings
MAJOR DISCUSSION POINT
Innovation Ecosystem and Scaling Solutions
Argument 3
Data quality and robust processes are prerequisites for effective AI implementation; most states lack adequate data infrastructure
EXPLANATION
For AI models to function effectively, they require high-quality data and well-established processes. However, most states currently lack the necessary data infrastructure and quality processes that would enable successful AI implementation in healthcare.
EVIDENCE
States need robust processes in place and improved data quality before AI models can work effectively on top of existing systems
MAJOR DISCUSSION POINT
Data Quality and Infrastructure Challenges
AGREED WITH
Shri Saurabh Jain, Ms. Saraswathi Padmanabhan
Argument 4
Work culture around evidence and data usage is the biggest challenge, more important than technology itself
EXPLANATION
The fundamental barrier to successful AI implementation is not technological but cultural: specifically, the work culture around how evidence is used and how data-driven decisions are made. Current practices, in which officers rely on dashboards to direct their teams, need to evolve so that AI can help everyone make better decisions.
EVIDENCE
Every officer feels they need to see a dashboard and tell their team what to do, but the work culture around evidence and data usage needs to change for AI to be effective
MAJOR DISCUSSION POINT
Data Quality and Infrastructure Challenges
Argument 5
People’s data should be owned by data cooperatives with reverse tokenization so communities benefit from their data
EXPLANATION
Instead of having individual or corporate ownership of health data, communities should collectively own their data through cooperative structures. A reverse tokenization system should be implemented where people are compensated for their data that feeds AI engines, ensuring that the communities providing the data also benefit from its use.
EVIDENCE
Example given of Nellore district where people should own their data through a data cooperative and receive reverse tokens as payment for their data contribution
MAJOR DISCUSSION POINT
Preventive Healthcare and Population-Scale Programs
DISAGREED WITH
Shri Saurabh Jain
Mr. Sanjay Seth
3 arguments · 150 words per minute · 1017 words · 404 seconds
Argument 1
AI should exist inside the delivery system, not as a layer on top, to effectively support program implementation
EXPLANATION
For AI to be truly effective in public health programs, it must be integrated within the existing delivery mechanisms rather than being added as an external layer. This integration allows AI to provide real-time support and actionable insights to program implementers rather than just reporting on past performance.
EVIDENCE
Current dashboards only tell officers what they have not done, but don’t tell them what they are supposed to do. AI can help identify where failures are likely to occur and inform the right people to take action
MAJOR DISCUSSION POINT
AI Implementation Strategy and Framework in Public Healthcare
AGREED WITH
Mr. Shiv Kumar, Dr. Rakesh Kalapala
Argument 2
AI can analyze implementation data across large-scale programs to predict failures and improve delivery outcomes
EXPLANATION
Large public health programs generate massive amounts of implementation data, and AI can analyze this data to identify patterns and predict where program failures are likely to occur. This predictive capability allows for proactive interventions rather than reactive responses after problems have already occurred.
EVIDENCE
In tobacco control program across 20,000 schools in Andhra Pradesh, AI helps analyze which schools are not doing required activities and provides immediate feedback. Image recognition with 98% accuracy determines if activities are done correctly, and 40,000 teachers receive personalized messages in their preferred language
MAJOR DISCUSSION POINT
Preventive Healthcare and Population-Scale Programs
Argument 3
Preventive programs require continuous activities and behavior change across populations, where AI can provide significant support
EXPLANATION
Preventive healthcare programs operate at massive scale and require sustained, repetitive activities to drive behavior change across diverse populations. AI is particularly well-suited to support these programs because it can handle the large number of variables involved in behavior change and provide continuous monitoring and support.
EVIDENCE
Preventive programs have the highest ROI but are not politically supported. They require enormous day-to-day discipline and operate across different cultures that respond differently to behavior change interventions. In Andhra Pradesh, 48,000 deaths occur annually due to tobacco usage
MAJOR DISCUSSION POINT
Preventive Healthcare and Population-Scale Programs
DISAGREED WITH
Dr. Rakesh Kalapala
Dr. Rakesh Kalapala
4 arguments · 193 words per minute · 1035 words · 320 seconds
Argument 1
Need for public-private integration where private sector validates solutions before transferring to public systems
EXPLANATION
The private healthcare sector, with its early adoption capabilities and robust infrastructure, should serve as a testing ground for AI and medical technology solutions. Once these solutions are validated and proven effective in private settings, they can then be transferred and scaled up in public healthcare systems.
EVIDENCE
AIM Foundation works with government of Andhra Pradesh and other governments, forming a platform with IIT Hyderabad, Indian School of Business, and IIT Delhi to handhold startups, validate solutions clinically, and then transfer successful products like Journey Mitra to government systems
MAJOR DISCUSSION POINT
AI Implementation Strategy and Framework in Public Healthcare
AGREED WITH
Ms. Saraswathi Padmanabhan, Shri Saurabh Jain
Argument 2
AI-enabled systems can reduce costs significantly; for example, a fatty liver detection algorithm costs 500 rupees versus 5,000 rupees for the traditional method
EXPLANATION
AI can dramatically reduce healthcare costs by providing more efficient diagnostic methods. The development of need-based AI innovations can replace expensive traditional diagnostic equipment and procedures with much more affordable alternatives while maintaining or improving accuracy.
EVIDENCE
An AI algorithm for fatty liver detection costs 500 rupees per test, compared to a traditional machine costing 1.2 crore rupees and charging 5,000 rupees per test. An AI-enabled discharge summary system reduces patient waiting time from 8-10 hours to 30 minutes at most
MAJOR DISCUSSION POINT
Cost Reduction and Efficiency in Healthcare Delivery
AGREED WITH
Shri Saurabh Jain, Ms. Saraswathi Padmanabhan
Argument 3
AI can help with early diagnosis, intelligent triage, and reduce administrative burden on healthcare workers
EXPLANATION
AI systems can support healthcare delivery by enabling faster and more accurate early diagnosis, helping to prioritize patients based on urgency and medical need, and automating administrative tasks that currently consume significant time for healthcare workers. This allows medical professionals to focus more on clinical care rather than paperwork.
EVIDENCE
AI can handle patient bed management, electronic medical records, and discharge summaries. In a country of 1.4 billion people, hospitals face patient volumes at a scale that even robotic systems cannot match, requiring need-based innovations
MAJOR DISCUSSION POINT
Clinical Decision Support and Care Coordination
AGREED WITH
Mr. Shiv Kumar, Mr. Sanjay Seth
Argument 4
Private sector can serve as early adopters and validators before solutions are scaled to public systems
EXPLANATION
Private healthcare facilities have the advantage of earlier adoption of new technologies compared to public systems. They can serve as proving grounds for AI solutions, validating their effectiveness and building confidence before these solutions are implemented in public healthcare systems.
EVIDENCE
The private sector has an early adoption advantage and can validate solutions at the clinical level before transferring them to public systems. Western AI algorithms cannot be directly adopted and need to be developed specifically for Indian conditions using local population data and volumes
MAJOR DISCUSSION POINT
Innovation Ecosystem and Scaling Solutions
Ms. Saraswathi Padmanabhan
4 arguments · 171 words per minute · 1528 words · 533 seconds
Argument 1
AI should assist medical officers with longitudinal patient data and clinical decision support while keeping doctors as final decision makers
EXPLANATION
AI systems should be designed to support healthcare providers by organizing and presenting comprehensive patient history and data in a structured manner, helping doctors make more informed decisions. However, the final clinical decisions should always remain with the medical professionals, with AI serving purely in an assistive capacity.
EVIDENCE
AI can help doctors understand patient trends like HbA1c levels over time, medication effectiveness, and prompt for required investigations like foot examinations for diabetic patients. AI provides evidence-based treatment guidelines but the doctor remains the decision maker
MAJOR DISCUSSION POINT
Clinical Decision Support and Care Coordination
AGREED WITH
Shri Saurabh Jain
Argument 2
AI can help prioritize high-risk cases and optimize resource allocation for frontline workers like ASHA workers
EXPLANATION
Frontline healthcare workers often manage large caseloads and need support in prioritizing their activities. AI can analyze patient data to identify high-risk cases that require immediate attention and help workers schedule and organize their tasks more effectively.
EVIDENCE
AI can help an ASHA worker managing 50 pregnant mothers to prioritize high-risk mothers who need immediate attention, and assist in better task scheduling and management
MAJOR DISCUSSION POINT
Cost Reduction and Efficiency in Healthcare Delivery
AGREED WITH
Dr. Rakesh Kalapala, Shri Saurabh Jain
Argument 3
Integration challenges include workflow adoption, change management, and ensuring healthcare workers see value in new systems
EXPLANATION
The successful implementation of AI in healthcare faces significant challenges related to integrating new technologies into existing workflows, managing the change process, and ensuring that healthcare workers perceive tangible benefits from adopting new systems. Without addressing these human and organizational factors, technology adoption will fail.
EVIDENCE
Three key challenges identified: integration in workflow so users find value, change management to handle adoption resistance with early adopters helping train models, and infrastructure issues like connectivity and power availability
MAJOR DISCUSSION POINT
Data Quality and Infrastructure Challenges
AGREED WITH
Mr. Shiv Kumar, Shri Saurabh Jain
Argument 4
Building trust in public primary healthcare system is crucial for reducing burden on tertiary care facilities
EXPLANATION
Strengthening the quality of care at the primary healthcare level and building public trust in these services is essential for creating an effective healthcare system. When people trust and utilize primary care services, it reduces the overwhelming burden on secondary and tertiary care facilities.
MAJOR DISCUSSION POINT
Innovation Ecosystem and Scaling Solutions
AGREED WITH
Dr. Rakesh Kalapala, Shri Saurabh Jain
Audience
2 arguments · 172 words per minute · 302 words · 105 seconds
Argument 1
Mental health AI applications face additional challenges around safety, security, and sensitivity requirements
EXPLANATION
Mental health applications of AI require special consideration for safety, security, and sensitivity issues that may not be as critical in other healthcare applications. The audience member noted that mental health has not been adequately addressed in AI healthcare discussions despite these unique requirements.
EVIDENCE
Unlike medical imaging and radiology, where development is already underway, mental health AI lacks robust systems for data collection and uniform approaches
MAJOR DISCUSSION POINT
Mental Health and Specialized Applications
Argument 2
Voice and video recording analysis for detecting suicidal ideations and depression shows promise but needs more development
EXPLANATION
AI applications using audio and video analysis for mental health assessment, particularly for detecting suicidal ideations and depression, represent a promising but underdeveloped area. Current work focuses on voice recording analysis and mental status examinations rather than traditional medical imaging approaches.
EVIDENCE
Work being done at AIIMS Bhopal on audio recording analysis for detection of suicidal ideations, depression, and anxiety based on voice recordings and mental status examinations
MAJOR DISCUSSION POINT
Mental Health and Specialized Applications
Shri Saurabh Gaur
1 argument · 164 words per minute · 2087 words · 761 seconds
Argument 1
QPR (Question, Persuade, Refer) methodology can be used to identify vulnerable populations like students under examination pressure
EXPLANATION
The QPR methodology provides a systematic approach to identifying and supporting individuals at risk of mental health crises, particularly students facing high-pressure examination situations. This approach can help identify which students need special attention and support among large populations.
EVIDENCE
In Andhra Pradesh, out of 10 lakh students appearing for examinations, approximately 15% are estimated to need special focus. The state works with Suicide Prevention Foundation of India (SPFI) to implement QPR methodology for 11th and 12th grade students facing examination pressure
MAJOR DISCUSSION POINT
Mental Health and Specialized Applications
Agreements
Agreement Points
AI should assist healthcare providers while keeping doctors as final decision makers
Speakers: Ms. Saraswathi Padmanabhan, Shri Saurabh Jain
AI should assist medical officers with longitudinal patient data and clinical decision support while keeping doctors as final decision makers
AI can reduce out-of-pocket expenditure and build public trust in healthcare systems, supporting universal health coverage goals
Both speakers agree that AI should serve in a supportive role to enhance healthcare delivery and build trust in public systems, but medical professionals should retain ultimate decision-making authority
Data quality and infrastructure are fundamental prerequisites for effective AI implementation
Speakers: Mr. Shiv Kumar, Shri Saurabh Jain, Ms. Saraswathi Padmanabhan
Data quality and robust processes are prerequisites for effective AI implementation; most states lack adequate data infrastructure
Representative data from all regions is essential for training AI algorithms due to varying disease and demographic profiles
Integration challenges include workflow adoption, change management, and ensuring healthcare workers see value in new systems
All three speakers emphasize that without proper data infrastructure, quality processes, and representative datasets, AI systems cannot function effectively in healthcare settings
Public-private partnerships are essential for scaling AI innovations in healthcare
Speakers: Dr. Rakesh Kalapala, Ms. Saraswathi Padmanabhan, Shri Saurabh Jain
Need for public-private integration where private sector validates solutions before transferring to public systems
Building trust in public primary healthcare system is crucial for reducing burden on tertiary care facilities
ICMR is developing sandbox mechanisms for startups to test innovations before scaling up
Speakers agree that collaboration between private and public sectors is crucial, with private sector serving as testing grounds for innovations that can then be scaled up in public systems
AI can significantly reduce healthcare costs and improve efficiency
Speakers: Dr. Rakesh Kalapala, Shri Saurabh Jain, Ms. Saraswathi Padmanabhan
AI-enabled systems can reduce costs significantly; for example, a fatty liver detection algorithm costs 500 rupees versus 5,000 rupees for the traditional method
AI can reduce out-of-pocket expenditure and build public trust in healthcare systems, supporting universal health coverage goals
AI can help prioritize high-risk cases and optimize resource allocation for frontline workers like ASHA workers
All speakers agree that AI implementation can dramatically reduce healthcare costs while improving service delivery and efficiency
Need-based innovation approach is crucial for successful AI implementation
Speakers: Mr. Shiv Kumar, Dr. Rakesh Kalapala, Mr. Sanjay Seth
Solutions are currently looking for problems rather than the reverse; states need to set clear agendas and priorities for AI implementation
AI can help with early diagnosis, intelligent triage, and reduce administrative burden on healthcare workers
AI should exist inside the delivery system, not as a layer on top, to effectively support program implementation
Speakers agree that AI solutions should be developed to address specific, identified healthcare problems rather than creating solutions and then looking for applications
Similar Viewpoints
Both speakers emphasize that the human and organizational factors – work culture, change management, and user adoption – are more critical challenges than the technology itself
Speakers: Mr. Shiv Kumar, Ms. Saraswathi Padmanabhan
Work culture around evidence and data usage is the biggest challenge, more important than technology itself
Integration challenges include workflow adoption, change management, and ensuring healthcare workers see value in new systems
Both speakers see AI’s value in analyzing large datasets to optimize resource allocation and improve program implementation at scale
Speakers: Mr. Sanjay Seth, Ms. Saraswathi Padmanabhan
AI can analyze implementation data across large-scale programs to predict failures and improve delivery outcomes
AI can help prioritize high-risk cases and optimize resource allocation for frontline workers like ASHA workers
Both speakers identify medical imaging and diagnostics as the most promising and impactful area for AI implementation in healthcare
Speakers: Dr. Rakesh Kalapala, Shri Saurabh Jain
Diagnostics, particularly medical imaging for TB and diabetic retinopathy, will play a crucial role in AI adoption
Unexpected Consensus
Data ownership and community benefit from AI systems
Speakers: Mr. Shiv Kumar
People’s data should be owned by data cooperatives with reverse tokenization so communities benefit from their data
This represents an unexpected and progressive stance on data governance, suggesting that communities should collectively own and be compensated for their health data that feeds AI systems, which goes beyond typical discussions of data privacy to data economics
Preventive healthcare as highest ROI despite lack of political support
Speakers: Mr. Sanjay Seth
Preventive programs require continuous activities and behavior change across populations, where AI can provide significant support
The frank acknowledgment that preventive healthcare, while having the highest return on investment, lacks political support because it’s ‘not glamorous’ represents an unexpected candid assessment of healthcare policy priorities
Mental health AI applications require special considerations
Speakers: Audience, Shri Saurabh Gaur
Mental health AI applications face additional challenges around safety, security, and sensitivity requirements
QPR (Question, Persuade, Refer) methodology can be used to identify vulnerable populations like students under examination pressure
The recognition that mental health applications of AI require fundamentally different approaches due to privacy, safety, and sensitivity concerns represents an important but often overlooked aspect of healthcare AI implementation
Overall Assessment

The speakers demonstrated strong consensus on the fundamental principles of AI implementation in healthcare: the need for AI to assist rather than replace healthcare providers, the critical importance of data quality and infrastructure, the value of public-private partnerships, and the potential for significant cost reduction and efficiency gains. There was also agreement on the need for evidence-based, problem-driven innovation rather than technology-driven solutions.

High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers represented different sectors (government, private healthcare, social impact, technology) but shared similar visions for responsible AI implementation in healthcare. This consensus suggests strong potential for collaborative implementation of AI strategies in public health systems, though speakers also acknowledged significant challenges around data infrastructure, work culture change, and ensuring equitable access to AI benefits.

Differences
Different Viewpoints
Approach to AI implementation – top-down vs. bottom-up
Speakers: Shri Saurabh Jain, Mr. Shiv Kumar
Government has launched SAHI (Strategy for Artificial Intelligence in Public Health) to address specialist shortages and improve healthcare delivery in rural areas
Solutions are currently looking for problems rather than the reverse; states need to set clear agendas and priorities for AI implementation
The government representative emphasizes centralized strategy implementation, while the innovation expert argues for a problem-first approach in which states define priorities before seeking solutions
Primary focus area for maximum impact
Speakers: Dr. Rakesh Kalapala, Mr. Sanjay Seth
Diagnostics, particularly medical imaging for TB and diabetic retinopathy, will play a crucial role in AI adoption
Preventive programs require continuous activities and behavior change across populations, where AI can provide significant support
The private healthcare representative prioritizes diagnostic applications, while the public health expert emphasizes preventive healthcare as having the highest ROI
Data ownership and monetization models
Speakers: Mr. Shiv Kumar, Shri Saurabh Jain
People’s data should be owned by data cooperatives with reverse tokenization so communities benefit from their data
Representative data from all regions is essential for training AI algorithms due to varying disease and demographic profiles
The innovation expert advocates for community data ownership with compensation, while the government representative focuses on data collection for algorithm training without addressing ownership
Unexpected Differences
Role of technology versus human factors
Speakers: Shri Saurabh Jain, Mr. Shiv Kumar
Diagnostics, particularly medical imaging for TB and diabetic retinopathy, will play a crucial role in AI adoption
Work culture around evidence and data usage is the biggest challenge, more important than technology itself
Unexpected that the government representative focused heavily on technical solutions while the innovation expert argued that cultural change is more critical than technology; typically one might expect the opposite perspectives
Mental health AI applications receiving limited attention
Speakers: Audience, All panelists
Mental health AI applications face additional challenges around safety, security, and sensitivity requirements. No specific arguments from the panelists addressed mental health AI.
Surprising that despite a comprehensive discussion of AI in healthcare, mental health applications were largely overlooked by all panelists until the audience raised the issue
Overall Assessment

Main disagreements centered on implementation approaches (centralized vs. problem-first), priority focus areas (diagnostics vs. preventive care), and data governance models (community ownership vs. centralized collection)

Moderate disagreement level with constructive differences in perspective rather than fundamental conflicts. Disagreements reflect different stakeholder priorities and experiences but show potential for synthesis and collaboration in AI healthcare implementation

Partial Agreements
Both agree AI must be integrated into existing workflows rather than added as an external layer, but differ in emphasis: one focuses on change management while the other emphasizes predictive capabilities
Speakers: Ms. Saraswathi Padmanabhan, Mr. Sanjay Seth
Integration challenges include workflow adoption, change management, and ensuring healthcare workers see value in new systems
AI should exist inside the delivery system, not as a layer on top, to effectively support program implementation
Both agree on the need for validation and evidence, but differ in approach: the private sector representative emphasizes private-to-public transfer while the innovation expert focuses on ground-level testing across diverse populations
Speakers: Dr. Rakesh Kalapala, Mr. Shiv Kumar
Need for public-private integration where private sector validates solutions before transferring to public systems
Need for use case libraries and evidence-based validation to demonstrate where AI solutions actually work in real-world settings
Both recognize the need for testing and validation mechanisms, but differ on the primary barriers: the government focuses on a technical sandbox while the expert emphasizes cultural change
Speakers: Shri Saurabh Jain, Mr. Shiv Kumar
ICMR is developing sandbox mechanisms for startups to test innovations before scaling up
Work culture around evidence and data usage is the biggest challenge, more important than technology itself
Takeaways
Key takeaways
AI in public healthcare should be implemented as an integrated part of delivery systems rather than as an additional layer, with clear problem statements driving solution development
The SAHI (Strategy for Artificial Intelligence in Public Health) framework provides a national approach to AI adoption, focusing on addressing specialist shortages and improving rural healthcare access
Data quality and robust digital infrastructure are fundamental prerequisites for successful AI implementation, with many states currently lacking adequate data systems
Public-private partnerships can accelerate AI adoption by allowing private sector early validation before scaling to public systems
Preventive healthcare programs offer the highest return on investment and are ideal candidates for AI support due to their scale and predictable implementation patterns
AI’s primary value lies in clinical decision support, early diagnosis, intelligent triage, and reducing administrative burden while keeping healthcare professionals as final decision makers
Work culture and change management are more critical challenges than technology itself for successful AI implementation
Diagnostics, particularly medical imaging for conditions like TB and diabetic retinopathy, represent the most immediate and impactful AI applications
Resolutions and action items
ICMR is developing sandbox mechanisms for startups to test AI innovations before scaling
Andhra Pradesh will establish a biodesign lab in collaboration with AIM Foundation and other institutes
Government of India will work with states as partners to develop robust AI systems through collaborative approaches
States need to create clear problem statements and priorities for AI implementation through bodies like Andhra Pradesh’s Center for Applied Technology
Development of use case libraries and evidence-based validation systems to demonstrate real-world AI effectiveness
Integration of AI solutions into existing healthcare worker workflows to ensure adoption and value realization
Unresolved issues
Mental health AI applications face significant challenges around safety, security, and sensitivity that lack comprehensive solutions
Digital literacy and adoption challenges among healthcare workers managing multiple programs and applications
Lack of uniform data collection systems across states, particularly for specialized areas like mental health
Funding challenges for MedTech startups with limited VC investment requiring government support mechanisms
Scaling validated solutions across different states without repeating pilot processes
Establishing data ownership models and reverse tokenization systems for community benefit
Addressing connectivity and infrastructure gaps in rural areas for technology deployment
Suggested compromises
AI should assist rather than replace healthcare professionals, with doctors maintaining final decision-making authority
Phased implementation approach using early adopters to train models before broader deployment to resistant users
Public-private integration model where private sector validates solutions before public system adoption
Focus on workflow integration rather than standalone AI applications to ensure user acceptance
Incentive structures that benefit both healthcare workers personally and systemically to encourage adoption
Representative data collection across regions and demographics to ensure AI algorithms work for diverse populations
Neutral platforms for innovation testing that involve multiple stakeholders including government, private sector, and academic institutions
Thought Provoking Comments
Currently solutions are looking for problems not the other way around. Therefore it is important to marry what is the problem which is important for the state and then bring the solution together.
This comment fundamentally reframes the AI innovation discourse by highlighting a critical mismatch between technology development and actual healthcare needs. It challenges the typical tech-driven approach and emphasizes problem-first thinking.
This observation became a recurring theme throughout the discussion, with the moderator specifically referencing it when introducing TataMD’s work with Andhra Pradesh. It shifted the conversation from showcasing AI capabilities to focusing on identifying and solving real healthcare problems at scale.
Speaker: Mr. Shiv Kumar
AI has to exist inside the delivery system, not on top of it… dashboards only tell me what I have not done. They don’t tell me what I am supposed to do.
This insight cuts through the typical AI hype by identifying a fundamental flaw in current implementations – that AI systems often add complexity rather than integrate seamlessly into existing workflows. The quote about dashboards captures a real frustration of healthcare administrators.
This comment influenced subsequent speakers to focus on workflow integration and practical implementation challenges. It led to deeper discussions about change management and the importance of making AI valuable to end users rather than just generating more data.
Speaker: Mr. Sanjay Seth
Our single biggest problem is going to be work culture… every officer feels that they need to see a dashboard and tell their team what to do. I think if we have to really make AI help everybody decide, I think the work culture around evidence, the work culture around data is going to be the biggest one.
This comment provocatively shifts blame from technology limitations to organizational culture, suggesting that the real barrier to AI adoption isn’t technical but cultural. It challenges the assumption that better technology automatically leads to better outcomes.
This observation prompted the final speaker (Saurabh Jain) to directly address work culture issues, acknowledging that healthcare workers need to see predictable, reliable outcomes before they’ll adopt AI systems. It elevated the discussion from technical implementation to organizational transformation.
Speaker: Mr. Shiv Kumar
The biggest innovation should be people’s data should be owned by data cooperatives… and we should have reverse tokens where people pay for their data… When we reverse that, sir, and when we reverse the incentives and the work culture of use of data, I think automatically you will find people coming and telling you this is how I am using it.
This is a radical reimagining of data ownership and monetization in healthcare AI, proposing that communities should benefit financially from their data rather than just tech companies. It introduces concepts of data sovereignty and community ownership that are rarely discussed in healthcare AI contexts.
While this comment came near the end, it represented the most innovative thinking in the entire discussion, suggesting a completely different economic model for healthcare AI that could address both adoption and equity issues simultaneously.
Speaker: Mr. Shiv Kumar
It’s less of technology management and more of change management… If there are incentives for them to adopt both from the state side and from their personal side, the adoption tends to be easy.
This insight reframes AI implementation as fundamentally a human challenge rather than a technical one, emphasizing that successful adoption depends more on managing people and incentives than on perfecting algorithms.
This comment validated and built upon earlier observations about workflow integration and work culture, creating a consensus among panelists that the human factors are more critical than the technical factors for successful AI deployment in public health.
Speaker: Ms. Saraswathi Padmanabhan
Overall Assessment

These key comments fundamentally shifted the discussion from a typical ‘AI showcase’ format to a more nuanced examination of implementation realities. The conversation evolved from highlighting AI capabilities to identifying systemic barriers, with speakers building on each other’s insights about the primacy of human factors over technical factors. Mr. Shiv Kumar’s observations particularly served as inflection points, challenging conventional wisdom and pushing the discussion toward more innovative thinking about data ownership, work culture, and problem-first approaches. The cumulative effect was a discussion that moved beyond surface-level AI applications to address deeper questions about organizational change, community benefit, and sustainable implementation in resource-constrained public health systems.

Follow-up Questions
How can AI solutions be validated and tested across different demographic and geographic settings to ensure representative data quality?
This is critical because AI algorithms depend on quality training data that represents diverse disease profiles and demographic characteristics across different regions of India
Speaker: Shri Saurabh Jain
How can we build comprehensive use case libraries that demonstrate where AI has actually worked in real-world settings, particularly with tribal communities and marginalized populations?
There’s a need for evidence-based documentation of successful AI implementations rather than theoretical claims, especially for vulnerable populations
Speaker: Mr. Shiv Kumar
What specific mechanisms can be developed to reduce the administrative burden on healthcare workers who currently spend excessive time on data entry across multiple portals?
Healthcare workers are overwhelmed with administrative tasks that take time away from clinical duties, and AI could help automate data population across systems
Speaker: Shri Saurabh Jain
How can data cooperatives be established where citizens own their health data and receive compensation when it’s used to train AI systems?
This addresses the ethical and economic question of who benefits when personal health data is used to develop commercial AI solutions
Speaker: Mr. Shiv Kumar
What are the specific technical challenges in implementing AI-powered ambient listening and voice recognition in noisy, multilingual rural healthcare settings?
Rural PHCs have challenging acoustic environments with multiple dialects and languages, making voice-based AI systems technically complex to implement
Speaker: Ms. Saraswathi Padmanabhan
How can AI be integrated into existing healthcare workflows without being perceived as an additional burden by healthcare workers?
Successful adoption requires seamless integration where healthcare workers see immediate value rather than additional work
Speaker: Ms. Saraswathi Padmanabhan
What specific incentive structures need to be developed to encourage adoption of AI systems by healthcare workers at different levels?
Understanding what motivates different stakeholders to adopt new technology is crucial for successful implementation
Speaker: Ms. Saraswathi Padmanabhan
How can AI solutions for mental health be developed and implemented given the additional requirements for safety, security, and sensitivity?
Mental health AI applications require special consideration for privacy and ethical concerns, and there’s limited development in this area in India
Speaker: Audience member from AIIMS Bhopal
Can a centralized platform be created at the national level where validated AI solutions can be shared across states to avoid repetitive piloting?
This would help scale successful innovations more efficiently and reduce the burden on startups to pilot the same solutions in multiple states
Speaker: Startup audience member
How can work culture and incentive structures in government healthcare systems be modified to support evidence-based decision making using AI?
Technology alone isn’t sufficient; organizational culture and incentives need to change to support data-driven decision making
Speaker: Mr. Shiv Kumar
What specific methodologies can be developed for voice and video-based detection of mental health conditions like suicidal ideation and depression in Indian contexts?
Unlike medical imaging, mental health assessment relies on audio/video analysis which requires different AI approaches and validation methods
Speaker: Audience member from AIIMS Bhopal

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Internet Inclusive AI Unlocking Innovation for All

Session at a glance: summary, key points, and speakers overview

Summary

This discussion focused on democratizing artificial intelligence technology and reducing the dominance of a few major companies concentrated in Silicon Valley. Matthew Prince, CEO of Cloudflare, argued that AI’s current high costs stem from expensive NVIDIA chips and scarce specialized talent, but predicted these barriers will diminish as more people enter the field and chip competition increases. He forecast that frontier AI models could be built for $10 million or less within five years, making the technology more accessible globally.


Rajan Anandan from Peak XV Partners highlighted India’s approach to AI development, emphasizing that the country doesn’t need to compete in building AGI but rather should focus on creating efficient, low-cost models serving India’s 1.4 billion population. He pointed to successful Indian companies like Sarvam that have developed competitive models in local languages at significantly lower costs than global alternatives. Anandan noted that India has launched 12 large language model initiatives and is building a sovereign AI stack spanning chips, compute infrastructure, and applications.


The conversation addressed the importance of open-weight AI models for democratization, with Prince suggesting that claims about AI dangers may be strategically motivated to maintain competitive advantages through regulatory capture. Both speakers emphasized that innovation often comes from resource-constrained environments, citing DeepSeek’s breakthrough in efficient AI processing as an example. They discussed the need for new internet business models as AI disrupts traditional content monetization, drawing parallels to how the music industry transformed after digital disruption. The discussion concluded with optimism about India’s potential to lead in consumer AI applications and the broader democratization of AI technology globally.


Keypoints

Major Discussion Points:

Democratization of AI and breaking the monopoly of a few companies: The conversation centers on Matthew Prince’s vision that AI technology shouldn’t be controlled by “a handful of companies in the same postal code.” Both speakers discuss how constraints and resource limitations can actually drive innovation, citing examples like DeepSeek’s efficient pruning algorithms and India’s approach to building specialized, cost-effective models rather than pursuing AGI.


Open source vs. closed AI models and the economics behind the shift: The discussion explores how the massive investments (hundreds of billions to trillions of dollars) in AI development are driving companies away from open-source models toward closed, proprietary systems. The speakers debate whether AI safety concerns are genuine or strategic moves for regulatory capture to maintain competitive advantages.


India’s unique AI strategy and competitive positioning: Rajan Anandan outlines India’s approach of focusing on highly performant, low-cost models (30-200 billion parameters) rather than competing in the trillion-parameter AGI race. He emphasizes India’s strengths in building sovereign AI capabilities across the entire stack, from chips to applications, with particular success in voice AI and local language models.


The transformation of internet business models due to AI: Matthew Prince discusses how AI is disrupting the traditional internet economy of “create content, drive traffic, sell ads/subscriptions.” He proposes new compensation models for content creators, drawing parallels to how the music industry transformed from an $8 billion industry to one where Spotify alone pays $12 billion annually to musicians.


AI security, trustworthiness, and data sovereignty: The conversation addresses cybersecurity challenges posed by AI-powered attacks while arguing that AI will ultimately make systems more secure. They also discuss the importance of data sovereignty for countries like India and the need for fair compensation models for content creators whose data trains AI systems.


Overall Purpose:

The discussion aims to explore pathways for democratizing AI technology and reducing dependence on a small number of dominant companies, with particular focus on how countries like India can build competitive AI capabilities through innovative approaches, resource constraints, and strategic positioning in the global AI landscape.


Overall Tone:

The tone is optimistic and forward-looking throughout, with both speakers expressing confidence in alternative approaches to AI development. While they acknowledge challenges and potential risks, the conversation maintains an entrepreneurial and solution-oriented perspective. The speakers demonstrate mutual respect and build upon each other’s points, creating a collaborative rather than confrontational dynamic. The tone becomes particularly enthusiastic when discussing specific examples of innovation and success stories from India’s AI ecosystem.


Speakers

Speakers from the provided list:


Announcer: Event host/moderator introducing the speakers and session


Rajan Anandan: Managing Director of Peak XV Partners (formerly Sequoia), founder of Sequoia Capital India and Southeast Asia, technology leader and investor focusing on transformative technology-led businesses in India’s startup and digital ecosystem


Matthew Prince: Co-founder and CEO of Cloudflare, World Economic Forum Technology Pioneer, Council on Foreign Relations member, co-creator of Project Honeypot (largest community tracking online fraud and abuse), degrees from Harvard, Chicago, and Trinity College


Rahul Matthan: Board member and partner in Trilegal’s Bangalore office, leads their technology, media, and telecom practice, extensive experience in high-value TMT transactions and regulatory matters for telecom, internet and data service providers


Audience: Multiple audience members asking questions during the Q&A session


Additional speakers:


None – all speakers in the transcript were included in the provided speakers names list.


Full session report: comprehensive analysis and detailed insights

This comprehensive discussion between Matthew Prince (CEO of Cloudflare), Rajan Anandan (Managing Director of Peak XV Partners), and moderator Rahul Matthan explored the critical challenge of democratising artificial intelligence technology and breaking the current concentration of AI capabilities amongst a handful of companies in Silicon Valley. The conversation provided both strategic insights and practical examples of how alternative approaches to AI development are already succeeding, particularly in India.


The Current State of AI Concentration and Barriers

Matthew Prince opened by diagnosing why AI remains expensive and concentrated today. He identified two primary barriers: the dominance of NVIDIA chips that were originally designed for gaming rather than AI applications, and the scarcity of specialised AI talent. Prince noted the irony that NVIDIA chips evolved from powering gaming consoles to mining Bitcoin and then to AI applications, arguing that purpose-built AI chips would be designed quite differently. The talent shortage stems from AI’s historical reputation as a field of unfulfilled promises across previous decades, which led to reduced investment in AI education until the recent breakthrough.


However, Prince presented compelling evidence that these barriers are already eroding. Enrolment in computer science programmes worldwide has grown dramatically in just two years, with unprecedented demand for AI theory courses in particular. Universities that previously had limited AI programmes are now rapidly expanding them. This educational shift, combined with the inevitable competition that follows NVIDIA’s transformation from a gaming company to one of the world’s most valuable companies, suggests that both talent and chip costs will decrease significantly.


Prince made a bold prediction: “In five years, you’ll be able to build a frontier-like model within a specialty for $10 million or less.” This represents a dramatic reduction from current costs and would fundamentally alter the AI landscape by making advanced capabilities accessible to a much broader range of organisations and countries.


India’s Strategic Approach to AI Development

Rajan Anandan provided a compelling counter-narrative to the prevailing assumption that countries must compete directly with trillion-parameter models to remain relevant in AI. He argued that “AGI is not the thing that we need” for India, emphasising instead the goal of uplifting 1.4 billion Indians through highly performant, extremely low-cost models.


Anandan presented concrete evidence of India’s success with this approach, highlighting multiple companies achieving breakthrough results. The scope of India’s AI initiative is substantial: twelve large language model projects comprising eleven companies plus Bharat GPT from IIT Bombay, with expectations that this number will grow to fifteen or twenty very quickly. Importantly, he clarified that despite media characterisations, these are genuinely large language models—anything above 30 billion parameters qualifies as such, putting India firmly in the LLM race.


In voice AI specifically, Indian companies have achieved superior performance in both speech-to-text and text-to-speech whilst significantly undercutting global leaders on cost. Current human voice services in India cost 5-20 rupees per minute, whilst AI voice services have reached 3 rupees per minute and could potentially reach 1 rupee per minute with current technology. However, to serve India’s full population, costs must decrease further to 5-10 paisa per minute.


India’s strategy encompasses the entire technology stack. At the semiconductor level, despite having no semiconductor startups four years ago, India now hosts 35-40 such companies spanning from low-power chips to GPUs and memory solutions. Anandan announced recent investments in Agrani (a GPU company) and C2I (focused on memory), illustrating the breadth of India’s sovereign technology development.


The Economics and Politics of Open Source AI

The discussion revealed sophisticated understanding of the tensions surrounding open-source AI development. Prince offered a provocative economic explanation for the shift away from open models, suggesting that companies investing hundreds of billions of dollars have strong incentives to restrict competition. He argued that AI safety rhetoric may serve as a form of regulatory capture, noting that he had never seen another industry advocate so strongly for its own regulation.


This analysis suggests that doomsday scenarios about AI risks may be strategically motivated to justify regulations that favour incumbent players. Prince advocated for treating AI systems more like humans than machines, recommending criminal codes rather than engineering standards for regulation.


Anandan acknowledged the economic reality that makes open-source challenging: “if you invest a trillion dollars, you can’t give it away for free. It’s as simple as that.” However, he hinted at significant developments, mentioning upcoming announcements from major companies regarding their commitment to open source.


The speakers agreed that open-source approaches remain critical for ecosystem development, but recognised that alternative pathways must emerge. The current investment levels make traditional open-source models economically challenging for frontier development, necessitating new approaches to maintaining accessibility whilst enabling continued innovation.


Innovation Through Constraint: Learning from DeepSeek

A central theme throughout the discussion was how resource constraints can drive superior innovation. Prince highlighted DeepSeek’s breakthrough as a perfect example, explaining that the Chinese company developed significant efficiency improvements precisely because they lacked access to unlimited computing resources. DeepSeek’s innovations in model efficiency allowed them to deliver AI capabilities much more cost-effectively than resource-rich competitors.


Prince argued that well-funded US AI companies may be “blinded” to efficiency innovations because they can simply purchase more computing power rather than optimising their approaches. This dynamic suggests that companies operating under constraints may ultimately develop more sustainable and scalable AI solutions. Prince expressed that he wished “DeepSeek had been an Indian company, not a Chinese company,” seeing similar potential for constraint-driven innovation in India’s AI ecosystem.


This analysis validates India’s approach of building specialised, efficient models rather than attempting to match the massive investments of US companies. The constraint-driven innovation thesis suggests that India’s resource limitations may actually prove advantageous in developing more efficient AI architectures and applications.


The Transformation of Internet Business Models

Prince provided detailed analysis of how AI is fundamentally disrupting the traditional internet economy. The historical model—create content, drive traffic, sell subscriptions or advertisements—is breaking down as AI systems consume content without driving traffic back to creators. He presented stark statistics illustrating this shift: ten years ago, Google sent one unique visitor for every two pages scraped; today, the ratios are dramatically different, with some AI companies extracting hundreds of thousands of pages for every visitor they send back.


This disruption threatens the fundamental economics of content creation, as creators lose the traffic necessary to monetise their work through traditional means. However, Prince drew an optimistic parallel to the music industry’s transformation. The music industry, once devastated by piracy, ultimately recovered through new business models, with platforms like Spotify now paying billions annually to musicians.


Prince argued that a similar transformation must occur for internet content, with new business models emerging that compensate creators based on quality rather than traffic. This shift could actually improve societal outcomes by moving away from engagement-driven content towards content that genuinely contributes to human knowledge.


The practical implementation requires creating scarcity around content access. Prince provided evidence that blocking AI crawlers works, citing successful negotiations between publishers and AI companies. Companies that have blocked AI access have secured more favourable licensing deals than those that allowed free access.


AI Security: Balanced Perspectives on Risks and Benefits

The security implications of AI development received nuanced treatment, with Prince acknowledging both immediate risks and ultimate benefits. In the short term, AI will enable more sophisticated attacks, including highly convincing phishing scams and more effective exploitation of security vulnerabilities. Prince described scenarios where AI-generated content could be used for fraud and deception.


However, Prince argued that the long-term security outlook is positive because “the good guys will always have more data than the bad guys do.” Security companies are incorporating AI into their defence systems, with some already identifying novel threats that no human has previously recognised.


The key adaptation required is moving away from appearance-based authentication. Prince recommended practical measures like establishing family passwords to protect against AI-generated impersonation attacks. More broadly, businesses must abandon verification methods that rely on how someone looks or sounds.


Prince’s prediction that “in 10 years, we are more secure online than we are today” reflects confidence that defensive AI applications will ultimately outpace offensive uses, provided that regulatory frameworks don’t prevent security companies from effectively deploying AI technologies.


Consumer AI Applications and Market Opportunities

Anandan provided an optimistic assessment of India’s position in consumer AI applications, revealing that “India today has more consumer AI startups than the US.” This advantage stems from India’s massive user base—900 million internet users—combined with cost constraints that drive innovation.


The consumer AI opportunity spans multiple sectors where AI can dramatically reduce costs and increase accessibility. In education, AI services can offer comprehensive support at extremely low costs, making quality education accessible to populations previously excluded by price. Voice AI represents a particularly promising application area, with the potential to serve populations that may not be comfortable with text-based interfaces.


The consumer AI opportunity extends beyond cost reduction to fundamental accessibility improvements. Achieving scale in India requires image and video interfaces, highly localised language support, and ultra-low costs—areas where Indian companies have natural advantages due to their deep understanding of local markets and constraints.


Data Sovereignty and Strategic Advantages

The discussion addressed critical questions about data ownership and competitive advantages in the AI era. Anandan noted that while India generates vast amounts of data, the country must be more strategic about how this data is collected, processed, and monetised.


However, Anandan highlighted positive examples of Indian companies leveraging proprietary data effectively. Companies with domain-specific data advantages are building specialised AI models that compete effectively in both domestic and international markets, demonstrating how data sovereignty can create competitive moats.


The broader challenge involves establishing frameworks that prevent exploitative data extraction whilst supporting legitimate AI development and international collaboration. This requires both technical measures and regulatory frameworks that ensure fair compensation for data usage.


Regulatory Approaches and Future Outlook

The speakers advocated for pragmatic regulatory approaches that avoid stifling innovation whilst addressing legitimate concerns. Prince argued for treating AI systems based on their outputs and impacts rather than their internal mechanisms, recognising that AI systems are inherently non-deterministic.


The discussion emphasised the importance of avoiding regulatory capture, where incumbent companies use safety concerns to justify regulations that prevent competition. The goal should be enabling innovation and competition whilst addressing genuine risks.


Strategic Implications and Conclusions

The conversation concluded with optimism about AI democratisation and India’s competitive position. Both speakers agreed that current barriers to AI development are temporary and that alternative approaches to frontier model development are not only viable but potentially superior. The combination of decreasing costs, increasing talent availability, and constraint-driven innovation suggests a more distributed and competitive AI landscape.


For India specifically, the strategy of building specialised, efficient models for local needs whilst developing sovereign capabilities across the technology stack appears to be succeeding. The country’s advantages in consumer applications, combined with its growing semiconductor and infrastructure capabilities, position it well for the next phase of AI development.


The broader implications extend beyond individual countries to the global AI ecosystem. The success of constraint-driven innovation demonstrates that breakthrough efficiency improvements can emerge from resource limitations, suggesting a future where AI capabilities are more widely distributed and where different regions develop AI solutions optimised for their specific needs.


The transformation of internet business models presents both challenges and opportunities. While current disruption threatens traditional content monetisation, the potential emergence of quality-based compensation systems could create a healthier information ecosystem that better rewards valuable content creation.


Overall, the discussion presented a compelling alternative narrative to AI concentration scenarios, suggesting that democratisation is not only possible but already underway through innovative approaches, strategic specialisation, and the natural dynamics of technological competition and diffusion. The key insight is that success in AI may not require matching the massive investments of Silicon Valley companies, but rather developing more efficient, targeted solutions that serve specific markets and needs effectively.


Session transcript: complete transcript of the session
Announcer

Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than Matthew and Rajan. Matthew Prince is the co-founder and CEO of Cloudflare, a World Economic Forum Technology Pioneer, and a Council on Foreign Relations member. He has degrees from Harvard, Chicago, and Trinity College, and co-created Project Honeypot, the largest community tracking online fraud and abuse. Matthew’s founding mission for Cloudflare was to help build a better Internet, a goal that has become increasingly critical in the age of artificial intelligence. Rajan Anandan is one of India’s most influential technology leaders and investors, currently serving as Managing Director of Peak XV Partners, formerly Sequoia. He is the founder of Sequoia Capital India and Southeast Asia, where he focuses on backing founders building transformative, technology-led businesses.

With decades of experience across entrepreneurship, investing, and global technology leadership, Rajan has played a pivotal role in shaping India’s startup and digital ecosystem. Orchestrating this conversation is Rahul Matthan, who brings the perfect blend of legal insight, policy depth, and the ability to ask the questions everyone else is thinking. Rahul is a board member and partner in Trilegal’s Bangalore office and heads their technology, media, and telecom practice. He has extensive experience advising on high-value TMT transactions in the country. He has worked with companies across sectors, from telecom majors to Internet and data service providers, offering advice on regulatory matters and operational issues. So please join me in welcoming three awesome leaders on stage, and with that, the stage is yours.

Thanks, Rahul.

Rahul Matthan

And since I haven’t worked with you, I’m going to square that circle. Matthew, I just heard your keynote up in the big 3,000-seater hall, and you ended with a very powerful statement, which is that this wonderful AI technology should not be built by a handful of companies in the same postal code. And that, in many ways, seems to be the driving motivation for having this discussion. But it’s easier said than done in that AI is a very big and complicated stack. And a lot of that stack actually involves complex hardware. And it’s hard, really, to move that hardware around the Internet. So if we are to democratize AI and if we are to come up with the infrastructure construct that would democratize AI, what would that look like?

And what is your idea, your vision for how this would be, if not now, but sometime soon?

Matthew Prince

Yeah, so let’s talk first about why AI is hard and expensive today. So the first thing is AI requires lots and lots and lots of chips, largely produced today by one manufacturer, NVIDIA, that use a ton of power and are very, very expensive. They were never built to do this. If we’re totally honest, the NVIDIA chips were built to power gaming consoles, right? And then for a while to mine Bitcoin and then magically to create a superintelligence. But if you had started with, let’s create the superintelligence, you would have designed those chips somewhat differently today.

That’s challenge one that keeps AI very, very hard. Challenge two is that it requires a real specialized set of knowledge. There’s a very small set of people in the world who know how to build these models and how to run these systems. And so you have to ask, why is that not something where everyone knows it? If you had known that you could specialize in this in school and literally make $100 million a year, we would all have studied AI, right? And yet if you go back just five years ago, the people who were studying AI were kind of the weirdos. Why was that the case? Well, because AI was one of these fields that kind of had promise in the 70s and had promise in the 80s and had promise in the 90s.

And then everyone was kind of like, you know what? We’re tired of this. And so the AI professor was kind of shunted off to the side. And so if those are the things that today make AI extremely expensive, the question is, are those things permanent states or are they going to change? Well, we can measure one of them already. Already, if you look at enrollment in computer science programs across the world, it is up dramatically, even though supposedly there’s no future for computer scientists, in just the last two years. And then secondly, the enrollment in specifically AI theory courses is off the charts. Every university that used to sort of shutter their course is now standing it up and building it like crazy.

And so I think that over time, we’re gonna have more and more people who are able to do this. And so having to pay enormous salaries for those people, that’s probably not going to be the future. On the chip side, you know, if you have literally a company going from being an obscure gaming company to the most valuable company in the world, obviously a whole bunch of people are going to chase after that. And if you look at the history of silicon, anytime there has been a silicon shortage, it turns into a silicon glut over time. And with GPUs, it’s kind of had hit after hit after hit. I think that what we’re seeing, at least, is that both from startups, as well as incumbent players, as well as from the hyperscalers and other players, they’re getting involved.

There are so many people making this silicon that, no matter what, the price per unit of work done is going to come down. The other thing that I think is encouraging is that if we look at the actual AI models themselves, it doesn’t appear that any one company is running away with it. It’s sort of like Google gets a lead, and then Anthropic passes them, and then OpenAI passes them, and then someone else passes them, and then Google does it again, and the lead keeps leapfrogging. That, to me, suggests that model making, in a steady state in the future, is more likely to be something like a commodity.

And if that’s the case, if the cost of creating the models is going to come down, if the models themselves are more like commodities, then we can’t assume that the literally hundreds of billions, if not trillions, of dollars going into building the leading AI companies today won’t come crashing down. And my prediction would be that you’ll be able to build models that are on the frontier, more specialized, but on the frontier, for tens of millions of dollars in the not-so-distant future. I’ll put a date out there: in five years, you’ll be able to build a frontier-like model within a specialty for $10 million or less.

Rahul Matthan

Rajan, about a year ago, one of these companies from one of these postal codes was here, and you asked a question: what will it take for India to compete? And you were told it’s hopeless, don’t compete. And yet at the summit, we’ve come out with a model that’s, by all accounts (I haven’t yet played with it), competitive. So what Matthew is saying seems to be working out, but he’s putting a five-year timeline on it. I would argue that perhaps we could be more aggressive with that timeline. So what is your view on this, as someone who is actually in India, working with some of these really smart people who are working under constraints

but are yet putting out some fairly impressive models, interesting use cases? What’s life like at the other end of what people call the absolute frontier of these models, where there are many different applications, many different use cases, and many different types of models? Yeah.

Rajan Anandan

Firstly, hi, everyone. Great to have all of you here. So the first thing is, look, Matthew, I don’t know whether you’ve been following a company called Sarvam. I think, firstly, it’s important that India is not trying to get to AGI. With 1.4 billion humans and a million Indians turning 18 every month, AGI is not the thing that we need. Our focus really is to uplift 1.4 billion Indians. And I think our ecosystem, our innovators, our government, our investors, our technologists, our engineers are all of that view. But to really do that, you don’t need trillion or five trillion parameter models. What you need are highly performant, extremely low-cost models that are a billion parameters to maybe 100 or 200 billion parameters.

And actually, for the amount that you mentioned, we have launched 30 and 100 billion parameter models that are SOTA in Indic languages. In fact, I don’t know whether you know this, Matthew, but if you look at voice AI in Indic languages, Sarvam today is SOTA in both speech-to-text and text-to-speech and is a fraction of the cost of the global models, including the global leader in voice AI. And by the way, the reason Sarvam is able to do that is tremendous support from the government, but it’s not just Sarvam. There are 12 large and small language models. By the way, just a

clarification, because when Sarvam launched, I think somebody said India is really good at small language models. The last time I checked ChatGPT and Gemini, anything above 30 billion parameters is actually a large language model. So we are actually in the large language model race. Just a clarification there. So basically, we have 12 companies, actually 11 companies plus BharatGen, which is part of IIT Bombay, that are building these models. I think this number goes to 15, 20 very, very quickly, and I would say well within this year, Matthew, in many, many things that India needs, right? We need to uplift 100 million farmers, and for that, we need to build models that work on feature phones, right, in local languages.

That’s done, right? That was actually launched on Wednesday and so on. So I think that’s the first thing. Now, when you ask this other question about the true frontier, which is, you know, the frontier today, call it a few trillion parameters, maybe it’ll go to 5 trillion, 10 trillion: part of it is the definition of frontier and, more importantly, what’s the objective, right? If you define the frontier that way, Indians are not going to be able to do it with this set of architectures. But what we are going to do, Matthew, goes to the point that you made, which is, look, LLMs are the most inefficient compute machines ever.

I mean, these are not efficient architectures, right? We believe that this is the beginning, not the end. We believe there are going to be many more architectures to come after transformers, and I think that is where the bets are going to be made. In fact, Yann LeCun is here, and he sort of said that, look, this is not going to lead us to AGI. We don’t really want to get to AGI, but even if you just look at where AI is going to go, I’d say at the model layer, this week India entered the race, but we are going to play this race differently. That is:

We’re not going to try to build trillion parameter models, and we’ll do it at super, super low cost. Now, coming to the chip layer, that’s harder for India, but I don’t know whether you know this, Matthew: 20% of the world’s semiconductor designers are in India. Four years ago, we had no semiconductor startups. Today, we have about 35 to 40. They span the spectrum from low-power, call it 20 nanometer chips, all SoCs, all the way up through, actually, two weeks ago, we announced an investment in a GPU company, a very seasoned team out of Intel and AMD, a company called Agrani. Monday this week, we announced an investment in a company called C2I, which is going to focus on memory.

So we’ll see it even at the chip layer, because what is very clear to us, and I think to many in India, is that we have to have the sovereign stack. Our friends are no longer friends, or sometimes they’re friends and sometimes they’re not. And as India, we just need to have the sovereign stack. We are, of course, going to have alliances, and today, I think, a very important alliance was announced with Paxilica and so on. But we’ve got to actually have a sovereign stack. So whether it’s the chip layer or the compute layer, I think it’s great that both Adani and Reliance announced $100 billion investments into AI infra this week. And at the model layer, as I said, India entered the race this week.

Where we have excelled is the application layer. And what I can tell you is, at the application layer, when I joined Google in 2011, India had 10 million connected smartphone users, no venture capital, and no unicorns. Today, we have 900 million smartphone users; we don’t have enough capital, but we have venture capital, and we have 125 unicorns. At the application layer, I can confidently say, whether it’s consumer or enterprise, Indian companies will win. Because the traditional formats of consumer consumption, call it search, or now Gemini, ChatGPT, et cetera, will in my view probably scale to 200 or 300 million. They’re not going to scale to a billion Indians.

To scale to a billion Indians, you’ve got to have image, you’ve got to have video, you’ve got to have highly local language, and it’s got to be ultra, ultra low cost. So I do think we have a shot. What you’ve just described, we shipped this week, and I think by the end of the year we’re going to ship 100x more, because, Matthew, what’s happened in India is we’re trying to do things differently. Payments is an example, which I think the whole world knows about. And you’ll see that playing out in many other things. And I’ll end with this. Matthew, in 2015, India had two space tech

Matthew Prince

And I would just say, don’t sell yourself short. India may not need AGI, but India may still build AGI, right? And I think the thing that actually might end up holding back the biggest AI companies is that they are so unconstrained on resources. If you look at the biggest innovation over the last two years that really drove AI forward, it wasn’t anything that Google or OpenAI or anyone else did. It was actually DeepSeek, and DeepSeek’s ability to say that within the constraints of the chips they had access to, they had two incredible innovations: they would prune the tree more efficiently, and they’d be able to process that pruned tree much more quickly.

I wish DeepSeek had been an Indian company, not a Chinese company. India is, if anything, a little bit more constrained, and I think there’s a lot of room there. I actually think the places with constraints are where the innovation happens, and if you’re an Indian AI company, I would not be scared away by hearing about the hundreds of billions of dollars that the big U.S. AI companies are pouring in. That seems like an asset, an advantage that they have, but in some ways it’s blinding them to what will be the real innovations, the ones that cause AI to become more efficient and more scalable. And there is no way that the long-term solution to this is that you have to turn a mothballed nuclear power plant back up.

So we’re going to get more efficient, and I would bet that that efficiency comes from places just like this.

Rahul Matthan

So if I can push back on both of you: not that I disagree with any of this, but one of the things that DeepSeek did was come up with this reasoning model. I guess other people were working on it, but they did a really good job of doing reasoning really well and really powerfully. I mean, the real DeepSeek innovation was being able to say: you’ve got to build this giant tree if you’re building an AI model, and probabilistically there’s a whole bunch of branches on the tree that you can ignore. It’s like there’s a bunch of things in your life that have happened to you that your brain is just really good at forgetting about, whereas there are a few salient moments that have formed who you are as a person.

What DeepSeek did was a better job of pruning that tree. The big US AI models don’t have to do that, because they can just say, well, let’s just buy another H200, right? Let’s just keep throwing more money at the problem. By having the constraints and the specialization, in this case the memory constraints, it forced DeepSeek to come up with a better pruning algorithm, which allowed them to deliver AI at a much, much more efficient level. And I suspect Sarvam did something similar, because I spoke to Pratyush, and he said that one of the things the big guys kept asking when he told them was, how do you do this with 15 people?

And it’s certainly some of those constraints at work. I wanted to talk along similar lines about this idea of open source, open weights; perhaps let’s stick with open weights, because open source is a contested definition. Early on, there was a lot more open-weight stuff coming out; of late, that’s gone down. And the power of open-weight models, and perhaps open source is different, is that people can actually tinker around with the model and customize it to their use case. But increasingly, we’ve seen a sort of drop-off, other than the Chinese models, Kimi and Qwen, which are still open-weight.

I wanted to just discuss among the two of you, perhaps from different perspectives, maybe the use case perspective and the whole internet infrastructure perspective, how important open weights is. And some of the backroom chatter I’ve been getting is that as these models get more performant, it becomes increasingly dangerous to put out highly performant models as open weights, because of something that OpenAI called malicious fine-tuning: as these models become better, it’s easier to undo the fine-tuning guardrails that have been established so these models don’t do bad things. And so that’s why they won’t be released. So I know that the ecosystem needs open weights, because not everyone has the time and resources to do the training.

And someone perhaps just wants to skip the pre-training and get a model out. But I’m also hearing from the other side that open weights has this fundamental security challenge. And I know we’ll get to security separately, but just on open weights, what’s the way we thread this needle between these two things?

Matthew Prince

Well, okay, I’m going to tell a story. I don’t know that 100% of the story is right, but I think it adds up to something that approximates what’s right. Let’s imagine that you are one of these major model makers: you’re OpenAI, you’re Anthropic, you’re Google. And you look at this and you say, huh, if we keep playing this out, then this is a commodity. And the only way that we win is if we restrict as many people from getting into the game as we possibly can. So how do you do that? I mean, one of the best ways to do that is just to scare everyone

that if everybody has this technology, the world is going to end. So the next time you come across an AI doomer and they say, if everyone has this, the world is going to end, just keep pushing them. Just be like, okay, and then what happens? And then what happens? And then what happens? And basically, the scariest scenario is that these things can design very bad pathogens or other malicious biological vectors that could then get synthesized and spread around society. To which I say, well, then shouldn’t we be regulating the synthesizers, not the technology that’s out there? But, again, it gets to be very hand-wavy.

But if you think about it as a strategy, if you believe that these ultimately are commodities, then what you want to do is actually regulate them, in order to make it so that yours is the only company that can be safely trusted to handle this. And I think that that’s, again somewhat cynically, a lot of the explanation for why the people who are building these horribly dangerous, scary things keep telling you how horribly dangerous and scary they are. I’ve never seen another industry do that. You don’t see the automobile industry saying, you know, this could plow through a crowd of people and be used in a mass murder event, right?

That just doesn’t make any sense. And so the only way that I can make sense of that world is if, from a business perspective, it’s actually trying to do some sort of regulatory capture. So I pretty heavily discount what the risks are here. I tend to think that more open is going to win, and I tend to think that the Chinese approach right now is the smartest approach to take against what looks like this enormous money machine which the U.S. is creating. And so I think that as India thinks about how it’s going to regulate AI, I would be careful about listening to the AI doomers.

I would be especially careful about trying to regulate the output of what is fundamentally at least a pseudo-non-deterministic system. We have built machines that act like humans, and yet we think we can regulate them like machines. The better way to regulate them is actually more like humans: look to the criminal code, not the engineering code, in order to figure out what that regulation should look like. And so I am very much pro-open. We should think about what these risks and dangers are; we should definitely be testing and looking for them. But I tend to think that they are somewhat overblown. And if you want to understand why, I would argue it is because it’s a strategy to keep the people who are currently in the lead in the lead going forward.

Rahul Matthan

I think you may be absolutely right, because on that big stage that you were at just a short while ago, yesterday, there was a call by one of these companies for an IAEA for AI, that it should be regulated like nuclear technology. And the other example I keep giving is that at the turn of the last century, people were just walking around the streets and getting electrocuted, because electricity is highly dangerous. And yet we sit in this room, where literally the walls are buzzing with electricity, and we’re completely safe. And this is the nature of all technologies. But on the positive side, Rajan, of all the AI deployers that we have in India, a large number of them are relying on open source.

And if the open source pipeline starts to diminish, where are they going to go? I mean, Sarvam can certainly deliver these models, but how important is this actually to the community of people building on them? AI and ChatGPT and all this is all well and good, but it’s really those applications, the voice applications, that people need. How dependent are they on open source? What can we do to continue to keep this open?

Rajan Anandan

Look, firstly, as Matthew said, if you invest a trillion dollars, okay, you can’t give it away for free. It’s as simple as that. It’s just economics. So you can position it any which way you want, but fundamentally it’s about economics, right, and how do you build a business, especially if you have to invest so much. But look, open source is absolutely critical. I mean, Llama is the most recent example, right, where the reality is that if they’re going to launch the next state-of-the-art version of Llama, it’ll be closed, because otherwise how are they going to monetize this? Especially if they’re spending $80 billion, $100 billion.

They do have other ways to make money, but even for them, $100 billion a year is kind of a lot. So anyway, coming back: look, it’s super important to the ecosystem. I don’t have the answer to that. I think in March you’ll see one of the big companies make a massive, massive announcement on their commitment to open source. But I think the only way you do this is that there has to be a different path, okay, because if you’re going to have to invest hundreds of millions of dollars to build a new model, or billions, or tens of billions, that’s not really open.

You can’t keep those models open, right? So, to your first question and Matthew’s response, you really need to have a different way of doing this. So actually, by the way, if you look at voice AI, for instance, I’ll give you some data. India has very low labor costs. If you look at the human cost of voice today in India, it’s five rupees a minute to about 20 rupees a minute; five rupees a minute is the lowest you can get. Amex would probably be 40 or 50 rupees a minute. Today’s voice AI costs about three rupees a minute. So already, and that’s why, you’re beginning to see voice AI really take off in India, right?

But even with today’s SOTA models, you can get to about maybe a rupee. Now, even at a rupee, you’re one-fifth the cost of humans, so it’s going to really take off. But if you want to make voice the primary medium through which 1.4 billion Indians will access AI, that’s still too expensive. You’ve got to get it down to maybe five paisa or ten paisa. And that’s actually not about open source; it’s about compute and the cost of inference. So if you ask me, open source is really, really important, but we have to find a way to get the cost of inference down.
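The cost arithmetic here can be made explicit. This is a purely illustrative sketch using the approximate rupee-per-minute figures quoted in the conversation; none of the numbers are independently verified.

```python
# Illustrative voice-AI cost arithmetic using the figures quoted above
# (all values are approximate INR per minute, as stated in the discussion).

HUMAN_INR_PER_MIN = 5.0      # lowest quoted human cost of voice in India
AI_TODAY_INR_PER_MIN = 3.0   # quoted cost of voice AI today
AI_SOTA_INR_PER_MIN = 1.0    # roughly achievable with today's SOTA models
TARGET_INR_PER_MIN = 0.05    # five paisa, the quoted target for a billion users

def cost_ratio(ai_cost: float, human_cost: float = HUMAN_INR_PER_MIN) -> float:
    """AI cost as a fraction of the cheapest quoted human cost."""
    return ai_cost / human_cost

print(cost_ratio(AI_TODAY_INR_PER_MIN))  # 0.6 -> already below human cost
print(cost_ratio(AI_SOTA_INR_PER_MIN))   # 0.2 -> one-fifth of human cost
print(HUMAN_INR_PER_MIN / TARGET_INR_PER_MIN)  # ~100x cheaper than humans
```

The point of the sketch is simply that the jump from "cheaper than people" to "cheap enough for a billion users" is another factor of roughly 20, which is why inference cost, not licensing, is framed as the bottleneck.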

Obviously, model size, all of these things matter, and we can talk about that as well. But the short answer is, look, it’s really important, but it is not clear to me how you do this, especially in the current game that we’re in. Because anybody that wants to be at the frontier, the way the frontier is defined today, actually has to go out and invest, right? And honestly, I don’t know how the Chinese are doing it, because it’s a bit opaque as to exactly how much they are investing. You’re right, it’s kind of a hedge fund.

Matthew Prince

Which is basically what DeepSeek is; they have this on the side. Meta is the fascinating question here, because it took me a really long time to understand Meta’s strategy: why are they doing all this VR, why are they doing all of this AI? What they learned was the lesson that you can be caught on the wrong side of a platform shift and then become beholden to some other platform. In the past, they were on the web, and that was fine, and social worked, and no one controlled the underlying platform. And then the platform shifted, and all of a sudden it was mobile, and they were beholden to both Apple and Google. That put them on a back foot, and it really limited their business.

So they are desperate, whatever the next platform shift is, to stay in front of it. For a while that looked like it might be VR; that seems less likely today, although never count these technologies out. The real next platform shift is almost certainly going to be AI. And so if you control the social graph, which is an unreplicable kind of asset that they have, they need to make sure that whatever the next platform is, they control it, or at least have an equal seat at the table with everyone else. So if they continue to invest in open source and you’re like, why are they spending so much money in order to do this?

It’s to make sure that when the next platform shift comes, they aren’t in the same back-footed position that they were in with Apple and Google. That would be my analysis of Meta.

Rahul Matthan

I don’t want to comment either way, but there’s this claim that AI is going to accelerate cyber attacks, because agentic swarms, et cetera, can do much more dangerous things. So what’s the evidence that you have for this?

Matthew Prince

Yeah, so I think this is sort of a long-term good news, short-term scary headline story. Let’s start with the short-term scary headline. There are going to be a whole bunch of scary headlines of bad things that AI does. There will be a story about an Indian family who lost all their money because they wired it to criminals who made it seem like their daughter had been kidnapped. We’re already seeing the level and sophistication of phishing scams go through the roof. And so the bad guys are going to use that to attack. The other thing that we’re seeing: there was an example.

There was a company called SalesLoft. It had a product called Drift, a piece of software that was connected into hundreds of thousands of Salesforce instances. SalesLoft got breached by a Russian hacker. The Russian hacker didn’t understand how Salesforce worked, so they kind of fumbled around for a really long time. Had they just used AI, which is what we’re now seeing a lot of North Korean and Chinese hackers do, they would have been instantly knowledgeable on how to get as much information out of Salesforce as quickly as possible, and the breach could have been orders of magnitude worse. So those are the bad stories, and there’s going to be real hardship and real pain caused by them.

The counter to that is that folks like us, I was just with Nikesh from Palo Alto Networks, Jay from Zscaler is here, we’re all using AI in our own systems to make them smart. In fact, at Cloudflare, we would never have described ourselves this way, but the whole theory of the company was: let’s get as much Internet traffic flowing through a machine learning system to be able to predict where security threats are. In the same way that three years ago we all looked at ChatGPT and were like, whoa, that’s amazing, internally about three years ago was the first time the system said, bloop, here’s a new threat that no human has ever identified before.

And that went from being something that happened once in the first 15 years of Cloudflare’s history to now happening on an incredibly regular basis, where the machine learning is able to win. And so I think the good news is that the good guys will always have more data than the bad guys do, with the caveat of regulation preventing us from using it for cybersecurity in various ways. But largely, we’re able to do that, and I think we will actually use AI to stay ahead of these threats. That’s what we’re seeing. It is going to require some change in any part of your life where you today rely on what someone looks like or what they sound like in order to verify who they are and give them access to anything secure or confidential.

That’s got to change. And so the simple thing that you should all do with your immediate family at your next holiday meal is decide on a family password. And that seems silly, but I guarantee you at some point some hacker is going to call up and say, hey, your son or your dad or your grandmother or whoever needs money. And if you say, hey, what’s the family password? And they say, I don’t know, Aardvark, you’ll know that it’s a scam, right? So it’s a simple thing that you can do. And it’s these simple things which I think are going to get translated into the business world, too. Businesses have got to move away from, oh, the person looked right, so we let them in the door.

Like that can’t happen in the cyber world. And so we’re going to have to lock systems down. There are going to be some scary stories, but I would predict again that in 10 years, we are more secure online than we are today.

Rahul Matthan

Rajan, I wanted to talk about data, because a lot of the conversation is around how the models that we have, Sarvam excepted, are models that are largely built in the West and are therefore Western systems.

And I know there’s something of a land grab going on for the data. So as far as the data companies in India are concerned, the companies that are actually hoovering up the data, annotating it, making it ready, what’s the business model for them? Are they feeding this all back up to that one pin code in the US, or what’s their negotiating position? And I ask this question because we have a lot of data in this country, but I know that there are countries in Africa where the deal is already done and the data is out the door. There are deals, I know, for 25 years’ worth of medical data out of Africa in exchange for setting up an EHR system, because that’s a deal they’ve done.

And I was wondering whether we are thinking about this in a nuanced way. And then, Matthew, I know you’ve got some ideas on crawling; I’ll come to you on that. What is actually happening on the ground with this?

Rajan Anandan

I think the first thing is, we don’t have as many. I mean, there are, you know, initiatives or NGOs like AI4Bharat that are collecting data, but if you look at the leading global data companies, they’re not Indian, right? India probably has a handful of startups that are actually in the, quote unquote, business of data for AI. So first and foremost, because these companies are global, you’re absolutely right: all the data that Indians are generating is actually going to those few handfuls of companies. Now, that being said, look, firstly, for Indian companies to actually keep the data here, we have to have model companies, right? Otherwise you have to sell it, because if you’re in the data business, you have to sell it to somebody. But I think the benefit we have is, honestly, we’ve only collected probably less than one percent of the data we actually need if you really want to get to AGI, right, if you look at physical intelligence and things like that.

And India really has a competitive advantage there. In fact, we’ve been looking for startups we could find and fund that would basically do all kinds of data collection for robotics and things like that. So that’s number one. Number two, we’re also beginning to see companies that are leveraging their proprietary data in a very, very interesting way. I’ll give you one example. We have a company called Cloud Physician, an Indian startup. They run remote ICUs in tier two, tier three towns in India; they’ve been doing that for four or five years. They’ve got this extraordinary amount of proprietary data that they’ve now used to actually build about a dozen or so specialized models in healthcare.

And now they’re actually taking those models to market in the U.S. And the kind of data that they have, which they collected over four or five years for what was a healthcare delivery business, if you will, has been very valuable. So in our portfolio we only have a handful of companies in different spaces that are using data as an advantage to actually build a final proposition, usually tied to some sort of domain model or something like that. But I do think we need a lot more innovation around this. I’m surprised we don’t have more companies that are actually trying to build businesses around India’s data advantage.

And second, I do think we need some smart regulation. I don't know where the regulatory framework is on data, but I think that's going to be super important. I do know that AI4Bharat, et cetera, are being quite thoughtful about who they share data with, which is great. So that's sort of where it is. But it's a huge opportunity for India. My real view is, look, all the data on the Internet is accessible to everybody — you just need large amounts of capital. Most of the data that we need to get to AGI we don't have yet.

And we have 1.4 billion people.

Rahul Matthan

But, Matthew, you wanted to intervene in that whole thing. You have something called — maybe you didn't call it that, maybe the media started calling it — pay-to-crawl, and you may have something more sophisticated, like an AI audit or something like that. What's the idea behind that? Because that's also part of this democratization of AI, as I see it.

Matthew Prince

So firstly, to correct a little bit of a misconception: all the money in the world and you still can't even crawl the Internet. How much less of the Internet does Microsoft see than Google? Microsoft Bing — they've thrown a ton of money at it. For every six pages that Google sees, Microsoft sees one. OpenAI knows how much of an advantage that is. For every 3.5 pages that Google sees, OpenAI sees one. That means that two-thirds of the Internet is hidden to even the most sophisticated model. Anthropic, it's almost 10 to 1 in terms of what's there. And so if you want to ask why Gemini just leapfrogged OpenAI, I don't think it's the chips. I don't think it's the researchers.

I actually think it's the data. And I think getting access to data is important. And so if we want to have a level playing field, there's a real risk that Google is going to leverage the monopoly position they had indexing the Internet yesterday in order to win in the AI market tomorrow. And that's something that we're really concerned about. And I think we have to do one of two things. We either have to bring Google down and say that they have to play by the same rules as all the other AI companies — that's something you could do from a regulatory perspective, and something that the U.K. is looking into, Canada is looking into,

Australia is looking into. The alternative is: how do we give all the other AI companies the same access that Google has? And that's, I think, an opportunity to also solve some of the democratization challenges out there. One of the things I really worry about is that AI is going to disrupt the fundamental internet business model. The fundamental internet business model was: create content, drive traffic, and then sell things — subscriptions or ads. That was it. I don't care if you're B2B or B2C, I don't care if you're a media company — that was it. Create great content, drive traffic, sell subscriptions or ads. AI doesn't work that way. Just take a media company. If AI scrapes your articles and takes them — let's say it's the New York Times, or the Times of India, or whatever it is — you can now go to your AI and just say, summarize all the articles from the New York Times that would be of interest to me.

And you're going to read it there. Now, that's great for you as a user. It's a better user experience, so it's going to win. But now the Times of India isn't selling a subscription or an ad. Now the New York Times isn't getting anyone to click on an ad. And that's going to make it harder. To make clear how much harder it's gotten: ten years ago, for every two pages that Google scraped on the Internet, they sent you one unique visitor, and then you could monetize that visitor by selling them things, subscriptions, or ads. Today, what is it? Thirty to one in Google's case, fifty to one in Bing's case. And that's the good news.

In OpenAI's case, it's 3,500 to 1. In Anthropic's case, it's half a million to 1 — they take half a million pages for every one page of traffic they give back. So AI takes, but it doesn't give back. And if the currency of the Internet has been traffic, that traffic is gone. And it's getting harder and harder to make money through the traditional business model of the Internet. So one of two things happens. One is, well, the Internet just dies. But that's not going to happen, because the AI companies need the content. They need the information. They need the things that are out there. The alternative is that a new business model emerges. So what happens?

And that's what's going to happen over the course of the next five years: a new business model is going to emerge for the internet. And think how exciting that is. Think how rare new business models for something as grand and as large as the internet are — how often do they emerge? Almost never. And yet we're all going to live through it, and that's an incredible opportunity. I don't know quite what it is, but it has to be some way that the people who are creating the content and creating the value get compensated for the things they are creating. The encouraging version of this is to think about the music industry. The entire music industry 22 years ago was valued at 8 billion U.S. dollars, which is a lot of money, but it's not that much money — and that was the Beatles and the Rolling Stones and everything, right? Why was that? Because Napster and Grokster and Kazaa and all these things had commoditized music. They were basically taking music, and musicians weren't getting paid for it anymore. What changed? One day Steve Jobs walked on stage and said, it's going to be 99 cents per song. iTunes launched almost 22 years ago to this day. And that wasn't the business model that won, but at least it was a business model, and it started the conversation. And that evolved into the business model that won, which is something closer to Spotify — I don't know what it is in India, but in the U.S. it's like ten dollars a month. And what's incredible is that Spotify last year sent over 12 billion dollars to musicians — more than the entire music industry was worth 22 years ago. And that's just Spotify. There's Apple Music and Tidal and TikTok and YouTube and tons of others. There's more money going into music creation today than at any other time in human history, by an order of magnitude. Now, there are different winners and losers, and we can debate whether
the right people are winning and the right people are losing, but there is more money going into music creation today than at any time in human history. And so as we figure out what the next business model of the internet is going to be, let's try not to make it one that's worse.

Let's try and learn the lessons, because traffic was always a terrible proxy for quality. So let's actually find something that is a proxy for quality, and let's reward the people who are creating that. And the good news is, I think that's what everyone in this room wants. It's what Sam wants. It's what Dario wants. It's even what Elon probably wants. And that's the sort of thing that is actually going to drive a healthier internet ecosystem — and I actually think that a lot of what's wrong with the world today is that we have monetized traffic. What that has meant is we have monetized making people emotional or angry or whatever it takes to get them to click on things, which is part of what's driven society apart in a lot of ways.

I think if instead what we monetize and what we reward is the creation of human knowledge — that's what the AI companies want, that's what we all want — then I think that's what can actually bring our society back together again.

Rahul Matthan

I want to turn it over to the audience for questions — I don't want to be the only one asking questions. Hands are going up. I'm going to take three questions at a time.

Matthew Prince

I like Indian audiences. They ask questions. Like you go to the UK and everyone just sits on their hands.

Rahul Matthan

No, no, Indian audiences are very, very — now we'll have to shut them up because we don't have time. I'm going to take this one, I'm going to take that one, and I'm going to take this one, right? So first up here. And I have a rule: a question, not a statement. So it has to end with your voice going up a little bit — then I know it's a question.

Audience

Sir, this is for you. You’ve touched upon a lot of interesting topics across domains. First of all, I remember you talking about the deterministic AI outcomes. Now AI having crossed the threshold –

Rahul Matthan

Give me the question.

Audience

Okay. So what, in your view, would make AI trustworthy? Is it something to do with explainability, with deterministic AI — and what would be the pathways?

Rahul Matthan

Let me get a couple more, otherwise we won't get through. The lady at the back there. So one is — I'll keep track of it — how do we make AI more trustworthy?

Audience

My question is for Matthew. You mentioned pay-to-crawl. We see robots.txt getting ignored. My question for you is: what makes you believe that AI companies would be equally invested in creator-based compensation, when AI crawls the Internet and is not giving back attribution or compensation?

Rahul Matthan

Trustworthy, and how do creators get paid? And attribution — I think she also wants attribution. This gentleman here.

Audience

Hi. My question is for Rajan. Rajan, you were explaining about the consumer and vertical parts of the application layer. So where are we in terms of investment from a venture-capital point of view — how can we match the Y Combinator and a16z level of investment?

Matthew Prince

Great. So AI is already more trustworthy than most humans. The simple fact is that AI is a better driver than 99.99 percent of the humans on the road today. Literally, since I started talking, within a kilometer of where we're sitting there was an accident between two cars. We just know that's happening — we're sitting in Delhi, right? You will not be able to find any news about that in any publication anywhere on Earth. And yet, if one of those two cars had been a self-driving car, it would have been front-page news around the world. The expectations for AI are too high. We have built a system that acts like humans, and we need to think of it as acting like humans.

The smartest CEO I know in terms of doing this is Robin Vince at BNY Mellon. In their case, they actually have AI employees. The AI employees get an employee number. They get an email address. They get a quarterly review. They can get fired if they don't do a good job, and they can get promoted if they do. I asked if there are any AIs that are supervising humans. He said, not yet, but it's inevitable. That's the way to think of it: they act like humans because they are like humans. And, again, we are all fallible and we're all going to make mistakes, but already, in certain disciplines like driving, AI is better than human beings are.

In terms of getting paid, I think the empirical evidence is — forget robots.txt; that's like a no-trespassing sign, anyone can ignore it — that when you actually block the AI agents, which is what we have done, then they come to the table. And so with big publishers like Condé Nast, Dotdash Meredith, and others, where starting July 1st we said all of the AI companies are blocked, they actually came to the table, got deals done, and were able to get paid. In the case of Reddit, Reddit was willing to block everyone, including even Google. And as a result, the public number is that they got seven times as much for licensing the Reddit corpus as the New York Times did, even though the two corpuses are about the same.

So again, I think that the first step in any market is having some level of scarcity. As long as you’re making it easy for anyone to take your data, then you’re not going to get paid for it.

Rajan Anandan

Yeah, on the question of consumer AI — very few people know this, but India today has more consumer AI startups than the US. In fact, on Tuesday this week at the Pitchfest, just our firm, one firm, announced five new seed investments in AI companies. Four of the five are consumer AI companies, right? And we think this is going to explode, because we have 900 million Indians on the internet, 850 million of them active every day, seven hours a day. And every space has potential for tremendous innovation. Take education: education hasn't been accessible to a large part of the population, because online education has just been too expensive, right?

But today with AI, you can have a 99-rupees-a-month plan with an AI tutor. In fact, the fastest-growing AI education company in the world is in India, and nobody's really heard of it because, fortunately, these guys are just in stealth and just building, which is very good. So I think it's a great time to be building in consumer AI. Actually, it's a great time to be building AI companies generally, but especially in consumer AI we're going to see some breakouts. Look, the world's leading consumer AI companies in education, healthcare, entertainment, et cetera, will be either here or in China. They won't be in the Western world, because the need is here.

Rahul Matthan

The one beautiful thing about this summit is there have been so many wonderful, rich, diverse conversations. This is one of them. Matthew, Rajan, thank you so much. Thank you all for being such a good audience. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Matthew Prince
11 arguments · 186 words per minute · 4453 words · 1431 seconds
Argument 1
AI costs will decrease dramatically due to increased competition in chips and talent availability – current barriers are temporary
EXPLANATION
Prince argues that the current high costs of AI are due to temporary factors: expensive NVIDIA chips originally designed for gaming and a shortage of AI specialists. He predicts these barriers will diminish as more companies enter chip manufacturing and universities expand AI education programs.
EVIDENCE
NVIDIA chips were originally built for gaming consoles, then Bitcoin mining, not AI. Computer science enrollment is up dramatically in the last two years, and AI theory course enrollment is ‘off the top.’ Silicon shortages historically turn into gluts over time. Predicts frontier-like specialized models will cost $10 million or less within five years.
MAJOR DISCUSSION POINT
Democratization of AI and Infrastructure
AGREED WITH
Rajan Anandan
DISAGREED WITH
Rajan Anandan
Argument 2
Constraints force innovation – companies with limited resources often develop more efficient solutions than well-funded competitors
EXPLANATION
Prince argues that resource constraints can be advantageous for innovation, citing DeepSeek’s breakthrough in AI efficiency. He suggests that well-funded companies may be blinded to efficiency innovations because they can simply throw more resources at problems.
EVIDENCE
DeepSeek’s innovation in pruning algorithms and processing efficiency came from chip constraints. DeepSeek developed better pruning algorithms while US companies just ‘buy another H200.’ Sarvam achieved similar efficiency with only 15 people, surprising larger competitors.
MAJOR DISCUSSION POINT
Democratization of AI and Infrastructure
AGREED WITH
Rajan Anandan
Argument 3
AI companies promote doom scenarios to justify regulatory capture and maintain competitive advantages
EXPLANATION
Prince suggests that AI companies strategically promote fears about AI dangers to create regulations that would restrict competition and maintain their market position. He argues this is a business strategy rather than genuine concern about safety.
EVIDENCE
No other industry promotes how dangerous their products are like AI companies do. Automobile industry doesn’t emphasize mass murder potential. AI doomers’ scenarios become ‘hand-wavy’ when pressed for details. Regulation would benefit companies that can be ‘safely trusted’ to handle dangerous AI.
MAJOR DISCUSSION POINT
Open Source vs Closed AI Models
DISAGREED WITH
Rahul Matthan
Argument 4
More open approaches will ultimately win, and regulation should focus on outputs rather than restricting access to models
EXPLANATION
Prince advocates for open AI development and argues that regulation should treat AI systems like humans rather than machines. He believes the Chinese approach of openness is strategically smarter than restrictive US approaches.
EVIDENCE
AI systems are pseudo non-deterministic and act like humans. Better to use criminal code rather than engineering code for regulation. Chinese open approach is ‘smartest’ against the US ‘money machine.’
MAJOR DISCUSSION POINT
Open Source vs Closed AI Models
AGREED WITH
Rajan Anandan, Rahul Matthan
Argument 5
AI will initially enable more sophisticated attacks but ultimately make systems more secure through better defense capabilities
EXPLANATION
Prince acknowledges that AI will enable more sophisticated cyber attacks in the short term but argues that defenders will ultimately benefit more from AI capabilities because they have access to more data than attackers.
EVIDENCE
Examples of AI-enhanced phishing scams and potential for worse breaches like SalesLoft incident. Cloudflare’s ML system now regularly identifies new threats no human has seen before. Good guys have more data than bad guys. Predicts we’ll be more secure online in 10 years.
MAJOR DISCUSSION POINT
AI Security and Cybersecurity
Argument 6
Authentication methods must evolve beyond appearance-based verification to combat AI-generated impersonation
EXPLANATION
Prince warns that AI’s ability to replicate voices and appearances will make traditional verification methods obsolete. He advocates for simple solutions like family passwords to protect against AI-enabled social engineering.
EVIDENCE
AI can now convincingly impersonate voices and appearances for scams. Recommends family passwords as immediate protection. Businesses must move away from appearance-based verification systems.
MAJOR DISCUSSION POINT
AI Security and Cybersecurity
Argument 7
AI is disrupting traditional internet monetization by taking content without driving traffic back to creators
EXPLANATION
Prince explains how AI is breaking the fundamental internet business model of creating content to drive traffic for monetization. AI systems scrape content but don’t send users back to original sources, threatening content creators’ revenue streams.
EVIDENCE
Traditional model: create content, drive traffic, sell subscriptions/ads. 10 years ago Google sent 1 visitor per 2 pages scraped, now 30:1. OpenAI ratio is 3,500:1, Anthropic is 500,000:1. AI summarizes content without sending traffic back.
MAJOR DISCUSSION POINT
Data and Internet Business Models
AGREED WITH
Rahul Matthan
Argument 8
A new business model for the internet must emerge that compensates content creators, similar to how the music industry evolved
EXPLANATION
Prince draws parallels between the current AI disruption and the music industry’s transformation from Napster to Spotify. He argues that a new compensation model will emerge that could be better than the current traffic-based system.
EVIDENCE
Music industry was worth $8B 22 years ago due to piracy, now Spotify alone sends $12B+ to musicians. iTunes started at $0.99/song, evolved to $10/month streaming. Traffic was always a ‘terrible proxy for quality.’ New model should reward human knowledge creation.
MAJOR DISCUSSION POINT
Data and Internet Business Models
Argument 9
Blocking AI crawlers creates scarcity that forces AI companies to negotiate fair compensation with content creators
EXPLANATION
Prince argues that content creators can force AI companies to pay by blocking access to their content. He provides evidence that this strategy works when implemented effectively.
EVIDENCE
Cloudflare blocked AI agents starting July 1st, leading to deals with Condé Nast and Dotdash Meredith. Reddit got 7x more than the New York Times by blocking everyone, including Google. ‘First step in any market is having some level of scarcity.’
MAJOR DISCUSSION POINT
Data and Internet Business Models
DISAGREED WITH
Audience
Argument 10
AI is already more trustworthy than humans in many applications but faces unrealistic expectations
EXPLANATION
Prince argues that AI systems like autonomous vehicles are already safer than human operators but are held to impossibly high standards. He suggests this double standard hinders AI adoption.
EVIDENCE
AI is better driver than 99.99% of humans. Car accidents happen constantly in Delhi but go unreported, while any self-driving car accident becomes global news. BNY Mellon treats AI as employees with ID numbers, email addresses, and performance reviews.
MAJOR DISCUSSION POINT
AI Trustworthiness and Regulation
Argument 11
AI should be regulated like humans using criminal codes rather than engineering standards for deterministic systems
EXPLANATION
Prince advocates for treating AI systems more like human employees rather than deterministic machines. He argues this approach is more practical given AI’s human-like behavior patterns.
EVIDENCE
AI systems are ‘pseudo non-deterministic’ and act like humans. BNY Mellon gives AI employees ID numbers, quarterly reviews, can be fired or promoted. Some AIs may eventually supervise humans.
MAJOR DISCUSSION POINT
AI Trustworthiness and Regulation
Rajan Anandan
6 arguments · 195 words per minute · 2620 words · 802 seconds
Argument 1
India can compete by building specialized, cost-effective models for local needs rather than pursuing AGI
EXPLANATION
Anandan argues that India doesn’t need to compete in the AGI race but should focus on building highly performant, low-cost models for specific applications that serve India’s 1.4 billion people. He emphasizes that smaller, specialized models can be more effective for India’s needs.
EVIDENCE
India has 1.4 billion people with a million turning 18 monthly – AGI isn’t needed. Sarvam launched 30 and 100 billion parameter models that are state-of-the-art in Indic languages. Sarvam’s voice AI is both SOTA and fraction of cost of global models. 12 companies building large language models in India.
MAJOR DISCUSSION POINT
Democratization of AI and Infrastructure
AGREED WITH
Matthew Prince
DISAGREED WITH
Matthew Prince
Argument 2
India has entered the AI race with competitive models and is building a sovereign technology stack across chips, compute, and applications
EXPLANATION
Anandan describes India’s comprehensive approach to AI development, including model creation, semiconductor design, and infrastructure investment. He emphasizes the need for technological sovereignty given changing geopolitical relationships.
EVIDENCE
12 companies building LLMs, 35-40 semiconductor startups (up from zero 4 years ago), 20% of world’s semiconductor designers in India. Investments in GPU company Agrani and memory company C2I. Adani and Reliance announced $100B AI infrastructure investments. 125 unicorns and strong application layer success.
MAJOR DISCUSSION POINT
Democratization of AI and Infrastructure
AGREED WITH
Matthew Prince
Argument 3
Open source is critical for the ecosystem but economically challenging when companies invest hundreds of billions in model development
EXPLANATION
Anandan acknowledges the importance of open source for the AI ecosystem while recognizing the economic reality that companies investing massive amounts cannot give away their models for free. He suggests the need for alternative development approaches.
EVIDENCE
If you invest a trillion dollars, ‘you can’t give it away for free. It’s just economics.’ Meta spending $80-100 billion annually. Llama’s next version will likely be closed to enable monetization. March announcement expected from major company on open source commitment.
MAJOR DISCUSSION POINT
Open Source vs Closed AI Models
AGREED WITH
Matthew Prince, Rahul Matthan
Argument 4
India has competitive advantages in data collection but needs more companies building businesses around this data advantage
EXPLANATION
Anandan notes that while global data companies are not Indian, India has significant untapped potential in data collection and application. He emphasizes the need for more innovation in leveraging India’s data advantages for AI development.
EVIDENCE
Cloud Physician built specialized healthcare models using proprietary ICU data collected over 4-5 years, now marketing in the US. AI4Bharat collecting data thoughtfully. India has collected ‘less than 1% of data actually needed’ for AGI. Need for robotics data collection startups.
MAJOR DISCUSSION POINT
Data and Internet Business Models
Argument 5
India has more consumer AI startups than the US and is well-positioned for AI applications serving its large user base
EXPLANATION
Anandan highlights India’s strength in consumer AI applications, driven by its massive internet user base and specific market needs. He predicts major breakthroughs in education, healthcare, and entertainment sectors.
EVIDENCE
India has more consumer AI startups than the US. 900 million Indians online, 850 million active 7 hours daily. Peak XV announced 5 seed investments, 4 in consumer AI. Fastest growing AI education company globally is Indian. 99 rupees/month AI tutor plans now viable.
MAJOR DISCUSSION POINT
Investment and Market Opportunities
Argument 6
Consumer AI will see major breakthroughs in education, healthcare, and entertainment, particularly in India and China
EXPLANATION
Anandan predicts that the world’s leading consumer AI companies will emerge from India and China rather than Western markets, due to the scale of need and user base in these regions.
EVIDENCE
World’s leading consumer AI companies in education, healthcare, entertainment will be in India or China, not Western world. Voice AI costs dropped from 5-20 rupees/minute to 3 rupees, targeting 5-10 paisa for mass adoption. Education becoming accessible at 99 rupees/month with AI tutors.
MAJOR DISCUSSION POINT
Investment and Market Opportunities
Rahul Matthan
4 arguments · 158 words per minute · 1775 words · 673 seconds
Argument 1
AI democratization requires addressing the complex hardware infrastructure challenge beyond just software solutions
EXPLANATION
Matthan argues that while democratizing AI is desirable, the reality is that AI relies heavily on complex hardware infrastructure that cannot easily be distributed or moved around the internet. This creates fundamental challenges for making AI truly accessible globally.
EVIDENCE
AI involves a ‘very big and complicated stack’ with ‘complex hardware’ that is ‘hard to move around the Internet’
MAJOR DISCUSSION POINT
Democratization of AI and Infrastructure
Argument 2
Open-weight models are essential for AI ecosystem development but face security challenges as models become more performant
EXPLANATION
Matthan highlights the tension between the ecosystem’s need for open-weight models that can be customized and the increasing security risks from ‘malicious fine-tuning’ as models become more capable. He notes that developers need access to pre-trained models but this creates vulnerabilities.
EVIDENCE
Ecosystem needs open-weights because ‘not everyone has the time to do the training’ but there are concerns about ‘malicious fine-tuning’ and ability to ‘undo the fine-tuning guardrails’ in highly performant models
MAJOR DISCUSSION POINT
Open Source vs Closed AI Models
AGREED WITH
Matthew Prince, Rajan Anandan
Argument 3
Data sovereignty and fair compensation for data providers is critical, with concerning precedents from other regions
EXPLANATION
Matthan raises concerns about data extraction patterns where companies are ‘hoovering up’ data from countries like India, while pointing to problematic deals in Africa where 25 years of medical data was exchanged for EHR systems. He questions whether India is approaching data governance strategically.
EVIDENCE
Companies are ‘hoovering up the data, annotating it, making it ready’ and there are ‘deals for 25 years worth of medical data out of Africa in exchange for setting up an EHR system’
MAJOR DISCUSSION POINT
Data and Internet Business Models
AGREED WITH
Matthew Prince
Argument 4
Technology safety concerns may be overblown, drawing parallels to electricity adoption challenges
EXPLANATION
Matthan suggests that current AI safety fears may be exaggerated by comparing them to historical technology adoption. He notes that electricity was initially dangerous with people getting electrocuted in the streets, but society learned to safely integrate it into daily life.
EVIDENCE
At the turn of the last century, ‘people were just walking around the streets and getting electrocuted, because electricity is highly dangerous’ yet now ‘we sit in this room, which literally the walls are buzzing with electricity, and we’re completely safe’
MAJOR DISCUSSION POINT
AI Trustworthiness and Regulation
DISAGREED WITH
Matthew Prince
Audience
3 arguments · 196 words per minute · 177 words · 54 seconds
Argument 1
AI trustworthiness requires focus on explainability and deterministic outcomes
EXPLANATION
An audience member questions what would make AI more trustworthy, specifically asking about the role of explainability and deterministic AI systems. They suggest these might be key pathways to building trust in AI systems.
EVIDENCE
Question about ‘explainability, deterministic AI, and what would be the pathways’ to trustworthy AI
MAJOR DISCUSSION POINT
AI Trustworthiness and Regulation
Argument 2
AI companies may not be genuinely committed to creator compensation despite ignoring existing protection mechanisms
EXPLANATION
An audience member challenges the assumption that AI companies will fairly compensate creators, pointing out that they already ignore robots.txt files and questioning their commitment to attribution and compensation systems.
EVIDENCE
Observation that ‘robots.txt getting ignored’ and questioning ‘what makes you believe that AI companies would be equally invested in a creator-based compensation when AI creates the Internet and is not giving back attribution or compensation’
MAJOR DISCUSSION POINT
Data and Internet Business Models
DISAGREED WITH
Matthew Prince
Argument 3
India needs to match international venture capital investment levels to compete in AI development
EXPLANATION
An audience member asks about India’s venture capital investment capacity compared to major international players like Y Combinator and Andreessen Horowitz, suggesting this is crucial for India’s AI competitiveness.
EVIDENCE
Question about ‘how can we match the Y Combinator and a16z level in terms of investments’
MAJOR DISCUSSION POINT
Investment and Market Opportunities
Announcer
2 arguments · 147 words per minute · 266 words · 108 seconds
Argument 1
Matthew Prince and Rajan Anandan are transformative technology leaders who have brought revolutionary technology to millions
EXPLANATION
The announcer establishes the credibility and impact of both speakers by highlighting their roles in democratizing technology access. Matthew Prince co-founded Cloudflare with a mission to build a better Internet, while Rajan Anandan has shaped India’s startup ecosystem and focuses on backing transformative technology-led businesses.
EVIDENCE
Matthew Prince is co-founder and CEO of Cloudflare, has degrees from Harvard, Chicago, and Trinity College, and co-created Project Honey Pot, which tracks online fraud. Rajan Anandan is Managing Director of Peak XV Partners (formerly Sequoia Capital India), with decades of experience in entrepreneurship and investing.
MAJOR DISCUSSION POINT
Speaker Introductions and Credentials
Argument 2
The discussion focuses on democratizing AI technology beyond a handful of companies in the same geographical area
EXPLANATION
The announcer frames the central theme of the discussion around Matthew Prince’s keynote statement about preventing AI concentration in a few companies within the same postal code. This sets up the conversation about making AI more accessible and distributed globally.
EVIDENCE
Reference to Matthew Prince’s keynote statement that ‘this wonderful AI technology should not be built by a handful of companies in the same postal code’
MAJOR DISCUSSION POINT
Democratization of AI and Infrastructure
Agreements
Agreement Points
Resource constraints drive innovation and efficiency in AI development
Speakers: Matthew Prince, Rajan Anandan
Constraints force innovation – companies with limited resources often develop more efficient solutions than well-funded competitors
India can compete by building specialized, cost-effective models for local needs rather than pursuing AGI
Both speakers agree that having limited resources can actually be advantageous for AI innovation. Prince cites DeepSeek’s breakthrough efficiency innovations due to chip constraints, while Anandan describes how Indian companies like Sarvam are achieving state-of-the-art results with smaller teams and budgets by focusing on specialized, cost-effective models rather than trying to match the massive investments of US companies.
AI costs and barriers will decrease over time making the technology more accessible
Speakers: Matthew Prince, Rajan Anandan
AI costs will decrease dramatically due to increased competition in chips and talent availability – current barriers are temporary
India has entered the AI race with competitive models and is building a sovereign technology stack across chips, compute, and applications
Both speakers are optimistic about AI becoming more accessible and affordable. Prince predicts frontier-like models will cost $10 million or less within five years due to increased competition and talent availability. Anandan provides evidence that this is already happening, with Indian companies building competitive models and the country developing its own semiconductor capabilities.
Open source/open weights are important for the ecosystem but face economic and security challenges
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
More open approaches will ultimately win, and regulation should focus on outputs rather than restricting access to models
Open source is critical for the ecosystem but economically challenging when companies invest hundreds of billions in model development
Open-weight models are essential for AI ecosystem development but face security challenges as models become more performant
All three speakers acknowledge the importance of open approaches for AI development while recognizing the practical challenges. Prince advocates for openness and warns against regulatory capture, Anandan explains the economic reality that massive investments make free distribution difficult, and Matthan highlights the security concerns with malicious fine-tuning as models become more capable.
AI will transform internet business models and require new compensation mechanisms for content creators
Speakers: Matthew Prince, Rahul Matthan
AI is disrupting traditional internet monetization by taking content without driving traffic back to creators
Data sovereignty and fair compensation for data providers is critical, with concerning precedents from other regions
Both speakers recognize that AI is fundamentally disrupting how content creators are compensated online. Prince provides detailed analysis of how AI systems scrape content without sending traffic back, breaking the traditional create-content-drive-traffic-monetize model. Matthan raises concerns about data extraction patterns and the need for fair compensation, referencing problematic deals in other regions.
Similar Viewpoints
Both speakers are skeptical of AI doom narratives and believe in distributed, competitive AI development rather than concentration in a few large companies. Prince argues that doom scenarios are strategically promoted for regulatory capture, while Anandan demonstrates through India’s success that alternative approaches to AI development are viable and competitive.
Speakers: Matthew Prince, Rajan Anandan
AI companies promote doom scenarios to justify regulatory capture and maintain competitive advantages
India has more consumer AI startups than the US and is well-positioned for AI applications serving its large user base
Both speakers believe that AI safety concerns are often exaggerated and that AI systems are already performing better than humans in many applications. They draw parallels to historical technology adoption challenges, suggesting that society will adapt and develop appropriate safety measures over time.
Speakers: Matthew Prince, Rahul Matthan
AI is already more trustworthy than humans in many applications but faces unrealistic expectations
Technology safety concerns may be overblown, drawing parallels to electricity adoption challenges
Both speakers see data as a valuable asset that should be properly monetized. Prince provides examples of successful negotiations when content is protected, while Anandan highlights India’s data advantages and the need for more companies to build businesses around this asset.
Speakers: Matthew Prince, Rajan Anandan
Blocking AI crawlers creates scarcity that forces AI companies to negotiate fair compensation with content creators
India has competitive advantages in data collection but needs more companies building businesses around this data advantage
Unexpected Consensus
AI regulation should be practical rather than restrictive
Speakers: Matthew Prince, Rajan Anandan, Rahul Matthan
AI should be regulated like humans using criminal codes rather than engineering standards for deterministic systems
India has entered the AI race with competitive models and is building a sovereign technology stack across chips, compute, and applications
Technology safety concerns may be overblown, drawing parallels to electricity adoption challenges
Despite coming from different perspectives (infrastructure provider, investor, and legal expert), all three speakers converge on the view that AI regulation should be practical and not overly restrictive. This consensus is unexpected given the current global trend toward more stringent AI regulation and the different stakeholder interests they represent.
Democratization of AI is both necessary and achievable
Speakers: Matthew Prince, Rajan Anandan
Constraints force innovation – companies with limited resources often develop more efficient solutions than well-funded competitors
India can compete by building specialized, cost-effective models for local needs rather than pursuing AGI
Both speakers, despite representing different sectors (US tech infrastructure vs Indian venture capital), strongly agree that AI democratization is not only desirable but actively happening. This consensus is unexpected given the narrative of AI concentration in a few large US companies, and their agreement suggests a more optimistic future for global AI competition.
Overall Assessment

The speakers demonstrate remarkable consensus on key issues around AI democratization, the temporary nature of current barriers, the importance of open approaches, and the need for practical rather than restrictive regulation. They agree that resource constraints can drive innovation, that AI costs will decrease, and that new business models must emerge to fairly compensate content creators.

High level of consensus with significant implications for AI policy and development. The agreement between speakers from different backgrounds (US infrastructure provider, Indian investor, and legal expert) suggests these views may represent broader industry sentiment. Their shared optimism about AI democratization and skepticism of doom scenarios could influence policy discussions, while their agreement on the need for new internet business models highlights the urgency of addressing creator compensation in the AI era.

Differences
Different Viewpoints
Timeline for AI cost reduction and democratization
Speakers: Matthew Prince, Rajan Anandan
AI costs will decrease dramatically due to increased competition in chips and talent availability – current barriers are temporary
India can compete by building specialized, cost-effective models for local needs rather than pursuing AGI
Prince predicts frontier-like models will cost $10 million or less within five years, while Anandan argues India has already achieved competitive results this week and suggests the timeline could be more aggressive
Approach to AI safety and regulation concerns
Speakers: Matthew Prince, Rahul Matthan
AI companies promote doom scenarios to justify regulatory capture and maintain competitive advantages
Technology safety concerns may be overblown, drawing parallels to electricity adoption challenges
Both speakers are skeptical of AI safety concerns but for different reasons – Prince sees it as strategic business manipulation while Matthan draws historical parallels to technology adoption
Effectiveness of blocking AI crawlers for creator compensation
Speakers: Matthew Prince, Audience
Blocking AI crawlers creates scarcity that forces AI companies to negotiate fair compensation with content creators
AI companies may not be genuinely committed to creator compensation despite ignoring existing protection mechanisms
Prince provides evidence that blocking works, citing successful negotiations, while an audience member questions AI companies’ genuine commitment given that they already ignore robots.txt
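For context on the mechanism in dispute: robots.txt is a plain-text file served at a site’s root that asks crawlers, identified by user-agent string, to stay away. A minimal sketch blocking several publicly documented AI crawlers is below (the user-agent names shown are the ones OpenAI, Anthropic, Common Crawl, and Google document for their bots; compliance with the file is entirely voluntary, which is precisely the audience member’s objection):

```
# robots.txt — served at https://example.com/robots.txt
# Requests, but cannot enforce, that AI crawlers skip this site.

User-agent: GPTBot            # OpenAI's crawler
Disallow: /

User-agent: ClaudeBot         # Anthropic's crawler
Disallow: /

User-agent: CCBot             # Common Crawl
Disallow: /

User-agent: Google-Extended   # Opts out of Gemini training, not Search
Disallow: /

User-agent: *                 # All other crawlers remain unrestricted
Disallow:
```

Because honoring the file is optional, Prince’s argument in this session is that enforcement must come from network-level blocking, which creates the scarcity that brings AI companies to the negotiating table.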
Unexpected Differences
AI trustworthiness standards and expectations
Speakers: Matthew Prince, Audience
AI is already more trustworthy than humans in many applications but faces unrealistic expectations
AI trustworthiness requires focus on explainability and deterministic outcomes
Unexpected because Prince argues AI is already trustworthy enough and faces unfair standards, while the audience member seeks traditional technical solutions like explainability – this represents a fundamental disagreement about whether the problem is technical or perceptual
Investment capacity requirements for AI competitiveness
Speakers: Rajan Anandan, Audience
Consumer AI will see major breakthroughs in education, healthcare, and entertainment, particularly in India and China
India needs to match international venture capital investment levels to compete in AI development
Unexpected because Anandan is optimistic about India’s AI prospects with current resources, while the audience member suggests India needs to match Y Combinator and a16z investment levels – a disagreement about whether current funding is sufficient
Overall Assessment

The discussion revealed surprisingly few fundamental disagreements among speakers, with most differences centered on timelines, approaches, and emphasis rather than core principles. Main disagreements involved the pace of AI democratization, the effectiveness of current strategies for creator compensation, and whether existing AI systems meet trustworthiness standards.

Low to moderate disagreement level with high convergence on goals but different perspectives on implementation strategies and timelines. This suggests a productive foundation for collaboration while highlighting areas needing further discussion around practical implementation details.

Partial Agreements
Both agree open source is important for the ecosystem, but disagree on feasibility – Prince advocates for more openness while Anandan acknowledges economic realities make it difficult for companies investing massive amounts
Speakers: Matthew Prince, Rajan Anandan
More open approaches will ultimately win, and regulation should focus on outputs rather than restricting access to models
Open source is critical for the ecosystem but economically challenging when companies invest hundreds of billions in model development
Both agree that resource constraints can drive innovation and efficiency, but Prince focuses on this as a general principle while Anandan specifically applies it to India’s strategy of building specialized rather than general AI models
Speakers: Matthew Prince, Rajan Anandan
Constraints force innovation – companies with limited resources often develop more efficient solutions than well-funded competitors
India can compete by building specialized, cost-effective models for local needs rather than pursuing AGI
Both recognize India’s data advantages and the need for better data governance, but Anandan focuses on business opportunities while Matthan emphasizes sovereignty and protection concerns
Speakers: Rajan Anandan, Rahul Matthan
India has competitive advantages in data collection but needs more companies building businesses around this data advantage
Data sovereignty and fair compensation for data providers is critical, with concerning precedents from other regions
Takeaways
Key takeaways
AI democratization is achievable as costs will decrease dramatically due to competition in chips and talent – current barriers like expensive NVIDIA chips and scarce AI expertise are temporary states that will resolve within 5 years

India can compete in AI by focusing on specialized, cost-effective models for local needs rather than pursuing AGI – constraints often drive more efficient innovation than unlimited resources

Open source AI models are critical for ecosystem development but face economic challenges as companies invest hundreds of billions in development – this creates tension between accessibility and business viability

AI will initially enable more sophisticated cyberattacks but ultimately make systems more secure through better defense capabilities – the key is evolving authentication beyond appearance-based verification

The traditional internet business model (create content, drive traffic, sell ads/subscriptions) is being disrupted by AI, necessitating a new compensation model for content creators similar to how the music industry evolved

AI should be regulated like humans using criminal codes rather than engineering standards, and treated as employees with accountability measures rather than deterministic machines

India has significant advantages in consumer AI applications due to its large user base (900 million internet users) and cost constraints that drive innovation
Resolutions and action items
Content creators should block AI crawlers to create scarcity and force AI companies to negotiate fair compensation agreements

Families should establish password systems to protect against AI-enabled social engineering attacks

India should continue building its sovereign technology stack across chips, compute, and applications rather than relying solely on foreign providers

Businesses must move away from appearance-based authentication systems to more secure verification methods

The industry needs to develop new internet business models that reward quality content creation rather than just traffic generation
Unresolved issues
How to maintain open source AI development when companies need to recoup massive investments in model development

What specific regulatory framework should govern AI development and deployment, particularly regarding data usage and model access

How to balance AI safety concerns with democratization goals – the tension between restricting access for safety versus enabling innovation

What the new internet business model will actually look like beyond general principles of compensating content creators

How to ensure fair attribution and compensation for content used in AI training when current systems largely ignore creator rights

Whether India can successfully build competitive semiconductor capabilities to achieve true technological sovereignty

How to scale AI applications to serve India’s full 1.4 billion population at affordable costs (getting voice AI from current 3 rupees per minute to 5-10 paisa per minute)
Suggested compromises
Focus on specialized AI models for specific use cases rather than competing directly with frontier AGI models – allows for innovation within resource constraints

Implement graduated access to AI models based on use case and safety considerations rather than blanket restrictions or complete openness

Develop hybrid approaches where basic models remain open source while advanced capabilities require licensing agreements

Create international cooperation frameworks for AI development while maintaining sovereign capabilities in critical areas

Establish industry standards for content creator compensation that balance AI company needs with fair payment for training data

Regulate AI outputs and applications rather than restricting access to underlying models and research
Thought Provoking Comments
Already, if you look in enrollment in computer science programs across the world, it is up dramatically… And then secondly, the enrollment in specifically AI theory courses is off the top. Every university that used to sort of shutter their course is now standing it up and building it like crazy… In five years, you’ll be able to build a frontier-like model within a specialty for $10 million or less.
This comment reframes the AI democratization debate by providing concrete evidence that the barriers to AI development are already eroding. Rather than accepting the narrative that AI will remain concentrated among a few companies, Prince presents data showing structural changes in education and predicts dramatic cost reductions. His specific $10 million prediction creates a measurable benchmark for the discussion.
This comment fundamentally shifted the conversation from whether AI can be democratized to how and when it will happen. It prompted Rajan to provide concrete examples of Indian companies already achieving competitive results with constrained resources, moving the discussion from theoretical to practical examples.
Speaker: Matthew Prince
AGI is not the thing that we need. Our focus really is to uplift 1.4 billion Indians… What you need are highly performant, extremely low cost models that are a billion parameters to maybe 100 or 200 billion parameters.
This comment challenges the fundamental assumption that success in AI means competing directly with frontier models. Anandan redefines the problem space entirely, arguing that different regions need different AI solutions based on their specific challenges and constraints. This represents a strategic pivot from imitation to innovation.
This reframing allowed the discussion to explore alternative pathways to AI success, leading to detailed examples of Indian companies achieving state-of-the-art results in specific domains. It shifted the conversation from a zero-sum competition narrative to a more nuanced view of specialized AI applications.
Speaker: Rajan Anandan
I wish DeepSeek had been an Indian company, not a Chinese company… I actually think those places with constraints… it’s blinding them to what will be the real innovations that cause AI to become more efficient… there is no way that the long-term solution to this is you have to turn up a mothballed nuclear power plant.
This comment inverts the conventional wisdom about resource constraints being disadvantages. Prince argues that unlimited resources can actually hinder innovation by preventing the development of more efficient solutions. This paradoxical insight challenges the assumption that more funding automatically leads to better outcomes.
This observation validated Rajan’s examples of Indian companies achieving impressive results with limited resources and introduced the concept that constraints can drive superior innovation. It led to a deeper discussion about efficiency versus brute force approaches in AI development.
Speaker: Matthew Prince
Let’s imagine that over time, you are one of these major model makers… the only way that we win is if we restrict as many people from getting into the game as we possibly can. So how do you do that? I mean, one of the best ways to do that is just to scare everyone… I’ve never seen another industry that has done that.
This comment provides a cynical but compelling economic explanation for AI safety rhetoric, suggesting that doomsday scenarios may be strategically motivated rather than purely safety-driven. Prince’s comparison to other industries (noting that car companies don’t emphasize their products’ potential for mass casualties) is particularly striking.
This comment dramatically shifted the tone of the open-source discussion, reframing safety concerns as potential regulatory capture attempts. It led to a more skeptical analysis of AI regulation and reinforced arguments for keeping AI development open and competitive.
Speaker: Matthew Prince
The fundamental internet business model was create content, drive traffic, and then sell things, subscriptions, or ads… In OpenAI’s case, it’s 3,500 to 1. In Anthropic’s case, it’s half a million to 1. They take half a million pages for every one page they give back.
This comment identifies a fundamental economic disruption that extends far beyond AI companies to the entire internet ecosystem. The specific ratios (3,500:1, 500,000:1) make the scale of value extraction viscerally clear and highlight an unsustainable dynamic that threatens content creation.
This observation expanded the discussion beyond technical AI development to broader economic implications. It introduced the music industry analogy and led to a forward-looking discussion about new business models, suggesting that current disruption could ultimately lead to better compensation for creators.
Speaker: Matthew Prince
AI is already more trustworthy than most humans. The simple fact is that AI is a better driver than 99.99 percent of humans… The expectations for AI are too high. We have built a system that acts like humans and we need to think of it as acting like humans.
This comment challenges the framing of AI trustworthiness by comparing AI performance to human performance rather than to perfect performance. Prince’s observation about media coverage bias (human accidents ignored, AI accidents front-page news) reveals how perception shapes reality in AI adoption.
This reframing shifted the trustworthiness discussion from abstract concerns about AI reliability to practical comparisons with human performance. It introduced the innovative example of BNY Mellon treating AIs as employees, providing a concrete model for AI integration in organizations.
Speaker: Matthew Prince
Overall Assessment

These key comments fundamentally reshaped the discussion from a conventional narrative about AI concentration and risks to a more nuanced exploration of alternative pathways and hidden dynamics. Prince’s economic analysis of AI development costs and safety rhetoric, combined with Anandan’s strategic reframing of India’s AI goals, created a counter-narrative to Silicon Valley dominance. The conversation evolved from defensive positioning (‘how can we compete?’) to offensive strategy (‘how can we innovate differently?’). The discussion’s progression through technical capabilities, economic models, and societal implications was driven by these provocative reframings that challenged basic assumptions about AI development, regulation, and deployment. The overall effect was to transform what could have been a standard panel about AI challenges into a strategic discussion about alternative approaches to AI development and deployment.

Follow-up Questions
How can India develop a sovereign semiconductor stack given the constraints and competition from established players?
This is critical for India’s technological independence and ability to compete in AI infrastructure without relying on foreign suppliers
Speaker: Rajan Anandan
What specific regulatory framework should India adopt for AI data governance and protection?
India needs smart regulation around data usage for AI development to protect its competitive advantages while enabling innovation
Speaker: Rajan Anandan
How can the cost of AI inference be reduced to 5-10 paisa per minute to make voice AI accessible to 1.4 billion Indians?
Current costs of 1 rupee per minute are still too expensive for mass adoption across India’s population
Speaker: Rajan Anandan
What new business models will emerge to replace the traditional internet traffic-based monetization as AI disrupts content consumption?
The fundamental shift from traffic-driven revenue to AI-mediated content consumption requires entirely new economic models for content creators
Speaker: Matthew Prince
How can quality-based compensation systems be developed to reward content creators in an AI-driven internet ecosystem?
Moving beyond traffic as a proxy for value to actual quality metrics could create a healthier internet ecosystem and better societal outcomes
Speaker: Matthew Prince
What mechanisms can ensure AI companies provide fair attribution and compensation to content creators whose work they use for training?
Current AI systems often ignore robots.txt and don’t provide attribution, raising questions about fair compensation for creators
Speaker: Audience member
How can India scale venture capital investments to match the level of Y Combinator and Andreessen Horowitz for AI startups?
India needs stronger funding ecosystems to support its growing AI startup community and compete globally
Speaker: Audience member
What technical approaches beyond current transformer architectures could lead to more efficient AI systems?
Current LLMs are described as inefficient compute machines, suggesting need for research into alternative architectures
Speaker: Rajan Anandan
How can AI systems be made more explainable and deterministic to increase trustworthiness?
As AI systems become more powerful and autonomous, understanding their decision-making processes becomes crucial for trust and adoption
Speaker: Audience member
What data collection strategies should India pursue for robotics and physical intelligence applications?
India has competitive advantages in data collection that could be leveraged for next-generation AI applications beyond language models
Speaker: Rajan Anandan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion Inclusion Innovation & the Future of AI

Session at a glance: Summary, keypoints, and speakers overview

Summary

This panel discussion focused on navigating the trade-offs between excellence and inclusion in AI governance, examining how policymakers can foster innovation while ensuring equitable access and benefits. The conversation explored the tension between self-regulation and innovation-first approaches in AI policy development.


Dean W. Ball argued that existing legal frameworks should be the starting point for AI governance, with the burden of proof on those advocating for new regulations to demonstrate why current laws are insufficient. He emphasized that AI is already regulated through various existing mechanisms and suggested that proactive governance should focus primarily on potential catastrophic risks or “tail events.” Ball also advocated for treating AI compute infrastructure as critical infrastructure, similar to ports or railroads.


Gabriela Ramos challenged the assumption that government intervention creates market distortions, pointing out that foundational AI technologies were built on government-funded research. She argued that current AI markets show natural monopoly tendencies that require government intervention to prevent concentration of power among a few players. Ramos emphasized that inclusion and competitiveness are not opposing forces but complementary elements necessary for sustainable economic growth.


Ivana Bartoletti discussed the practical implementation of AI governance in organizations, describing it as a strategic capability that goes beyond risk management and compliance. She stressed the importance of embedding privacy, security, and fairness into AI systems while bringing employees along in the transformation process. The panelists identified significant blind spots in current AI discourse, including underestimating the transformative potential of frontier models, insufficient investment in education system upgrades, and the lack of global alignment on ethical red lines for AI development. The discussion concluded that effective AI governance requires a comprehensive approach combining infrastructure development, competitive dynamics, and institutional evolution.


Keypoints

Major Discussion Points:

AI Governance Framework and Regulatory Approach: The panel debated whether to rely on existing legal frameworks versus creating new AI-specific regulations, with Dean advocating for applying current laws first and only creating new regulations when clear gaps are demonstrated, while others emphasized the need for proactive governance.


Government’s Role in AI Innovation and Market Dynamics: Discussion centered on balancing government intervention with private sector innovation, addressing market concentration concerns, and the government’s historical role in foundational AI research (like DARPA and the internet), while preventing market distortions and monopolistic tendencies.


Practical AI Governance Implementation: Ivana detailed how organizations must move beyond policy statements to measurable accountability through comprehensive governance frameworks that include risk management, employee engagement, technical safeguards, and strategic value creation rather than just compliance.


Infrastructure and Access as Critical Components: The conversation addressed treating AI compute infrastructure as critical national infrastructure, with Dean highlighting U.S. commitments to subsidize AI data centers in the global south, and the broader challenges of ensuring equitable access to AI capabilities.


Education and Skills Development: Gabriela emphasized education as a major blind spot, calling for massive upgrades to educational systems and teacher training to prepare people for an AI-driven future, while the moderator noted India’s early adoption of AI as a school subject.


Overall Purpose:

The discussion aimed to explore the complex trade-offs between achieving AI excellence and ensuring AI inclusion, examining how policymakers can navigate regulatory frameworks, market dynamics, and implementation strategies to make AI beneficial for broader populations while maintaining national competitiveness.


Overall Tone:

The discussion maintained a constructive and collaborative tone throughout, with panelists building on each other’s points rather than engaging in adversarial debate. The conversation was forward-looking and solution-oriented, with participants sharing practical experiences and policy recommendations. The tone remained optimistic about AI’s potential while acknowledging serious challenges, and there was a notable spirit of international cooperation and shared responsibility for addressing global AI governance challenges.


Speakers

Speakers from the provided list:


Moderator: Role as discussion facilitator and host of the panel session


Dean W. Ball: Works at the frontier of AI policy through the Foundation for American Innovation; formerly worked in the Trump administration at the White House Office of Science and Technology Policy; helped shape the administration’s AI action plan and AI export program


Gabriela Ramos: Economist; co-chairing the Taskforce on Inequality-related Financial Disclosures; expertise in economic policy and inequality issues


Ivana Bartoletti: Global AI governance strategy professional at Wipro; specializes in translating AI policy into practice; expertise in AI governance, privacy, and responsible AI frameworks; recently published on agentic AI design for trust


Additional speakers:


– No additional speakers were identified beyond those in the provided speaker names list


Full session report: Comprehensive analysis and detailed insights

This panel discussion examined the complex relationship between achieving AI excellence and ensuring AI inclusion, bringing together diverse perspectives on regulatory frameworks, market dynamics, and implementation strategies. The conversation revealed fundamental tensions between different approaches to AI governance and the role of government intervention in emerging AI markets.


Regulatory Philosophy and Governance Frameworks

Dean W. Ball presented a distinctive perspective on AI regulation, arguing that existing legal frameworks should serve as the foundation for AI governance rather than assuming new regulations are automatically necessary. Ball emphasized that “the burden of proof should be on the person who wants the regulation to show this is why existing law doesn’t work.” He noted that AI is already regulated through various existing mechanisms, including liability doctrine and product regulations, and highlighted the United States’ common law tradition as a robust foundation for addressing AI-related issues.


However, Ball acknowledged that proactive governance may be necessary for addressing “tail events” – low-probability, high-impact scenarios. He mentioned writing “supportively about transparency laws” while maintaining his general preference for existing legal frameworks over new regulatory approaches.


Gabriela Ramos offered a contrasting perspective that challenged the binary framing of government intervention versus market freedom. Drawing on historical examples, she highlighted how foundational AI technologies emerged from government-funded research, particularly citing DARPA’s role in creating the internet. This analysis reframed government intervention from a potential impediment to innovation to a proven catalyst for technological advancement.


Ramos argued that current AI markets exhibit characteristics that require government intervention to prevent harmful concentration of power. She described AI technologies as functioning like “natural monopolies,” where early developers gain overwhelming market advantages, leading to oligopolistic market structures.


Market Dynamics and Access Challenges

Ramos introduced the concept of a “broken diffusion machine” in AI innovation, arguing that when market concentration occurs at the top levels of AI development, the traditional mechanism by which innovations create widespread benefits becomes dysfunctional. She emphasized that a few entities controlling compute capacity, talent acquisition, and financial resources prevents broader distribution of AI benefits.


Ball focused on the importance of frontier AI models, warning against dismissing advanced capabilities in favor of “good enough” alternatives. He argued that frontier AI represents the development of systems that will be “smarter than humans at all cognitive labour,” opening possibilities for applications and “concepts that you will invent” that don’t yet exist. Ball noted that the US plans to spend “a trillion dollars this year” on AI development, highlighting the scale of investment in advanced capabilities.


The discussion revealed different theories about how AI benefits should diffuse through society, with Ball emphasizing access to cutting-edge capabilities and Ramos focusing on addressing market concentration to enable broader participation.


Organizational Implementation and Governance

Ivana Bartoletti, from Wipro, provided insights into translating AI governance principles into organizational practice. She described how companies initially scrambled to address employee access to AI tools following the widespread adoption of generative AI, leading to the establishment of governance boards and ethics committees.


Bartoletti emphasized a shift from traditional risk management approaches to “AI for good,” focusing on engineering fairness and inclusivity directly into systems rather than treating these as afterthoughts. She stressed the importance of bringing employees along in AI transformation, leveraging their expertise to develop practical use cases and ensuring workforce preparation for changing work patterns.


She also highlighted the need for design approaches that maintain human agency while enabling the benefits of autonomous systems, particularly as AI systems become more sophisticated and capable of independent decision-making.


Infrastructure and Global Considerations

Ball advocated for treating AI data centers as critical infrastructure comparable to ports or railways. He highlighted the US government’s stated policy to “subsidize the development of AI data centers in the global south,” representing a significant shift towards international cooperation in AI infrastructure development.


The moderator framed AI inclusion as extending beyond equitable representation in datasets to encompass access to compute resources, technical standards, supportive policy frameworks, and clear regulatory guidance. This comprehensive view recognizes that meaningful participation in the AI economy requires addressing multiple layers of access and capability.


Education and Capacity Building

Ramos identified education reform as a critical blind spot in current AI discourse, highlighting a fundamental mismatch between the transformative AI future being developed and educational institutions’ preparedness. She called for massive upgrades to educational pedagogy and teacher training, using the example of choosing neighborhoods based on good schools to illustrate how existing inequalities could be amplified in an AI-driven economy.


The moderator noted India’s early adoption of AI as a school subject in 2019, demonstrating how national education policies can proactively address AI readiness. This example illustrated the possibility of systematic educational reform while acknowledging the global nature of the challenge.


Global Coordination and Ethical Boundaries

Bartoletti raised questions about global coordination in AI governance, particularly regarding the establishment of ethical boundaries for AI development. She noted the absence of international alignment on what AI applications should never be pursued, regardless of technical feasibility.


The discussion touched on the need for international cooperation that addresses fundamental questions about values and principles guiding AI development, including ensuring that AI systems respect diverse languages, dialects, and cultural norms rather than imposing homogeneous approaches.


Key Tensions and Implications

The panel revealed several unresolved tensions that will likely shape future AI governance debates. The fundamental disagreement between Ball’s preference for existing legal frameworks and Ramos’s call for comprehensive government intervention reflects deeper philosophical differences about the appropriate role of government in emerging technology markets.


The tension between Ball’s emphasis on frontier AI capabilities and Ramos’s focus on addressing market concentration represents different theories about how technological benefits diffuse through society. These competing perspectives have significant implications for policy development.


The conversation highlighted the challenge of balancing national competitiveness with international cooperation, while speakers agreed on the importance of global participation in AI development. The panel’s emphasis on comprehensive governance approaches, government investment in infrastructure and research, and education reform suggests potential foundations for effective AI governance frameworks that move beyond simple regulatory approaches to encompass broader ecosystem development.


Session transcript: Complete transcript of the session
Moderator

And it’s such a pleasure to be here with such lovely panelists and an audience who’s possibly going to skip some of the lunchtime to join us today in our discussions. Let me get started by really talking about, you know, we are towards the end of the week. It’s been a fantastic week, lots of conversations. And one thing which I reflect back on from most of the conversations has been what is the most defining question of our time, which is who all is artificial intelligence really benefiting, and with what rules? If I look at it, AI as enterprise infrastructure, AI as public sector capability, AI even as geopolitical leverage is what we’ve seen across all these days. But more importantly, AI has become a part and parcel of our daily lives.

It stretches from everything from making our work life easier to making sure that we get our entertainment as and when and how we require it. And more importantly, from healthcare to hiring to anything you can possibly imagine. When we really focus on inclusion in AI, one thing which has kind of stayed as a thought for the last five days is that inclusion in AI is way beyond equitable representation in data sets. It’s, you know, it’s everything. It’s about access to compute. It’s about standards. It’s about having the right policy framework, which encourages everyone, everywhere. And more importantly, it’s also getting clarity on regulations, which are there across countries, to see how it can really be beneficial.

Now, to take the discussion ahead, today’s conversation is going to be really about trade-offs. Excellence and inclusion. It’s been interesting on how to navigate both these terminologies whenever you think of any policy or a framework. So I’m going to start with my first question to Dean. So Dean, you know, you’ve been working at the frontier of AI policy. You’ve been at the institutional design through the Foundation for American Innovation. There is a lot of growing debate between self-regulation and innovation-first approaches. Where should policymakers really draw the line without really undermining national competitiveness?

Dean W. Ball

So, first of all, thanks for being here. And thank you for having me. It’s an honor to be here. The way I think about this is that, you know, we will govern AI through a very large intersecting web of different things, right? It’s not just going to be one day one bill is going to get passed and that’s going to be the AI bill and then AI is regulated, right? AI is currently regulated today. It’s regulated by many different things. It’s regulated in the United States by things like liability doctrine and a lot of existing product regulations and things like that. So I think step number one for government is let’s take the existing bodies of law, you know, many of which, just as in India and the United States, we’re quite proud of.

Many countries, you know, are very proud of their regulatory and legal traditions. We have a common law tradition in the United States that we are proud of. So let’s take those things and let’s figure out how to apply them to AI. And then, you know, the companies, I think, thus far, the major AI labs have been, I think, responsible stewards when it comes to the major risks. Now, the area where you might need proactive governance first, at least in my view, is really this domain of tail events: potential events that could have very serious consequences but that are relatively unlikely. So, you know, a pandemic is an example of a tail event.

And I think AI might have some tail, you know, sort of catastrophic type risks associated with it. And so this is an area where some proactive governance, I think, is needed. And I’ve written supportively about transparency laws in the United States along those lines. So I think that’s where when we have a clear and demonstrated threat model and we have a, you know, clear evidence that existing law is not sufficient. I think one area, one aspect of AI governance that I often push back on and that I often dispute is there’s this kind of assumption baked in whenever we talk about AI regulation that the existing law is insufficient and that the current status quo is that AI is unregulated in some way.

And I think we should actually have the opposite presumption. We should presume that existing law is sufficient and that there is some sort of good solution. And then, yeah, the burden of proof should be on the person who wants the regulation to show why existing law doesn’t work.

Moderator

Thank you, Dean. That’s very interesting, that we go with an assumption. And with that, Gabriela, let me move on to you. So, how can governments foster open innovation, following on from what Dean said, while minimizing the risk of market distortions?

Gabriela Ramos

Well, I think that it’s a very nice segue, because I completely agree with Dean that there is a very broad portfolio of policy interventions that has not only to do with regulation. Regulation is looking at the way the technology is developed. But we need to think about this as an ecosystem that needs to be nurtured, that needs investments, that needs incentives, that needs institutions and that needs infrastructure. And therefore it’s not only the technological conversation about what do we do with AI, but what kind of an economy we want that is really productive, that delivers for people with AI, and for that you need government intervention. And let me tell you, what is very interesting is we usually tend to think that the private sector is an innovative force and the government is a brake.

In the U.S. that was not the case. The U.S. was the place where the massive investment in innovation, in DARPA, in the creation of the Internet, all the foundational things that we are seeing now, were financed at some point by basic research that was paid for by the government of the U.S. And many countries fill that space, and that’s why it’s so important that we invest in research, because it cannot be that the research is being done only by the private sector. And then it’s also true that when the government gets into the research, it’s open research, because it needs to bring everybody around the table, and then it needs to be shared, which is not always the case when you have private sector innovation. So I will also contest this way of framing the issues in terms of the government only creating market distortions, because at the end it’s about how the government can be effective at addressing the market distortions that we see emerge many times. In this case, I like to see the AI technologies as natural monopolies: somebody invented something, somebody laid the whole network to operate it, and then it was a monopoly, and now it’s an oligopoly; at the end it’s very concentrated. So there are market distortions now that need to be addressed by government policies. Again, there is a wide range of things that needs to be done to ensure that the main distortion that can occur nationally and globally, which is that this is a story of a lucky few, is prevented.

Moderator

Thanks, Gabriela. And you know, it’s interesting you mention that because, at least in India, whenever we speak about public-private partnerships, it’s all about how we are moving from a culture of competition to cooperation, to really working together so that the markets stay healthy. With that, we move over to you, Amanda. So, Amanda, you know… Eva. Sorry, my mistake. So, Eva, let’s talk about the global AI governance strategy at Wipro, right? Many organizations are developing responsible AI frameworks. How do we move beyond policy statements, through measurable accountability, specifically when we have to do that at scale?

Ivana Bartoletti

Thank you very much, and it’s great to be here, and thanks to all of you for joining. So, as I say often, I have the best job in the world, which is basically to translate a lot of the things that have been discussed over the last few days into practice. So that basically means: we’ve heard democratisation, we’ve heard inclusion, we’ve heard how it’s important that AI is inclusive. And by inclusivity, it’s not just about access, as was said, but it’s also making sure that many get the opportunity to participate in the design of this technology, but also in the decisions around what we are producing and who is going to be benefiting from that.

I think in a lot of our work, in a lot of organisations, what happened over the last few years when generative AI came about was something quite dramatic, if you think about it. Because before then, AI was very much for engineers, for scientists to work with, for people who knew about machine learning, who knew about AI. Then what happened a few years ago is that generative AI came and everybody got access to it. And do you remember how companies started to scramble with who’s got access? Do we let our employees access these systems? Do we create our own private instance? How do we navigate the fact that we want people to play with these tools against the fact that we have to be safe and secure as an enterprise? And then things evolved, and a lot of organizations, if you know how the debate around governance started, began to set up governance boards and ethics boards and all of it. And I think we realized at some point, and I took on the challenge of AI governance from a privacy standpoint, and many people in organizations took on AI governance from a privacy standpoint, not only because a lot of AI harms are actually privacy harms, but also because privacy professionals knew about risk management. And then we realized that actually governance of AI is much more than that. It’s much more than risk management, it’s much more than compliance. We realized, and I think this summit showed that really clearly, that AI governance is really about a strategic capability that an organization must have to create long-term value. What does that mean?

It means that you have to do two things. First, you have to look at what you want to deploy or develop, and that is where you need to embed privacy, security, legal protections, resilience into the products that you’re working on. That is not an easy one. It’s not an easy one. It requires knowledge. It requires investment in privacy-enhancing and security-enhancing technologies. It requires what, for example, India is promoting, which is a techno-legal approach. It’s not just about the law but it’s also about how you translate the law into technical tools. So you have to do all of that, and then you have to look at what happens once the product is in production.

So how do you monitor it once it’s out in production? How do you make sure that if, for example, you’re using AI to hire and fire, as sometimes happens, you have tools to pull the plug if something goes wrong? Now we are into the realm of agentic AI. If you’re interested in this, I’ve just published an article on the World Economic Forum on a subject I’m really fascinated by, which is: what is the design for trust in agentic AI? So, for example, governance means that you do design these agents, but you give people, according to security standards, but also according to their own preferences, the right to intervene when they don’t want the machine to make a decision in an autonomous way.

And then you make sure that you protect from cascading hallucinations, from model drifting, all of that. So governance, to me, is very much about the capability that organizations have to think laterally about AI, which means impact, design choices, the trust stack that enables people and employees to trust the product. And one element which to me is very important is to make sure that companies bring their employees with them. That is a very crucial part of governance, because work is going to change. People are going to change the way that they work. And it’s important: the people who are going to know best how to use AI are the people working in a company.

This is why I’ve seen successful companies developing a lot of use cases based on their activity and asking their employees, how should we innovate this? This is a fundamental part of governance, I believe, because it brings people with us. So it’s a very encompassing approach to governance. I think we are evolving and changing how we see it, but certainly I think it’s become very clear over the last few years, and especially with things like this summit talking about impact, that it’s way beyond compliance.

Moderator

Thanks, Eva. And that makes me curious enough to ask you a very quick question. So do you still feel we’re underestimating the risks? Because you spoke about AI trust.

Ivana Bartoletti

No, I think, let’s be honest here. Over the last few years we’ve seen amazing benefits coming from AI, right? Beautiful stuff, fantastic. Every day there is a piece of news that makes us hopeful that we can improve our well-being and we can feel better in the world we live in. But at the same time we’ve seen the risks too, and we’ve got to be honest that looking at the success without looking at the risks is very naive. We can’t, because we’re not going to be able to deploy AI successfully if we don’t look at the risks. We’ve seen disinformation. We see deepfakes. We have seen AI software-izing existing inequalities into decision-making around people’s futures, rights, and livelihoods.

That’s not okay. So we’re not underestimating the risks. But we can’t approach governance from a risk management control. We have to shift our approach and do AI for good and change the way that we look into this. So we have to engineer fairness into the systems that we create. We have to engineer inclusivity into the systems that we create. And, of course, we have to manage the risk. But the mindset has to really shift.

Moderator

Thanks. And that gets me back to Dean. So my question to you is, inclusion at the national level often intersects with compute access and research infrastructure. You spoke about public-private partnerships, spoke about trust in emerging technologies like artificial intelligence, and maybe even quantum going ahead. Should governments treat compute as critical infrastructure?

Dean W. Ball

Yeah, I think they should. The data centers that power frontier AI systems are going to be, you know, like ports or railroads. They’re going to be critical infrastructure of the future. I believe that’s true. Prior to my current role, I worked in the Trump administration in the White House Office of Science and Technology Policy. And in particular, I was one of the people that shaped the administration’s AI action plan and AI export program, which my former boss, Michael Kratsios, was just here talking about and announcing some next steps on. I was really excited to see that. One of the key messages of that, and I feel this is maybe a communications failure on our part, is this.

You know, the United States government has publicly said, the president has come out and made it a flagship of his AI policy, that we intend to subsidize the development of AI data centers in the global south. That is a policy of the United States under this administration. And we don’t have an interest in exercising control over the technology in the way that I think the prior administration did in some ways. We don’t want to control other countries’ use in the same way that the prior administration did. So I do think you should think of it as critical infrastructure. And I think that you should think of the United States as a partner in the construction of that.

And I think that owning infrastructure of this kind is an asset that states and regions can use for years to come.

Moderator

Thank you. So, you know, it’s been interesting, because whenever we speak about AI at scale, when we talk about taking AI to every single person across the planet, there are always three vectors we look at: mindset, skill sets, and tool sets. You just spoke about tool sets, which is extremely relevant. And that takes my question to you, Gabriela. When we talk about mindset, should inclusion be framed primarily as an ethical imperative, or a competitive strategy, or even both? What’s your take on that?

Gabriela Ramos

I really like the way this question is framed, my dear, because I’m sure that people think that going for inclusive policies might hinder competitiveness. Who from the public believes that? Can we have a show of hands? That being inclusive might hinder competitiveness? That investing in competitiveness might go against inclusiveness? I’m an economist, and I think in this area we really need to think. We need to think about economic inclusiveness. Because if we just think about social policies that might be needed when some people are left behind, and therefore we need to invest in communities or in infrastructure or in people, kids that are in deep need of education, those things are very important.

But more importantly, we need to consider how we foster market economies that are inclusive, and that’s the core issue here. And I can tell you, because I have been looking at the question of inequalities. Actually, I’m now co-chairing the task force on inequalities financial disclosure. And what we have seen is that when you have market concentration, productivity flattens. And what happened here, and we saw it in the OECD report that we did some years ago, is that when you have concentration at the top, as the one we are seeing now, with companies having the whole concentration of compute capacities, the capacity to sort out and attract the skills, the capacity of having the financial means to invest.

What happens is that the diffusion machine, which is this very important element that trickles down the innovative developments into a broader set of users and benefits, is broken. And therefore we need to see how we ensure that the diffusion is faster. And to do that, of course, I agree with Ivana: the question is how do we ensure that we create the capacities of people and economies that are lagging behind. But we also need to see how we diminish market dominance. And I know that there are many other considerations: geopolitics, competition matters, trade secrets matter, all these things matter. But for me, competitiveness and inclusiveness have to do with creating the highest well-being for people, and that’s the outcome, and that’s where ethics, competitiveness, all of these narratives collide. Because at the end, what are we looking at?

That we have well-distributed wealth. In many countries, 70% of wealth, 60% of wealth, 50% of wealth is owned by just the top 10% income groups, and that’s not sustainable. I get to Europe and to Mexico, and I was asking where do I put my children, because I need good schools, and they told me, choose the right neighborhood. That’s not possible. And therefore I feel that there needs to be this set of policies. And who is there to ensure a level playing field? Who is the one that needs to be using the tax systems, or the incentive systems, or the investment systems, to ensure that people are not left behind, or to address the anti-competitive practices? I pay my taxes so that the governments deliver on their promises. So I think this is super important.


And I feel, for example, that what India has managed, this question of the digital registry: I was with Mr. Murthy when he presented his plan so many years ago. I never could believe that you were going to be doing registry for 100 million people every month. It was just like, you’re crazy, that will never happen, who finances the government? And now you have all India with the digital identification, it’s just amazing, and then you go with the financial side of it.

Moderator

Thanks, Gabriela. With this vision that the world of tomorrow with AI would certainly, and hopefully, be a better world than what it is today, I have a common question for all the panelists. And the question is: what do you see as the most significant blind spot in recent AI discourse, keeping in mind all the conversations you’ve possibly had this week and even prior to that? And maybe what we can do is, Dean, we can go with you first.

Dean W. Ball

Yeah. So in terms of blind spots, maybe the most important thing I could possibly say here: one thing I’ve heard repeated a lot in the conversations I’ve had this week is this notion that the frontier models, the best AI systems, are not necessary. You can find good enough models that, you know, are cheaper to run. And in some cases, I think that will be true. But I would point to the very significant blind spot there, that, you know, I believe that what we are doing is building systems that are going to be smarter than humans at all cognitive labor. That is a very serious goal. The United States is currently spending, like, it’s not a joke, right?

That’s not a joke. That’s not hype. That’s not crypto. We’re spending a trillion dollars this year on that. That’s the plan. We’re going to do it. It’s going to happen, right? And so the capabilities of those systems, and the way that that will change the way the world works, I think ambitious people will be able to do an unbelievably broad range of things. And I think this could really be an incredible opportunity for countries in the global south, and really everyone in the world, to participate in building the future together. And this is not about rejecting frontier models out of some sort of belief that that preserves sovereignty, because there are existing use cases you can think of that can be done with cheaper models. I would think of frontier AI as being useful for stuff that we don’t even have words for today, concepts that you will invent, that we will all invent together. That’s the future that we’re building. And I think that’s an easy thing to miss, and I think missing it is basically missing the ball game.

Moderator

That’s very interesting. Thank you for sharing that. Gabriela, what about you?

Gabriela Ramos

I would say education, education, education. That really takes away my sleep. Why aren’t we massively upgrading education pedagogy? Why aren’t we changing the way we do school? Why don’t we invest in our teachers, for them to understand how these technologies can help improve student outcomes and at the same time make their life easier? I see a lot of teachers complaining about all the administrative work that they need to do, which doesn’t leave any space for them to invest in quality changes with their students. And I’m not seeing that happening. And we need that pipeline. If the future that Dean is projecting is going to arrive, we need people to be very well equipped.

And where do we get that equipment? I’m fine to invest in the workers in the market. That’s very important. And I think that we need to upgrade the skills of the people in the market too. But the school system needs to be upgraded. And actually, I haven’t seen it really happening anywhere. This is a challenge. This is a challenge for North, South, East, West, and I invite all to confront this challenge.

Moderator

Okay, I love the fact that you brought education and skilling into it, because building AI readiness has become so essential to ensure national competitiveness, no matter which market we are talking about. And just to share an instance, India was one of the few countries where AI was introduced as a school subject way back in 2019, even before the COVID era. So students could learn AI as they would possibly learn biology or physics. But yes, that’s a major challenge which we’re trying to work on. Ivana, your take.

Ivana Bartoletti

I was very impressed yesterday when your Prime Minister spoke, and I was very impressed by one thing that he said: develop here and serve humanity. That, to me, made a point that has been very strong here, and it touches on something that has been missing so far. He said something very important: that AI needs to be used for inclusion, for economic well-being. Inclusion, as we said, as access, but also as participation for many; as reduction of the gap between areas of society and geographies across India; inclusion also as creating models that respect your languages, your dialects, and the ethical norms that bind this country together, because the AI that we have now is often not reflective of the diversity of the world. Following on from this, one thing that has been good has been to see many leaders coming from all over the world.

One thing that I’ve always supported, I’ve always thought this, is how we haven’t aligned on what the red lines of AI are. Are there things that we as a society, or as a world, are never going to do, or don’t want to do, regardless? We’ve seen appeals coming over recent years. We’ve had massive debates around the ethics of AI, whether in the US, whether in Europe, everyone in different ways. But when it comes to something which is far more than technology, because AI is far more than technology, AI’s power is geopolitics, is earth, cables, sea, so much.

I think one of the things that we are probably overlooking is how, and whether, we as a world will be able to come together, have some red lines, and say, well, actually, we’re not going to go…

Moderator

Thank you. So I just want to take a moment to thank the panelists, and maybe I can ask Dean to sum it up.

Dean W. Ball

Well, I think there are a lot of different things. Unfortunately, the subject of AI governance is so difficult because it’s so capacious, right? It’s such an enormous topic. But look, I think we have a very real infrastructure development challenge ahead of us. We have a huge complex of new types of institutions and old institutions that are going to change and evolve in various ways, and there’s all sorts of interlocking work to do on things like that which are going to be critical for the governance of AI, both for everyday types of harms and also for catastrophic things that feel futuristic but that I think are going to be real parts of our lives in the pretty near future.

And then, you know, another thing I would double click on is this need for competition, which I agree with. One of the things that I think is exciting about AI is that the price per token of models drops quite quickly, so there are a lot of good competitive dynamics here. There are also centralizing tendencies, and working together to figure out how to counter those tendencies is going to be extremely important. The concentration of power in AI, and that issue in the long term, is going to be, I think, one of the most important parts of the political economy of this topic.

So yeah, I think that’s how I see it.

Moderator

Thank you. That’s fantastic. We spoke about trade-offs, we spoke about potential solutions, and we spoke about building AI readiness for national competitiveness. Thank you so much to all the panelists. It was lovely having this conversation.

Gabriela Ramos

Thanks to the moderator. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dean W. Ball
3 arguments · 169 words per minute · 1310 words · 464 seconds
Argument 1
Existing legal frameworks should be presumed sufficient until proven otherwise, with burden of proof on those advocating for new regulations
EXPLANATION
Ball argues that AI is already regulated through existing bodies of law like liability doctrine and product regulations, and that we should apply these existing legal traditions to AI rather than assuming new regulation is needed. He believes the burden of proof should be on those wanting new regulations to demonstrate why existing law is insufficient.
EVIDENCE
References the US common law tradition and existing liability doctrine and product regulations that already apply to AI
MAJOR DISCUSSION POINT
AI governance and regulatory approaches
AGREED WITH
Gabriela Ramos, Ivana Bartoletti
DISAGREED WITH
Gabriela Ramos
Argument 2
Data centers powering frontier AI systems should be treated as critical infrastructure like ports or railroads
EXPLANATION
Ball contends that AI data centers will become essential infrastructure for the future economy, similar to how ports and railroads function as critical infrastructure today. He advocates for treating them with the same level of strategic importance and government support.
EVIDENCE
References his work in the Trump administration on AI policy and mentions the current US policy to subsidize AI data center development in the global south
MAJOR DISCUSSION POINT
Government role in AI infrastructure development
AGREED WITH
Gabriela Ramos
DISAGREED WITH
Gabriela Ramos
Argument 3
Rejecting frontier models in favor of ‘good enough’ cheaper models is a significant blind spot that misses transformative potential
EXPLANATION
Ball warns against dismissing frontier AI models as unnecessary, arguing that the goal is building systems smarter than humans at all cognitive labor. He believes this represents a trillion-dollar investment that will create opportunities for ambitious people globally to participate in building the future.
EVIDENCE
Cites the US spending a trillion dollars this year on developing frontier AI systems and emphasizes this is not hype but a serious, funded effort
MAJOR DISCUSSION POINT
The transformative potential of frontier AI capabilities
DISAGREED WITH
Gabriela Ramos
Gabriela Ramos
5 arguments · 143 words per minute · 1299 words · 544 seconds
Argument 1
AI governance requires a broad portfolio of policy interventions beyond just regulations, including investments, incentives, institutions and infrastructure
EXPLANATION
Ramos argues that AI governance is not just about regulations but requires a comprehensive ecosystem approach including government investments, incentives, institutions, and infrastructure. She emphasizes the need to think about what kind of productive economy we want with AI that delivers for people.
EVIDENCE
References historical US government investment in DARPA and the creation of the Internet as examples of successful government-funded foundational research
MAJOR DISCUSSION POINT
Comprehensive approach to AI governance beyond regulation
AGREED WITH
Dean W. Ball, Ivana Bartoletti
DISAGREED WITH
Dean W. Ball
Argument 2
Government has historically been crucial for foundational AI research and should continue investing in open research that benefits everyone
EXPLANATION
Ramos challenges the notion that private sector is innovative while government creates market distortions, pointing to historical examples where government investment enabled foundational technologies. She argues government research tends to be more open and shared compared to private sector innovation.
EVIDENCE
Cites US government investment in DARPA and Internet creation as foundational to current AI capabilities
MAJOR DISCUSSION POINT
Government’s role in AI innovation and research
AGREED WITH
Dean W. Ball
Argument 3
Current AI market shows concentration resembling natural monopolies that create distortions requiring government intervention
EXPLANATION
Ramos describes AI technologies as natural monopolies where someone invents something, lays the network infrastructure, and creates a monopoly that becomes an oligopoly. She argues this concentration creates market distortions that need government policies to address, particularly to prevent AI from becoming a story of only a lucky few.
EVIDENCE
References OECD research showing that market concentration leads to flattened productivity and broken diffusion mechanisms
MAJOR DISCUSSION POINT
Market concentration and the need for government intervention
DISAGREED WITH
Dean W. Ball
Argument 4
Inclusion and competitiveness are not opposing forces – market concentration actually flattens productivity and breaks the innovation diffusion machine
EXPLANATION
Ramos argues that inclusive policies don’t hinder competitiveness but rather that market concentration at the top breaks the diffusion machine that spreads innovative developments to broader users. She contends that when companies concentrate compute capacity, skills, and financial resources, it prevents broader economic benefits.
EVIDENCE
Cites OECD research on inequalities and productivity effects of market concentration, and references wealth distribution statistics showing 60-70% of wealth owned by top 10% income groups
MAJOR DISCUSSION POINT
The relationship between inclusion and competitiveness in AI
AGREED WITH
Ivana Bartoletti, Moderator
DISAGREED WITH
Dean W. Ball
Argument 5
Massive upgrading of education pedagogy and teacher training for AI integration is critically needed but not happening anywhere
EXPLANATION
Ramos identifies education as a critical blind spot, arguing that there’s an urgent need to upgrade education pedagogy and train teachers to understand how AI technologies can improve student outcomes while making teachers’ lives easier. She sees this as essential for building the pipeline of people equipped for an AI-driven future.
EVIDENCE
References teachers’ complaints about administrative work that prevents quality engagement with students, and notes this challenge exists across North, South, East, and West globally
MAJOR DISCUSSION POINT
Education system transformation for AI readiness
AGREED WITH
Ivana Bartoletti, Moderator
Ivana Bartoletti
5 arguments · 143 words per minute · 1426 words · 594 seconds
Argument 1
AI governance is a strategic capability for creating long-term value, requiring embedding of privacy, security and legal protections into products
EXPLANATION
Bartoletti argues that AI governance has evolved beyond risk management and compliance to become a strategic organizational capability for creating long-term value. This requires embedding privacy, security, legal protections, and resilience into AI products during development, not just managing risks after deployment.
EVIDENCE
References the scramble organizations faced when generative AI became widely accessible, and mentions her recent World Economic Forum article on design for trust in agentic AI
MAJOR DISCUSSION POINT
AI governance as strategic capability beyond compliance
AGREED WITH
Dean W. Ball, Gabriela Ramos
Argument 2
Successful AI governance must bring employees along in the transformation and leverage their expertise for developing use cases
EXPLANATION
Bartoletti emphasizes that employees who will work with AI daily are best positioned to know how to use it effectively. She argues that successful companies develop use cases based on their activities and ask employees how to innovate, making employee engagement a fundamental part of governance.
EVIDENCE
Observes that successful companies she’s seen develop many use cases by asking their employees how they should innovate with AI
MAJOR DISCUSSION POINT
Employee engagement in AI transformation
AGREED WITH
Gabriela Ramos, Moderator
Argument 3
AI governance must shift from pure risk management to engineering fairness and inclusivity into systems while managing risks
EXPLANATION
Bartoletti acknowledges that AI has brought both amazing benefits and serious risks including disinformation, deepfakes, and amplification of existing inequalities. She argues that governance cannot approach this from a pure risk management perspective but must actively engineer fairness and inclusivity into systems.
EVIDENCE
Cites examples of AI risks including disinformation, deepfakes, and AI amplifying existing inequalities in decision-making around people’s rights and livelihoods
MAJOR DISCUSSION POINT
Balancing AI benefits and risks through inclusive design
Argument 4
AI development should focus on ‘develop here and serve humanity’ with models that respect diverse languages, dialects and ethical norms
EXPLANATION
Bartoletti was impressed by the Indian Prime Minister’s vision of developing AI locally while serving humanity globally. She emphasizes the need for AI models that reflect the diversity of the world and respect different languages, dialects, and ethical norms rather than the current AI that often doesn’t reflect global diversity.
EVIDENCE
References the Indian Prime Minister’s statement about ‘develop here and serve humanity’ and notes that current AI often doesn’t reflect the diversity of the world
MAJOR DISCUSSION POINT
Inclusive AI development that serves humanity
AGREED WITH
Gabriela Ramos, Moderator
Argument 5
The global community needs to establish red lines for AI development and align on what society will never accept regardless of technical capability
EXPLANATION
Bartoletti argues that despite various ethics debates and appeals from different regions, the world hasn’t aligned on fundamental red lines for AI development. She believes there’s a need for global agreement on what society will never accept in AI development, recognizing that AI encompasses far more than just technology.
EVIDENCE
References various ethics debates and appeals that have emerged from the US, Europe, and other regions over recent years
MAJOR DISCUSSION POINT
Need for global alignment on AI ethical boundaries
Moderator
5 arguments · 158 words per minute · 938 words · 355 seconds
Argument 1
AI inclusion extends far beyond equitable representation in datasets to encompass access to compute, standards, policy frameworks, and regulatory clarity
EXPLANATION
The moderator argues that when focusing on inclusion in AI, the scope must be comprehensive and include access to computational resources, establishing proper standards, creating supportive policy frameworks that encourage participation from everyone everywhere, and achieving clarity on regulations across countries. This represents a holistic view of what true AI inclusion requires.
EVIDENCE
References observations from five days of conference discussions and mentions AI’s integration across enterprise infrastructure, public sector capabilities, and geopolitical leverage
MAJOR DISCUSSION POINT
Comprehensive definition of AI inclusion beyond data representation
AGREED WITH
Gabriela Ramos, Ivana Bartoletti
Argument 2
AI has become integral to daily life across work, entertainment, healthcare, hiring, and virtually all aspects of human activity
EXPLANATION
The moderator emphasizes that AI is no longer a futuristic concept but has become embedded in everyday experiences, from making work more efficient to personalizing entertainment delivery and influencing critical decisions in healthcare and employment. This ubiquity makes the question of who benefits from AI and under what rules increasingly urgent.
EVIDENCE
Cites AI’s presence in making work life easier, delivering entertainment on demand, and its role in healthcare and hiring processes
MAJOR DISCUSSION POINT
AI’s pervasive integration into daily life
Argument 3
The central challenge of our time is determining who benefits from AI and establishing appropriate governance rules
EXPLANATION
The moderator frames the fundamental question facing society as understanding who gains advantages from AI development and deployment, and what regulatory and governance frameworks should guide this technology. This reflects concerns about equitable distribution of AI benefits and the need for proper oversight mechanisms.
EVIDENCE
References conversations throughout the week about AI’s impact on enterprise infrastructure, public sector capabilities, and geopolitical leverage
MAJOR DISCUSSION POINT
Core governance challenge of AI benefit distribution
Argument 4
Building AI readiness requires addressing three critical vectors: mindset, skill sets, and tool sets
EXPLANATION
The moderator identifies a framework for achieving AI readiness at scale that encompasses changing attitudes and approaches (mindset), developing necessary capabilities and knowledge (skill sets), and ensuring access to appropriate technologies and infrastructure (tool sets). This comprehensive approach is essential for taking AI to every person across the planet.
EVIDENCE
References the goal of taking AI to every single person across the planet and the need for comprehensive readiness
MAJOR DISCUSSION POINT
Framework for achieving global AI readiness
AGREED WITH
Gabriela Ramos, Ivana Bartoletti
Argument 5
India pioneered AI education by introducing it as a school subject in 2019, demonstrating proactive approach to building AI readiness
EXPLANATION
The moderator highlights India’s forward-thinking approach to AI education by making it a formal school subject before the COVID era, allowing students to learn AI alongside traditional subjects like biology and physics. This represents an early recognition of the importance of AI literacy for national competitiveness and citizen preparedness.
EVIDENCE
Cites India’s introduction of AI as a school subject in 2019, before the COVID pandemic
MAJOR DISCUSSION POINT
Proactive educational approaches to AI readiness
Agreements
Agreement Points
AI governance requires comprehensive approaches beyond just regulation
Speakers: Dean W. Ball, Gabriela Ramos, Ivana Bartoletti
Existing legal frameworks should be presumed sufficient until proven otherwise, with burden of proof on those advocating for new regulations
AI governance requires a broad portfolio of policy interventions beyond just regulations, including investments, incentives, institutions and infrastructure
AI governance is a strategic capability for creating long-term value, requiring embedding of privacy, security and legal protections into products
All speakers agree that AI governance cannot be addressed through regulation alone but requires a multifaceted approach involving existing legal frameworks, strategic investments, institutions, and comprehensive organizational capabilities
Government has a crucial role in AI infrastructure and research development
Speakers: Dean W. Ball, Gabriela Ramos
Data centers powering frontier AI systems should be treated as critical infrastructure like ports or railroads
Government has historically been crucial for foundational AI research and should continue investing in open research that benefits everyone
Both speakers recognize the essential role of government in supporting AI infrastructure development and research, with Ball advocating for treating AI data centers as critical infrastructure and Ramos emphasizing government’s historical success in foundational research
AI must be developed to serve humanity inclusively while respecting diversity
Speakers: Gabriela Ramos, Ivana Bartoletti, Moderator
Inclusion and competitiveness are not opposing forces – market concentration actually flattens productivity and breaks the innovation diffusion machine
AI development should focus on ‘develop here and serve humanity’ with models that respect diverse languages, dialects and ethical norms
AI inclusion extends far beyond equitable representation in datasets to encompass access to compute, standards, policy frameworks, and regulatory clarity
All three agree that AI development must prioritize inclusive approaches that serve humanity broadly, respect cultural diversity, and ensure equitable access rather than concentrating benefits among a few
Education and capacity building are critical for AI readiness
Speakers: Gabriela Ramos, Ivana Bartoletti, Moderator
Massive upgrading of education pedagogy and teacher training for AI integration is critically needed but not happening anywhere
Successful AI governance must bring employees along in the transformation and leverage their expertise for developing use cases
Building AI readiness requires addressing three critical vectors: mindset, skill sets, and tool sets
All speakers emphasize the urgent need for comprehensive education reform and capacity building to prepare people for an AI-driven future, from school systems to workplace transformation
Similar Viewpoints
Both speakers recognize that current AI development creates problematic concentrations of power and that governance must actively address inequalities rather than just managing risks
Speakers: Gabriela Ramos, Ivana Bartoletti
Current AI market shows concentration resembling natural monopolies that create distortions requiring government intervention
AI governance must shift from pure risk management to engineering fairness and inclusivity into systems while managing risks
Both speakers believe in the importance of ambitious AI development and government support for advancing frontier capabilities, though from different perspectives
Speakers: Dean W. Ball, Gabriela Ramos
Rejecting frontier models in favor of ‘good enough’ cheaper models is a significant blind spot that misses transformative potential
Government has historically been crucial for foundational AI research and should continue investing in open research that benefits everyone
Unexpected Consensus
Government’s positive role in AI development
Speakers: Dean W. Ball, Gabriela Ramos
Data centers powering frontier AI systems should be treated as critical infrastructure like ports or railroads
Government has historically been crucial for foundational AI research and should continue investing in open research that benefits everyone
Despite Ball’s generally pro-market stance and emphasis on existing legal frameworks, both he and Ramos agree on the essential role of government in AI infrastructure and research, showing unexpected alignment between different ideological approaches
Need for proactive AI governance beyond pure market solutions
Speakers: Dean W. Ball, Ivana Bartoletti
Existing legal frameworks should be presumed sufficient until proven otherwise, with burden of proof on those advocating for new regulations
AI governance is a strategic capability for creating long-term value, requiring embedding of privacy, security and legal protections into products
Despite Ball’s preference for existing legal frameworks, both speakers acknowledge the need for proactive governance approaches, suggesting convergence between regulatory skepticism and practical governance needs
Overall Assessment

The speakers demonstrated remarkable consensus on key AI governance principles despite coming from different backgrounds and perspectives. Main areas of agreement include: the need for comprehensive governance approaches beyond regulation alone, government’s crucial role in AI infrastructure and research, the importance of inclusive AI development that serves humanity broadly, and the critical need for education and capacity building.

High level of consensus with significant implications for AI policy development. The agreement across diverse perspectives suggests these principles could form the foundation for effective AI governance frameworks that balance innovation, inclusion, and responsible development. The consensus particularly strengthens the case for government investment in AI infrastructure and education while maintaining focus on inclusive development approaches.

Differences
Different Viewpoints
Role of government in AI regulation and market intervention
Speakers: Dean W. Ball, Gabriela Ramos
Existing legal frameworks should be presumed sufficient until proven otherwise, with burden of proof on those advocating for new regulations
AI governance requires a broad portfolio of policy interventions beyond just regulations, including investments, incentives, institutions and infrastructure
Ball advocates for minimal new regulation, preferring existing legal frameworks with burden of proof on those wanting new rules. Ramos argues for comprehensive government intervention including investments, incentives, and institutions, challenging the view that government creates market distortions.
Assessment of current AI market structure and need for intervention
Speakers: Dean W. Ball, Gabriela Ramos
Data centers powering frontier AI systems should be treated as critical infrastructure like ports or railroads
Current AI market shows concentration resembling natural monopolies that create distortions requiring government intervention
While Ball sees AI infrastructure as critical infrastructure that should be supported (mentioning US subsidies for the global south), Ramos views current AI market concentration as problematic monopolistic behavior requiring active government intervention to prevent market distortions.
Importance and accessibility of frontier AI models
Speakers: Dean W. Ball, Gabriela Ramos
Rejecting frontier models in favor of ‘good enough’ cheaper models is a significant blind spot that misses transformative potential
Inclusion and competitiveness are not opposing forces – market concentration actually flattens productivity and breaks the innovation diffusion machine
Ball emphasizes the critical importance of frontier AI models and warns against dismissing them for cheaper alternatives, viewing this as missing transformative opportunities. Ramos focuses on how market concentration around these advanced capabilities breaks the diffusion mechanism that would spread benefits more broadly.
Unexpected Differences
Prioritization of frontier AI models versus broader access
Speakers: Dean W. Ball, Gabriela Ramos
Rejecting frontier models in favor of ‘good enough’ cheaper models is a significant blind spot that misses transformative potential
Current AI market shows concentration resembling natural monopolies that create distortions requiring government intervention
This disagreement is unexpected because both speakers seem to want AI benefits to reach more people globally, but Ball argues this requires access to the most advanced models while Ramos argues the concentration around these models is precisely what prevents broader benefits. They have fundamentally different theories about how AI benefits diffuse through society.
Government’s historical and future role in AI innovation
Speakers: Dean W. Ball, Gabriela Ramos
Existing legal frameworks should be presumed sufficient until proven otherwise, with burden of proof on those advocating for new regulations
Government has historically been crucial for foundational AI research and should continue investing in open research that benefits everyone
Despite Ball’s background in government AI policy and Ramos’s economist perspective, they have opposing views on government’s role. Ball advocates minimal intervention despite working on government AI initiatives, while Ramos argues for extensive government involvement based on historical success stories like DARPA and the Internet.
Overall Assessment

The main disagreements center on the appropriate level of government intervention in AI development and markets, the prioritization of frontier AI capabilities versus broader access, and different theories about how AI benefits should diffuse through society. While all speakers agree AI governance needs comprehensive approaches, they fundamentally disagree on whether existing frameworks are sufficient or whether extensive new interventions are needed.

Moderate to high disagreement on fundamental approaches to AI governance, with significant implications for policy direction. The disagreements reflect deeper philosophical differences about market dynamics, government roles, and pathways to inclusive AI development that could lead to very different policy outcomes.

Partial Agreements
All speakers agree that AI governance requires comprehensive approaches beyond simple regulation, but they disagree on the extent of government intervention needed. Ball prefers existing legal frameworks with minimal new regulation, Ramos wants extensive government investment and intervention, while Bartoletti focuses on organizational strategic capabilities.
Speakers: Dean W. Ball, Gabriela Ramos, Ivana Bartoletti
Data centers powering frontier AI systems should be treated as critical infrastructure like ports or railroads
AI governance requires a broad portfolio of policy interventions beyond just regulations, including investments, incentives, institutions and infrastructure
AI governance is a strategic capability for creating long-term value, requiring embedding of privacy, security and legal protections into products
Both agree that inclusion should be actively engineered into AI systems rather than treated as secondary to competitiveness, but Ramos focuses on market-level interventions to prevent concentration while Bartoletti emphasizes organizational-level design choices to embed fairness.
Speakers: Gabriela Ramos, Ivana Bartoletti
Inclusion and competitiveness are not opposing forces – market concentration actually flattens productivity and breaks the innovation diffusion machine
AI governance must shift from pure risk management to engineering fairness and inclusivity into systems while managing risks
Both recognize education and capacity building as critical for AI readiness, but Ramos emphasizes the urgent need for systemic education reform that isn’t happening globally, while the Moderator presents a framework approach and highlights India’s proactive educational initiatives.
Speakers: Gabriela Ramos, Moderator
Massive upgrading of education pedagogy and teacher training for AI integration is critically needed but not happening anywhere
Building AI readiness requires addressing three critical vectors: mindset, skill sets, and tool sets
Takeaways
Key takeaways
AI governance requires a comprehensive ecosystem approach involving regulations, investments, incentives, institutions, and infrastructure rather than just regulatory frameworks
Existing legal frameworks should be presumed sufficient until proven otherwise, with new regulations needed primarily for tail events and catastrophic risks
Inclusion and competitiveness are complementary rather than opposing forces – market concentration actually reduces productivity and breaks innovation diffusion
AI governance must evolve from pure risk management to strategic capability building that engineers fairness and inclusivity into systems
Data centers powering frontier AI systems should be treated as critical infrastructure similar to ports or railroads
Government investment in AI research and infrastructure is crucial, as historically demonstrated by foundational technologies like the internet
Frontier AI capabilities represent building systems smarter than humans at cognitive labor, opening transformative possibilities beyond current imagination
Massive education system upgrades and teacher training for AI integration are critically needed but not happening globally
AI development should focus on serving humanity while respecting diverse languages, dialects, and ethical norms
Resolutions and action items
Treat compute infrastructure as critical national infrastructure requiring government investment and protection
Shift AI governance approach from risk management to strategic capability building that creates long-term value
Invest heavily in upgrading education pedagogy and teacher training for AI integration
Develop transparency laws for AI systems that pose potential catastrophic risks
Engineer fairness and inclusivity directly into AI systems during development rather than addressing issues post-deployment
Leverage employee expertise to develop practical AI use cases within organizations
Focus on faster innovation diffusion to prevent market concentration from stifling productivity
Unresolved issues
How to establish global red lines for AI development that all nations can agree upon
Specific mechanisms for preventing AI market concentration while maintaining innovation incentives
Concrete strategies for upgrading education systems globally to prepare for AI transformation
How to balance national competitiveness with international cooperation in AI development
Methods for ensuring AI models respect diverse cultural values and languages at scale
Practical implementation of techno-legal approaches that translate legal requirements into technical tools
How to manage the transition for workers whose jobs will be transformed by AI
Specific governance frameworks for agentic AI systems that can make autonomous decisions
Suggested compromises
* Balance proactive governance for catastrophic AI risks while relying on existing legal frameworks for routine applications
* Combine private sector innovation with government investment in open research that benefits everyone
* Develop AI governance that manages risks while engineering positive outcomes rather than purely controlling negative ones
* Create public-private partnerships that move from competition to cooperation while maintaining healthy markets
* Allow frontier AI development while ensuring broader access through infrastructure investment and capability building
* Implement transparency requirements for high-risk AI systems while avoiding over-regulation of beneficial applications
Thought-Provoking Comments
We should presume that existing law is sufficient and that there is some sort of good solution. And then the burden of proof should be on the person who wants the regulation to show this is why existing law doesn’t work.
This comment fundamentally challenges the prevailing assumption in AI governance discussions that new regulations are automatically needed. It flips the burden of proof and suggests a more conservative, evidence-based approach to regulation that builds on existing legal frameworks rather than creating entirely new ones.
This comment set the tone for the entire discussion by establishing a counterintuitive starting point. It influenced subsequent speakers to address the role of government intervention more thoughtfully, with Gabriela directly building on this by discussing the ‘broad portfolio of policy interventions’ beyond just regulation.
Speaker: Dean W. Ball
In the U.S. that was not the case. The U.S. was the place where the massive investment in innovation in DARPA, in the creation of the Internet, all the foundational issues that we are seeing now were financed at some point by basic research that was paid by the government… I like to see the AI technologies as natural monopolies… there are market distortions now that needs to be addressed by government policies
This comment powerfully reframes the government’s role from a potential impediment to innovation to a historical catalyst for it. By citing concrete examples like DARPA and the Internet, and characterizing AI as creating ‘natural monopolies,’ she challenges the binary thinking about public vs. private sector roles.
This comment shifted the discussion from whether government should intervene to how it should intervene effectively. It introduced historical context that grounded the theoretical debate in practical examples, and introduced the critical concept of market concentration as a key challenge requiring government response.
Speaker: Gabriela Ramos
AI governance is really about a strategic capability that an organization must have to create long-term value… governance of AI is much more than risk management, it’s much more than compliance… we realized that AI governance is really about a strategic capability
This comment fundamentally redefines AI governance from a defensive, compliance-focused activity to a proactive, value-creating strategic function. It moves beyond the typical risk-mitigation framing to position governance as essential for business success and innovation.
This redefinition elevated the entire conversation about governance from a necessary burden to a competitive advantage. It influenced the moderator to ask about underestimating risks, leading to a more nuanced discussion about balancing innovation with responsibility.
Speaker: Ivana Bartoletti
When you have market concentration, productivity flattens… the diffusion machine, which is this very important element that trickles down the innovative developments into a broader set of users and benefits, is broken. Now the diffusion machine is broken.
This comment introduces a sophisticated economic concept – the ‘diffusion machine’ – that explains why concentration in AI isn’t just a fairness issue but an economic efficiency problem. It provides a compelling economic rationale for inclusion beyond moral arguments.
This comment provided the economic foundation for arguing that inclusion and competitiveness are complementary rather than competing goals. It influenced the moderator’s follow-up question about framing inclusion as ethical imperative vs. competitive strategy, leading to a deeper exploration of this relationship.
Speaker: Gabriela Ramos
I believe that what we are doing is building systems that are going to be smarter than humans at all cognitive labor… We’re spending a trillion dollars this year on that. That’s the plan… think of frontier AI as being useful for stuff that we don’t even have words for today right concepts that you will invent
This comment cuts through incremental thinking about AI improvements to articulate the truly transformative nature of what’s being built. The concrete figure of a trillion dollars and the concept of capabilities ‘we don’t have words for’ makes the abstract future tangible and urgent.
This comment served as a wake-up call that shifted the discussion from managing current AI applications to preparing for fundamentally different future capabilities. It influenced Gabriela’s subsequent emphasis on education reform as an urgent priority, recognizing that current educational systems are inadequate for this future.
Speaker: Dean W. Ball
Why aren’t we upgrading massively the education pedagogy? Why are we changing the way we go in school? Why don’t we invest in our teachers… If the future that Dean is projecting is going to happen, is going to arrive, we need people to be very well equipped.
This comment identifies a critical gap between the transformative AI future being discussed and the fundamental institutions (education) that need to prepare people for it. It’s particularly insightful because it connects the high-level AI governance discussion to practical, immediate policy needs.
This comment grounded the futuristic AI discussion in immediate, actionable policy needs. It prompted the moderator to share India’s experience with AI education, connecting the global discussion to specific national initiatives and demonstrating how abstract principles translate into concrete policies.
Speaker: Gabriela Ramos
Overall Assessment

These key comments fundamentally shaped the discussion by challenging conventional assumptions and reframing core concepts. Dean’s initial comment about presuming existing law sufficiency set a contrarian tone that encouraged deeper thinking throughout. Gabriela’s historical perspective on government’s role in innovation and her economic analysis of the ‘broken diffusion machine’ provided sophisticated frameworks for understanding the relationship between inclusion and competitiveness. Ivana’s redefinition of governance as strategic capability elevated the conversation beyond compliance thinking. The interplay between Dean’s vision of transformative AI capabilities and Gabriela’s urgent call for educational reform created a productive tension between future possibilities and present institutional needs. Together, these comments moved the discussion from surface-level policy debates to fundamental questions about the role of institutions, the nature of innovation, and the relationship between technological advancement and social equity.

Follow-up Questions
How can existing bodies of law be effectively applied to AI governance?
Dean emphasized the need to figure out how to apply existing legal frameworks like common law traditions to AI, rather than assuming new regulation is needed
Speaker: Dean W. Ball
What constitutes clear and demonstrated threat models for AI that would justify proactive governance?
Dean mentioned the need for proactive governance in areas with clear threat models, particularly around catastrophic risks, but the specific criteria for what constitutes such threats needs clarification
Speaker: Dean W. Ball
How can governments effectively address market distortions created by AI technology concentration?
Gabriela highlighted that AI technologies function as natural monopolies leading to oligopolies, creating market distortions that need government intervention
Speaker: Gabriela Ramos
How can the ‘diffusion machine’ for AI innovation be repaired to ensure broader benefits?
Gabriela identified that market concentration has broken the mechanism by which innovative developments trickle down to broader users, requiring research into solutions
Speaker: Gabriela Ramos
What is the design for trust in agentic AI?
Ivana mentioned publishing an article on this topic and highlighted the need to understand how to design autonomous AI agents that people can trust and intervene with when needed
Speaker: Ivana Bartoletti
How can organizations protect against cascading hallucinations and model drifting in production AI systems?
These were mentioned as critical governance challenges that require further research and development of monitoring and intervention tools
Speaker: Ivana Bartoletti
How should education systems be massively upgraded to incorporate AI pedagogy?
Gabriela identified education reform as a critical blind spot, emphasizing the urgent need to upgrade teaching methods and teacher training for AI integration
Speaker: Gabriela Ramos
What should be the global red lines for AI development and deployment?
Ivana highlighted the lack of global alignment on what AI applications should never be pursued, suggesting need for international coordination on ethical boundaries
Speaker: Ivana Bartoletti
How can competitive dynamics in AI be maintained while preventing harmful concentration of power?
Dean noted both competitive dynamics (dropping token prices) and centralizing tendencies in AI, requiring research into preventing excessive concentration
Speaker: Dean W. Ball
How can frontier AI capabilities be leveraged for applications we don’t yet have words for?
Dean suggested that the most important applications of advanced AI systems may be for concepts and uses not yet invented, requiring exploration of these unknown possibilities
Speaker: Dean W. Ball

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World

Session at a glance: summary, keypoints, and speakers overview

Summary

Renvi opened the session by framing the discussion around “sharing is learning” and the need for AI that is both independent and inclusive on a global scale [1]. She described AI sovereignty as a shift from traditional political and geographic notions toward “digital independence,” emphasizing that achieving it has become a worldwide imperative [4-7]. Comparing regional strategies, Renvi noted that the United States leads in model scale, China centralizes control with rapid scaling, Europe prioritizes trust and compliance, the Middle East builds critical infrastructure hubs, and India is pursuing data, infrastructure, and talent sovereignty to boost its economy [9-14].

She highlighted India’s AI Mission, which has deployed 7,500 datasets and 273 models and offers compute power for less than two cents per minute, illustrating the country’s commitment to affordable, democratized AI [20-23]. She further framed AI as both free and impactful, moving from safe innovation to actionable solutions [27-29]. Renvi argued that such affordability enables broad inclusion of startups, researchers, diverse cultures, languages, disabilities, and gender equality across the AI ecosystem [24-26].

To demonstrate impact, she recounted writing an AI book at age six, which was recognized by the UN Secretary-General and the Indian Ministry of Education, and later translated into 22 Indian languages using the sovereign Sarvam AI model [33-38]. This translation broke language barriers, supported the National Education Policy 2020 by introducing AI concepts from Grade 3, and increased book sales, thereby contributing to India’s GDP [39-43]. She asserted that any nation’s AI strategy will be imperfect, but shared learning and multilateral cooperation through bodies like the GP AI Council can foster responsible and inclusive AI development [45-46].
Renvi emphasized that Generation Alpha, including herself, are not passive recipients but active agents shaping AI’s future, underscoring the generational commitment to the AI revolution [47-52]. She concluded by reaffirming her identity as a member of Generation Alpha, an Indian citizen, and an impact-driven participant, urging the world to recognize the combined power of these three pillars [53-55]. The session then transitioned to the next panel on the next generation of tech leaders, introducing speakers from Glean, Credo AI, and Origin Bio, to be moderated by Ranirudh Suri [58-64].


Keypoints

AI sovereignty and digital independence – Renvi outlines how different regions pursue AI sovereignty (U.S., China, Europe, Middle East) and stresses India’s focus on data, infrastructure, and especially talent sovereignty as a driver for economic growth [3-5][8-13][14].


Democratization and inclusive AI – She highlights the need for affordable compute (under 2 cents per minute) and broad inclusion of startups, researchers, diverse cultures, languages, disabilities, and gender, positioning these as core to India’s AI mission [15-26][22-26].


Concrete impact through Indian AI models – Renvi shares a personal case study: using the sovereign Sarvam AI model to translate her AI-focused book into 22 Indian languages, supporting the National Education Policy, boosting sales, royalty income, and contributing to GDP [30-42][36-41].


Call for global collaboration and youth participation – She urges sharing of learnings, multilateral cooperation via the GP AI Council, and stresses that Generation Alpha will be active agents shaping AI, not just passive recipients [45-48][49-55].


Overall purpose:


The discussion aims to promote India’s vision of a sovereign, affordable, and inclusive AI ecosystem, demonstrate its tangible economic and social impact, and rally both domestic and international stakeholders, including the next generation of technologists, to collaborate on responsible, human-centered AI development.


Overall tone:


Renvi’s speech is upbeat, inspirational, and forward-looking, emphasizing pride in Indian achievements and optimism about youth empowerment. The tone remains consistently positive and motivational throughout, culminating in a hopeful call to action without shifting to criticism or negativity.


Speakers

Renvi


* Areas of expertise: AI sovereignty, inclusive AI, AI impact, digital independence


* Role/Title: Speaker (presenter)


Speaker 2


* Areas of expertise: Event moderation, AI policy discussions


* Role/Title: Moderator / host [S4][S5][S6]


Additional speakers:


Arvind Jain – Founder and Chief Executive Officer, Glean


Navina Singh – Founder and Chief Executive Officer, Credo AI


Malhar Abide – Chief Technology Officer, Origin Bio


Ranirudh Suri – Managing Partner, India Internet Fund


Full session report: comprehensive analysis and detailed insights

Renvi opened the session by asserting that “sharing is learning with the rest of the world” and positioning AI as a tool that must be both independent and globally accessible [1-2]. She framed her talk around three pillars: AI that is independent (sovereignty) [1-3]; AI that is inclusive (democratic and responsible) [15-18]; and AI that is free and impactful [27-31].


She introduced AI sovereignty as a newly emerging global imperative, arguing that the traditional notion of sovereignty based on political and geographic boundaries is being replaced by digital independence. Renvi outlined regional approaches in parallel form: the United States leads large-scale model development and sector-wide innovation; China centralises rapid scaling with strong international governance; Europe builds trust and compliance under its pioneering AI law; the Middle East develops critical infrastructure hubs that underpin the AI boom; and India focuses on data, infrastructure and talent sovereignty as levers for economic growth [9-14].


In the inclusion narrative, she referenced her keynote at the EIFGO Global Summit in Geneva, where she highlighted the shift from an AGI race to responsible, democratized AI inclusion [12-14]. She noted that India’s AI Mission has deployed 7,500 datasets and 273 models, and that compute power is now available for “less than 2 cents per minute”, a price point that makes AI genuinely affordable for startups, researchers and developers from diverse cultural, linguistic, disability-related and gender backgrounds [20-23]. Renvi also mentioned completing a certification course from the India AI Mission, which gave her insight into practical use-case examples [24-26].


To illustrate how affordable, sovereign AI can generate tangible impact, Renvi shared a personal case study. At six years old she wrote a children’s book on artificial intelligence, later recognised by UN Secretary-General António Guterres and India’s Ministry of Education [33-39]. Using the full-stack sovereign model Sarvam AI, she translated the book into 22 Indian languages, breaking language barriers through the “A-L-O-C” mechanism and supporting the National Education Policy 2020 by introducing AI concepts from Grade 3 onward [40-42]. The translation both democratizes AI access and generates measurable business impact, contributing to India’s GDP through increased sales and royalty income [40-43].


Renvi argued that AI can be both free and impactful, moving from safe, innovative research to actionable solutions that benefit societies [27-31]. She called for a collaborative, multilateral framework, stating that “once the GP AI Council members convene and define the multilateral cooperation for responsible and inclusive AI, keeping in mind the value of a human connection,” the world can accelerate empowerment [45-46].


She emphasized the role of Generation Alpha, declaring, “I stand for I, Generation Alpha. I stand for India. I stand for impact.” [52-55]. By being born with AI around them, today’s youth can co-create the systems they will inherit, reinforcing the need for early AI literacy and participation.


Renvi concluded with the cultural phrase “Sarvajan Hitai, Sarvajan Sukhai.” [56-57].


Speaker 2 thanked Renvi and introduced the next panel on emerging tech leaders, naming Arvind Jain (Glean), Navina Singh (Credo AI), Malhar Abide (Origin Bio) and moderator Ranirudh Suri [58-64].


Session transcript: complete transcript of the session
Renvi

Sharing is learning with the rest of the world. One, an AI that is independent. From large global AI to empowered, scalable, sovereign AI. Sovereignty. The generation sitting right in front of me grew up taking it for only political and geographical individuality. Fast forward to now, the world has a completely new landscape for its definition. I’m growing up knowing it’s to be more around something I may like to call digital independence. And achieving AI sovereignty has become a global imperative. And then I’m seeing an emergence of very AI models which are not just differentiating from the rest of the world. by scale, computer parameters, but by the very approach different nations are building them with. While US leads the global AI models and the technology sector drives innovation, China likes to keep its control centralized with rapid scaling and strong international governance.

While Europe likes to build it more with trust and compliance with the world’s first comprehensive AI law, Middle East positions itself by building AI hubs in the infrastructure layer contributing critical nodes in the AI boom. Well, India is digging into sovereignty. Data sovereignty, infrastructure sovereignty. And most importantly, talent sovereignty. And I’m glad. That is what my country needs to boost its economy. Two, an AI that is inclusive. From the artificial general intelligence race to responsible, democratized AI inclusion. The democratization of AI with inclusion, which I touched upon in my keynote at the EIFGO Global Summit in Geneva last year, has become a core focus area for not just India, but even for the United Nations and the rest of the world.

I’m seeing how India is leading a shift from the artificial general intelligence race to the AI. Two, responsible, democratized AI inclusion. The democratization of the AI course as a key enabler for India’s digital public infrastructure 7500 data sets and 273 models have already been deployed as natural resources to build AI solutions across sectors. Allow me to share my two cents on the affordability of AI compute power under the India AI Mission. Well, to your surprise, it is less than 2 cents per minute. How’s that for democratization? Inclusion of different Indian startups, researchers and developers. Social inclusion of different cultures, languages, disabilities and even gender equality. Overall inclusion of human capital, innovation, social empowerment and the list goes on.

Third, AI is free and AI that is impactful. From safe, innovative, actionable AI to impactful AI. Let’s move to impact and let’s do it a bit differently here. How about I share my own use case of an AI model just released by India. Thanks to my recently completed certification course from the India AI Mission, I observed how every single bit of content was exemplified with an India specific use case impacting lives, businesses and industries. So here’s my back story. When I was six, I written a book on AI. Are you born with AI? This had been made available globally on Amazon and even had been acknowledged by His Excellency, Secretary General of the United Nations, Sir Antonio Guterres and the Ministry of Education, Government of India.

Thanks to the full stack AI sovereign model now in place, Sarvam AI, I’m able to translate my book into 22 different Indian languages, boosting the sales of my book and contributing to India’s GDP. Here’s a sneak peek into this. So you can see here that I’ve translated it into Punjabi, Tamil, Hindi, and then 19 more languages, but obviously I can’t fit on the slide. Impact? One, it helps me live my dream to drive A-L-O-C to all my friends out there breaking language barriers. Two, it helps me support the National Education Policy 2020 of the Government of India by introducing A-L-O-C from Grade 3 onwards. Democratization checked. Three, it helps to have a wider reach as an author, boosting the sales and the royalty I get from the book.

Business impact and GDP contribution checked. So, if a Gen Alpha can contribute to AI literacy countrywide by first writing a book on artificial intelligence, then using AI tools to make illustrations to make it relevant for young minds, and then further use Indian AI tools to translate it into multiple Indian languages, boosting the sales of his book and the royalty, then, to contribute to India’s GDP at age 8, I am confident that each and every one of you can leave your impact with relevant Indian AI models. Amalgamating, be you geopolitically driven or an inclusive AI -impact fabric, and there is no assurance that any country will get it all correct and do it truly. My simple yet important message here is that we can all learn from each other and share our learnings to make this world more empowered with AI.

And that is exactly what India is all set to do once the GP AI Council members convene and define the multilateral cooperation for responsible and inclusive AI, keeping in mind the value of a human connection. Also, me and my generation are part of this AI revolution too. We understand and observe how AI is being shaped up globally. Be it governments, be it tech giants, be it start-ups or even scientists. We are not just at the receiving end. Do not forget we are born with AI around us and we will contribute and be the true agents of change of what you all build today. I stand for I, Generation Alpha. I stand for India. I stand for impact.

And the world will witness all three when they have been raised to the power of AI. Sarvajan Hitai, Sarvajan Sukhai. Thank you.

Speaker 2

Thank you. Thank you. Thank you, Renvi. We have our next panel, which is next generation of techies. May I now invite Mr. Arvind Jain, Founder and Chief Executive Officer, Glean. Ms. Navina Singh, Founder and Chief Executive Officer, Credo AI. Malhar Abide, Chief Technology Officer, Origin Bio. And the panel will be moderated by Mr. Ranirudh Suri, Managing Partner, India Internet Fund. In the meantime…

Related Resources: knowledge base sources related to the discussion topics (12)
Factual Notes: claims verified against the Diplo knowledge base (7)
Confirmed (high confidence)

“Renvi opened the session by asserting that “sharing is learning with the rest of the world”.”

The knowledge base contains the exact phrasing “Sharing is learning with the rest of the world” in the keynote transcript, confirming the claim [S11] and [S10].

Additional Context (medium confidence)

“Renvi framed her talk around three pillars: AI that is independent (sovereignty); AI that is inclusive (democratic and responsible); AI that is free and impactful.”

Related discussions in the knowledge base highlight three strategic pillars for AI governance (comprehensive skills development, inclusive governance, and responsible deployment), providing nuance to Renvi’s pillar framing [S43].

Additional Context (high confidence)

“AI sovereignty is presented as a newly emerging global imperative that replaces the traditional notion of sovereignty based on political and geographic boundaries.”

Several sources discuss AI sovereignty as strategic autonomy rather than isolation, emphasizing jurisdictional control, infrastructure capacity, and strategic choice, which adds nuance to the claim [S44] and [S45] and challenges the idea of a simple replacement of traditional sovereignty [S8].

Confirmed (medium confidence)

“The United States leads large‑scale model development and sector‑wide innovation; China centralises rapid scaling with strong international governance.”

The knowledge base compares the U.S. university-industry, venture-capital driven model with China’s centralized state-led strategy, confirming the described regional approaches [S47] and [S48].

Additional Context (low confidence)

“Europe builds trust and compliance under its pioneering AI law.”

While the knowledge base does not cite a specific European AI law, it references Europe’s focus on trusted AI at scale, which aligns with the claim of building trust and compliance [S44].

Confirmed (medium confidence)

“India focuses on data, infrastructure and talent sovereignty as levers for economic growth.”

India’s AI strategy emphasizing data, infrastructure, and talent sovereignty is documented in the knowledge base, confirming the claim [S56].

Additional Context (low confidence)

“Compute power is now available for “less than 2 cents per minute”, making AI affordable for startups and researchers.”

The knowledge base reports that India made 50,000 GPUs available at less than a dollar per GPU per hour; at one dollar per hour that works out to roughly 1.7 cents per minute, a comparable but not identical cost figure [S53].

External Sources (61)
S1
WSIS+20 Open Consultation session with Co-Facilitators — – **Jennifer Chung** – (Role/affiliation not clearly specified)
S2
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S3
Event page with the recording — – **Marko Markovic**: Role/title not mentioned. Appears to be a travel guide content creator or host, providing detailed…
S4
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S5
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S6
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S7
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S8
AI Safety at the Global Level Insights from Digital Ministers Of — This comment challenges the prevailing narrative around AI sovereignty, arguing that isolationist approaches actually un…
S9
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — I come here to say that India can. And I think that’s the message I want to say. India can. And India can train state -o…
S10
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — This discussion features an 8-year-old prodigy presenting their perspective on global AI development and India’s strateg…
S11
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — This discussion features an 8-year-old prodigy presenting their perspective on global AI development and India’s strateg…
S12
The Foundation of AI Democratizing Compute Data Infrastructure — okay okay okay again again i would say i’ll spend that big money to develop some more use cases again and again. So we a…
S13
The Foundation of AI Democratizing Compute Data Infrastructure — So as we come to the end of our panel, with everything that’s been said, even with all the money on the table, free mone…
S14
Global consensus grows on inclusive and cooperative AI governance at IGF 2025 — At theInternet Governance Forum 2025in Lillestrøm, Norway, the ‘Building an International AI Cooperation Ecosystem’ sess…
S15
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — This discussion focused on governing AI development and ensuring safe, beneficial deployment while maintaining innovatio…
S16
UNGA Resolution on enhancing international cooperation on AI | ‘China’ AI Resolution — Calls upon other international, regional and subregional organizations and international financial institutions and all …
S17
New Technologies and the Impact on Human Rights — Allison Gilwald: Thank you so much, Peggy. I think both of these questions really relate to all the panels and questions…
S18
Securing access to financing to digital startups and fast growing small businesses in developing countries ( MFUG Innovation Partners) — They express gratitude for the help provided. The speaker mentions the arrival of someone, suggesting a new participant …
S19
Session — Martin Rauchbauer: Well, thank you so much. To the overall question of disruption or continuity, I mean, I think there’s…
S20
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S21
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — India’s unique position—combining technical talent, diverse datasets, a vibrant startup ecosystem, and supportive policy…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S23
From India to the Global South_ Advancing Social Impact with AI — Evidence:85% of Indians speak mother languages, skill books have images without descriptions, advanced visual arts libra…
S24
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — An interesting fact is that most of the AI models in the world work in English. But your AI model works in Indian langua…
S25
Friday Closing Ceremony: Summit of the Future Action Days — Youth as active participants and co-creators, not passive recipients
S26
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Lastly, Francesca highlights the value of establishing partnerships to confidently navigate the crowded AI field. She fe…
S27
Global AI Policy Framework: International Cooperation and Historical Perspectives — – Dean Ball (referenced) Baumann argues for a balanced approach that establishes shared global norms while allowing fle…
S28
AI/Gen AI for the Global Goals — The importance of collaboration and partnerships
S29
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — India’s approach, according to the speaker, centers on three pillars of sovereignty: data sovereignty, infrastructure so…
S30
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — This discussion features an 8-year-old prodigy presenting their perspective on global AI development and India’s strateg…
S31
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh, Under-Secretary from the Indian Ministry of Electronics and Information Technology, emphasised that oper…
S32
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S33
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The Bharat GPT consortium exemplifies this approach, bringing together nine academic institutions through a Section 8 no…
S34
From India to the Global South_ Advancing Social Impact with AI — Evidence:85% of Indians speak mother languages, skill books have images without descriptions, advanced visual arts libra…
S35
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — I come here to say that India can. And I think that’s the message I want to say. India can. And India can train state -o…
S36
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — An interesting fact is that most of the AI models in the world work in English. But your AI model works in Indian langua…
S37
Friday Closing Ceremony: Summit of the Future Action Days — Youth as active participants and co-creators, not passive recipients
S38
AI/Gen AI for the Global Goals — The importance of collaboration and partnerships
S39
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Audience:Good afternoon. Good morning. My name is Paola Galvez. I’m Peruvian, right now based in Paris. I just finished …
S40
Multi-stakeholder Discussion on issues about Generative AI — Furthermore, Andrade highlights the significance of dialogue and cooperation in the global AI landscape. He particularly…
S41
Artificial intelligence (AI) – UN Security Council — The global focus on Artificial Intelligence (AI) capacity-building efforts has been a significant topic of discussion am…
S42
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S43
We are the AI Generation — In her conclusion, Martin articulated that the fundamental question should not be “who can build the most powerful model…
S44
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — AI sovereignty does not mean isolation. It means choosing your dependencies… True sovereignty rests on three pillars: …
S45
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — This comment provides a sophisticated framework for understanding how nations can maintain strategic autonomy in an inte…
S46
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biologi…
S47
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began by comparing two major technology ecosystem models: the U.S. approach, driven by university-industr…
S48
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as anAI racewith a single winner. Officials argue A…
S49
US robotics firms seek federal support amid China’s rapid growth — Following the US’s first-ever Enterprise Artificial Intelligence Strategy in October 2024, leading robotics companies ar…
S50
A Global Compact for Digital Justice: Southern perspectives | IGF 2023 — Gurumurthy also champions greater inclusivity in stakeholder consultations, extending beyond internet governance bodies….
S51
Building Trusted AI at Scale – Keynote Anne Bouverot — Anne Bouverot’s keynote strategically reframes AI governance from a competitive, technology-centric discourse to a colla…
S52
Main Session on Artificial Intelligence | IGF 2023 — Canales Lobel also highlights the significance of effective global processes in AI governance, advocating for seamless c…
S53
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Agarwal explained that while India has strong talent and skills, they faced challenges with compute infrastructure and d…
S54
AI 2.0 Reimagining Indian education system — Thank you, sir. Thank you so much for giving me the opportunity. I would like to ask a few of the… I think I’m seeing …
S55
Building Trustworthy AI Foundations and Practical Pathways — Alright, I can take the clicker. So, I will keep it slightly brief and I’m going to skip over some slides in the interes…
S56
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Several concrete examples demonstrate progress:
S57
AI Transformation in Practice_ Insights from India’s Consulting Leaders — I agree with Sanjeev. I think just a couple of other points. Why are pilots not getting into sort of really, really prod…
S58
Harnessing Collective AI for India’s Social and Economic Development — Moderator: sci-fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S59
Building Sovereign and Responsible AI Beyond Proof of Concepts — But then maybe you have the US approach where you don’t have regulation yet. So I think that’s one example. it really de…
S60
Building Sovereign and Responsible AI Beyond Proof of Concepts — It’s like, go ahead and do it. But then there’s a medium risk, a high risk would be something that would be like really …
S61
From Innovation to Impact_ Bringing AI to the Public — Beautiful. Give me an example of it. So, a very good starting point would be detecting whether a particular transaction …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
R
Renvi
8 arguments · 105 words per minute · 956 words · 542 seconds
Argument 1
AI sovereignty as a global imperative (Renvi)
EXPLANATION
Renvi states that achieving AI sovereignty has become essential for nations worldwide. She frames it as a pressing global goal that underpins future technological independence.
EVIDENCE
She explicitly declares that AI sovereignty is a global imperative, noting the shift in how the current generation perceives digital independence [7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S9 emphasizes India’s drive to build sovereign AI models, illustrating the strategic weight of AI independence, while S8 challenges isolationist approaches, arguing they may weaken rather than strengthen national autonomy.
MAJOR DISCUSSION POINT
AI sovereignty importance
Argument 2
Diverse national strategies: US innovation, China centralization, Europe trust‑compliance, Middle East infrastructure hubs, India focus on data, infrastructure, and talent sovereignty (Renvi)
EXPLANATION
Renvi outlines how different regions are pursuing AI sovereignty through distinct approaches: the US emphasizes innovation, China centralizes control, Europe prioritizes trust and compliance, the Middle East builds infrastructure hubs, and India concentrates on data, infrastructure, and talent. This illustrates a fragmented global landscape of AI development.
EVIDENCE
She describes the US leading in innovation, China’s centralized rapid scaling, Europe’s trust-compliance model, the Middle East’s infrastructure focus, and India’s emphasis on data, infrastructure, and talent sovereignty [9-14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S15 describes the EU’s trust‑and‑compliance model for AI, S9 outlines India’s focus on data, infrastructure and talent, and S14 reports growing global consensus on inclusive, cooperative AI governance, providing context for the varied regional approaches.
MAJOR DISCUSSION POINT
Regional AI strategies
Argument 3
Democratization enabled by ultra‑low compute cost (under 2 cents per minute) (Renvi)
EXPLANATION
Renvi highlights that the cost of AI compute in India is exceptionally low—under two cents per minute—making AI accessible to a broader audience. This ultra‑low cost is presented as a key driver of AI democratization.
EVIDENCE
She reveals that the affordability of AI compute power under the India AI Mission is less than two cents per minute, emphasizing its role in democratization [22].
MAJOR DISCUSSION POINT
Low‑cost AI compute
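For context, the quoted rate can be turned into hourly and monthly figures with simple arithmetic. The sketch below is purely illustrative: the $0.02/minute upper bound is the speaker's figure, while the usage durations are assumptions made for the example.

```python
# Rough cost arithmetic for the "under 2 cents per minute" compute claim.
# The per-minute rate comes from the talk; the durations are illustrative.
RATE_PER_MINUTE = 0.02  # USD, upper bound quoted for the India AI Mission

per_hour = RATE_PER_MINUTE * 60   # cost of one hour of compute
per_day = per_hour * 24           # cost of one full day of continuous use
per_month = per_day * 30          # cost of a 30-day month of continuous use

print(f"per hour:  ${per_hour:.2f}")    # $1.20
print(f"per day:   ${per_day:.2f}")     # $28.80
print(f"per month: ${per_month:.2f}")   # $864.00
```

At this ceiling, even a month of uninterrupted compute stays under $1,000, which is the kind of order-of-magnitude affordability the democratization argument rests on.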
Argument 4
Broad inclusion across cultures, languages, disabilities, and gender, fostering social empowerment (Renvi)
EXPLANATION
Renvi argues that AI must be inclusive of diverse cultures, languages, people with disabilities, and gender perspectives. Such inclusion is positioned as a catalyst for broader social empowerment and equitable development.
EVIDENCE
She lists inclusion of Indian startups, researchers, developers, various cultures, languages, disabilities, and gender equality, noting the overall impact on human capital, innovation, and social empowerment [24-26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S9 notes Sarvam AI’s aim to serve India’s linguistic and cultural diversity, and S14 highlights the push for inclusive, multilateral AI governance, both reinforcing the importance of broad inclusion.
MAJOR DISCUSSION POINT
Inclusive AI
Argument 5
Personal case: translating a children’s AI book into 22 Indian languages using Sarvam AI, driving sales, royalty income, and GDP contribution (Renvi)
EXPLANATION
Renvi shares a personal example where she used the Sarvam AI model to translate her AI‑focused children’s book into 22 Indian languages, which boosted book sales, generated royalty income, and contributed to India’s GDP. The case demonstrates tangible economic impact of sovereign AI tools.
EVIDENCE
She recounts writing a book at age six, receiving UN and Indian Ministry acknowledgment, using Sarvam AI to translate the book into multiple languages, and observing increased sales and royalty revenue that feed into GDP [33-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S9 provides evidence that Sarvam AI is being used to create sovereign AI solutions for a billion Indians, supporting the feasibility and economic impact of large‑scale multilingual translation projects.
MAJOR DISCUSSION POINT
AI‑driven economic impact
Argument 6
Supporting India’s National Education Policy 2020 by introducing AI literacy from Grade 3 onward (Renvi)
EXPLANATION
Renvi links her AI translation work to India’s National Education Policy 2020, stating that the multilingual AI resources help introduce AI concepts to students from Grade 3, thereby aligning with national educational goals.
EVIDENCE
She notes that the translated AI book supports the National Education Policy 2020 by introducing AI literacy from Grade 3 onwards [40].
MAJOR DISCUSSION POINT
AI in education policy
Argument 7
Generation Alpha as active contributors and future agents of AI change (Renvi)
EXPLANATION
Renvi positions Generation Alpha—not just as passive recipients but as proactive contributors to the AI ecosystem. She emphasizes that her generation will shape and drive AI advancements.
EVIDENCE
She observes that youth are aware of global AI dynamics, are not merely receivers, and asserts “I stand for Generation Alpha” while highlighting their role as agents of change [48-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S11 explicitly characterizes Generation Alpha as “true agents of change” in AI development, aligning with Renvi’s claim.
MAJOR DISCUSSION POINT
Youth empowerment in AI
Argument 8
Call for GP AI Council to define responsible, inclusive, multilateral AI cooperation (Renvi)
EXPLANATION
Renvi urges the formation of a GP AI Council to establish multilateral, responsible, and inclusive AI cooperation frameworks. She sees this as essential for coordinated global AI governance.
EVIDENCE
She states that India is ready to engage once the GP AI Council members convene to define responsible and inclusive multilateral AI cooperation, emphasizing human connection values [46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S14 documents the emerging global consensus on inclusive AI cooperation, S15 outlines the EU’s multilateral governance framework, and S16 records a UN resolution urging international collaboration on AI, all supporting the call for a GP AI Council.
MAJOR DISCUSSION POINT
Multilateral AI governance
S
Speaker 2
1 argument · 99 words per minute · 66 words · 39 seconds
Argument 1
Announcement and handover to the next panel of emerging tech leaders (Speaker 2)
EXPLANATION
Speaker 2 thanks Renvi and introduces the next panel, naming the upcoming speakers and moderator, thereby transitioning the session to the next set of technology leaders.
EVIDENCE
The speaker thanks Renvi, announces the next panel, and lists Mr. Arvind Jain, Ms. Navina Singh, Malhar Abide, and moderator Mr. Ranirudh Suri, before indicating a brief pause [58-64].
MAJOR DISCUSSION POINT
Session transition
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Overall Assessment

The transcript contains a single substantive contribution from Renvi, who presents multiple arguments about AI sovereignty, inclusive AI, low‑cost compute, and the role of Generation Alpha. Speaker 2 only offers a brief procedural thank‑you and introduces the next panel, without reiterating any of Renvi’s substantive points. Consequently, there is no observable substantive agreement or shared viewpoint between the speakers beyond the courteous acknowledgment of Renvi’s talk.

Minimal – the only consensus is procedural (thanks and transition) and does not affect the thematic discussion. This limits the ability to draw joint conclusions or policy implications from the exchange.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The exchange consists of a single, uninterrupted presentation by Renvi followed by a brief procedural hand‑over from Speaker 2. No substantive conflict or opposing viewpoints appear in the transcript; the only shared element is a procedural acknowledgement of the session’s continuation.

Minimal – the dialogue shows virtually no disagreement, indicating consensus (or at least no contestation) on the presented themes of AI sovereignty, inclusion, and multilateral cooperation. This suggests that, within the limited scope of the recorded segment, participants are aligned on the overarching goal of advancing AI in India and globally, with no visible friction that would impede collaborative progress.

Partial Agreements
Both speakers acknowledge the relevance of AI and the need to move the discussion forward, though Speaker 2 does not elaborate on policy or technical aspects, simply facilitating the session transition [58-64].
Speakers: Renvi, Speaker 2
Renvi emphasizes the importance of AI sovereignty, inclusive AI, and multilateral cooperation (see arguments list). Speaker 2 thanks Renvi and signals continuation of the session by introducing the next panel of emerging tech leaders.
Takeaways
Key takeaways
AI sovereignty is framed as a global imperative, with each nation adopting distinct strategies: US focuses on innovation, China on centralized rapid scaling, Europe on trust and compliance, Middle East on infrastructure hubs, and India on data, infrastructure, and talent sovereignty. India is positioning itself to achieve digital independence through sovereign AI models, emphasizing affordability (compute cost under 2 cents per minute) and broad inclusion across languages, cultures, disabilities, and gender. Democratization of AI is highlighted as a driver for social empowerment and economic growth, illustrated by a personal case where a children’s AI book was translated into 22 Indian languages using the Sarvam AI model, boosting sales, royalty income, and contributing to GDP. The initiative aligns with India’s National Education Policy 2020 by promoting AI literacy from Grade 3 onward. Generation Alpha is presented as active contributors and future agents of AI change, not merely passive recipients. A call is made for the GP AI Council to convene and define multilateral, responsible, and inclusive AI cooperation, emphasizing human connection and shared learning.
Resolutions and action items
None identified
Unresolved issues
How multilateral cooperation for responsible and inclusive AI will be structured and operationalized by the GP AI Council. Specific mechanisms to ensure equitable AI talent development and infrastructure sovereignty across diverse regions. Details on scaling the low‑cost compute model beyond pilot projects to broader industry adoption.
Suggested compromises
None identified
Thought Provoking Comments
AI sovereignty has become a global imperative, and each nation is pursuing it in a distinct way – the US leads with scale, China with centralized control, Europe with trust and compliance, the Middle East with infrastructure hubs, and India with data, infrastructure and talent sovereignty.
Frames AI development as a geopolitical contest and highlights the strategic diversity of national approaches, moving the conversation beyond technology to policy and power dynamics.
Sets the stage for the rest of the speech, shifting the discussion from a generic AI overview to a nuanced analysis of global AI strategies. It prompts listeners to consider how sovereignty shapes innovation, leading Renvi to later contrast India’s specific focus.
Speaker: Renvi
The democratization of AI with inclusion is now a core focus not only for India but also for the United Nations and the world at large.
Elevates the topic of inclusion from a national initiative to an international agenda, challenging the audience to view AI equity as a shared responsibility.
Creates a turning point from describing sovereign models to emphasizing ethical and social dimensions. It broadens the conversation to global governance and prepares the audience for concrete examples of inclusive AI deployment.
Speaker: Renvi
Affordability of AI compute power under the India AI Mission is less than 2 cents per minute.
Provides a tangible, quantifiable metric that illustrates how cost barriers can be removed, making the abstract idea of democratization concrete and actionable.
Introduces a data‑driven sub‑topic that deepens the analysis of democratization. It sparks curiosity about scalability and invites stakeholders to consider replication in other contexts.
Speaker: Renvi
Using the Sarvam AI model, I translated my book into 22 Indian languages, boosting sales, supporting the National Education Policy 2020, and contributing to India’s GDP.
Shows a personal, real‑world use case that links AI technology to economic impact, education policy, and cultural inclusion, turning theory into practice.
Acts as a pivotal narrative moment, moving the discussion from macro‑level policy to micro‑level impact. It demonstrates the practical benefits of sovereign, inclusive AI and reinforces the earlier points about affordability and democratization.
Speaker: Renvi
We are not just at the receiving end; we are born with AI around us and will be the true agents of change of what you all build today.
Challenges the common perception of younger generations as passive consumers, asserting generational agency and responsibility in shaping AI’s future.
Shifts the tone from descriptive to motivational, urging the audience—especially peers and policymakers—to recognize youth as co‑creators. This call to action frames the subsequent invitation to learn from each other.
Speaker: Renvi
My simple yet important message is that we can all learn from each other and share our learnings to make this world more empowered with AI.
Synthesizes the earlier themes of sovereignty, inclusion, affordability, and impact into a collaborative ethos, emphasizing knowledge exchange over competition.
Provides a concluding turning point that reframes the discussion from competition among nations to cooperation. It sets a collaborative tone for the upcoming panel and signals a transition to broader dialogue.
Speaker: Renvi
Overall Assessment

Renvi’s remarks steered the discussion from a high‑level geopolitical overview to concrete, inclusive, and affordable AI applications, punctuated by personal examples that illustrated economic and social impact. Each key comment introduced a new dimension—sovereignty, global inclusion, cost‑effectiveness, real‑world use, generational agency, and collaborative learning—creating successive turning points that deepened the conversation and reshaped its tone from descriptive to action‑oriented. These insights laid the groundwork for the next panel, positioning AI not just as a technological race but as a shared, inclusive endeavor.

Follow-up Questions
How can nations achieve AI sovereignty while balancing scale, governance, and trust?
Understanding the pathways different regions (US, China, Europe, Middle East, India) can take to build sovereign AI is crucial for global policy and competitive strategy.
Speaker: Renvi
What specific mechanisms is India using to ensure data, infrastructure, and talent sovereignty?
Clarifying India’s approach will help assess its ability to boost the economy and serve as a model for other countries.
Speaker: Renvi
How effective is the democratization of AI in India, as measured by the low compute cost (less than 2 cents per minute) and adoption across startups, researchers, and developers?
Evaluating the impact of affordable compute on AI uptake is essential to gauge the success of inclusive AI initiatives.
Speaker: Renvi
What are the outcomes and impact metrics of India’s AI Mission’s 7,500 datasets and 273 models deployed across sectors?
Quantitative data on these deployments will reveal how AI is transforming industries and inform future investments.
Speaker: Renvi
How can AI translation tools like Sarvam AI be scaled to cover all Indian languages while ensuring translation quality?
Scaling multilingual capabilities is key to linguistic inclusion and broader societal impact.
Speaker: Renvi
What role will the GP AI Council play in defining multilateral cooperation for responsible and inclusive AI?
Understanding the Council’s mandate and framework is vital for establishing global governance standards.
Speaker: Renvi
How can Generation Alpha be effectively engaged in AI literacy and contribute to national GDP?
Identifying strategies to involve youth early can build a sustainable AI talent pipeline and economic contribution.
Speaker: Renvi
What frameworks or standards are needed to ensure AI inclusivity across cultures, languages, disabilities, and gender?
Developing clear inclusion guidelines will help create equitable AI systems worldwide.
Speaker: Renvi
How does India’s AI compute cost compare globally, and what factors drive its low price?
A comparative analysis will highlight competitive advantages and inform policy decisions in other regions.
Speaker: Renvi
What are the measurable economic impacts (e.g., GDP contribution) of AI-driven publishing and other small‑scale AI applications?
Quantifying these impacts will demonstrate AI’s macroeconomic benefits and justify further investment.
Speaker: Renvi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote: 2030 – The Rise of an AI Storytelling Civilization | India AI Impact Summit

Session at a glance: Summary, keypoints, and speakers overview

Summary

The discussion centers on how artificial intelligence is transforming the entertainment and media industry, with a particular focus on India’s potential to lead a new era of AI-driven storytelling by 2030. Speaker 1 argues that the industry is transitioning from a passive consumption era dominated by streaming platforms over the past 15-20 years to an active creation era powered by AI technologies. They identify four pillars of an AI storytelling civilization: every creator becoming a studio, every language becoming global through auto-translation, stories becoming participatory with branching narratives, and culture serving as a major export opportunity.


The speaker emphasizes how AI is dramatically reducing content creation costs and production cycles from years to hours, enabling real-time content adaptation based on audience feedback. They highlight micro dramas as the first truly digital format in 20 years, designed specifically for vertical viewing and rapid character development within seconds rather than minutes. The discussion envisions a future where content seamlessly blends audio, video, gaming, and extended reality experiences, with platforms living inside stories rather than stories living on platforms.


India’s advantages in this transformation include demographic energy, linguistic complexity, cultural depth spanning thousands of years, and a robust startup ecosystem. The speaker projects that by 2030, India could have 10 million AI-assisted creators producing content in real-time. However, they acknowledge challenges in moving from finite to infinite content creation, requiring new business models that integrate community commerce rather than relying solely on traditional advertising and subscriptions. The presentation concludes with the vision that India should not just scale AI technology but use it to narrate and define the next storytelling civilization.


Keypoints

Major Discussion Points:


Evolution from consumption to creation era: The speaker describes how the media landscape has shifted from 20 years of passive streaming consumption to an emerging era of AI-powered content creation, where production costs are collapsing and creation cycles are reduced to hours.


Four pillars of AI storytelling civilization: Every creator becomes a studio, every language becomes global through auto-translation, stories become participatory with branching narratives, and culture becomes a major export opportunity through extended mythological and folklore content.


Micro dramas as the first truly digital format: A revolutionary storytelling format that can establish character and narrative in 15 seconds compared to traditional films that take minutes, representing a fundamental shift in how stories are told and consumed.


India’s competitive advantages in AI storytelling: The country’s demographic energy, linguistic complexity, cultural depth spanning 5,000+ years of storytelling, and entrepreneurial ecosystem position it to lead the global AI storytelling revolution.


Business model transformation challenges: The shift from finite content production to infinite AI-generated content requires reimagining traditional advertising and subscription models, with a move toward community-to-commerce integration.


Overall Purpose:


This appears to be a keynote presentation at an AI conference where the speaker is making the case for why India can become the world’s leading “AI storytelling civilization” by 2030, arguing that the country should not just scale AI technology but use it to revolutionize global narrative and content creation.


Overall Tone:


The tone is consistently optimistic, visionary, and inspirational throughout. The speaker maintains an enthusiastic and confident demeanor while presenting a bold vision for India’s future in AI-powered storytelling. The tone becomes slightly more cautionary when discussing business model challenges but quickly returns to the inspirational theme with the closing statement about India “narrating” rather than just scaling AI.


Speakers

Speaker 1: Area of expertise appears to be media, entertainment, gaming, and AI-driven content creation. Role/title not explicitly mentioned, but demonstrates deep knowledge of the entertainment industry, streaming platforms, and AI storytelling technologies.


Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.


Naveen Tiwari: Founder and CEO of InMobi (transcribed as “in Mobi”). Area of expertise not detailed in the provided transcript portion.


Additional speakers:


None identified beyond those in the speakers names list.


Full session report: Comprehensive analysis and detailed insights

This keynote presentation at an AI conference outlines a compelling vision for how artificial intelligence is transforming entertainment and media, with India positioned to become the world’s leading “AI storytelling civilisation” by 2030. The speaker argues that the industry is transitioning from an era of consumption to an unprecedented period of AI-powered content creation.


From Consumption to Creation Era


The speaker contextualises the current moment by explaining that the past 15 years represented the “streaming or consumption era,” where platforms like Netflix fundamentally changed when we consume content—transforming “prime time” into “my time.” However, despite technological advances, content formats remained largely unchanged. Even when Netflix began producing original content, “they pretty much did what HBO was already doing.”


Now, the speaker argues, we are entering an “era of creation” where AI will democratise content production. Creation costs are collapsing dramatically, with production cycles shrinking from years to hours, fundamentally altering the economics of content creation.


The Four Pillars of AI Storytelling


The speaker presents four transformative pillars defining the new AI storytelling paradigm:


First, “every creator is already a studio”—individual creators now possess technological capabilities that previously required entire production companies.


Second, “every language is global” through advanced auto-translation technologies, enabling real-time multilingual communication where participants can speak in their native languages while being understood by others instantly.


Third, stories become “participatory” with branching narratives and conversational AI integrated within characters, shifting from linear storytelling to interactive, adaptive narratives.


Fourth, culture becomes “truly an opportunity of export in a very different way,” with the ability to extend mythological and folklore stories through AI-enhanced techniques.
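The third pillar, participatory stories with branching narratives, can be made concrete with a small data structure. The sketch below is purely illustrative (the node names, story text, and choice labels are invented for the example); it shows one common way interactive-fiction engines represent a branching story as a graph of nodes and choices:

```python
# Minimal branching-narrative graph: each node holds its text plus the
# choices that lead to other nodes. All content here is invented.
story = {
    "start": {
        "text": "A courier arrives with a sealed letter.",
        "choices": {"open it": "open", "ignore it": "ignore"},
    },
    "open": {
        "text": "The letter names you heir to a forgotten estate.",
        "choices": {},
    },
    "ignore": {
        "text": "The courier leaves; the story ends unresolved.",
        "choices": {},
    },
}

def play(node_id, picks):
    """Walk the graph following a list of pre-made audience choices."""
    path = [node_id]
    for pick in picks:
        node_id = story[node_id]["choices"][pick]
        path.append(node_id)
    return path

print(play("start", ["open it"]))  # ['start', 'open']
```

The point of the structure is that authors write nodes while audiences choose edges, which is precisely the shift from linear to participatory storytelling the pillar describes.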


Live Operations and Interactive Content


Drawing from his gaming background, the speaker describes a revolutionary shift toward “live operations” in content creation. In gaming, “you take two, three years to make a game, then seven years of live ops.” Similarly, creators will produce initial content episodes, with subsequent episodes generated in real-time based on audience feedback. This represents a move from the traditional “one to million” model of director-to-audience communication to a “million to million” interactive paradigm.
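The live-ops loop described here (release an episode, read audience signals, steer the next one) can be sketched as a simple control loop. Everything below is a toy illustration under invented assumptions: the feedback scores, the 0.5 threshold, and the two variant names are all made up for the example, not drawn from any real production pipeline.

```python
# Toy live-ops loop: choose each next episode variant from the audience
# score of the previous episode. Scores, threshold, variants are invented.
def next_variant(prev_score):
    """Double down on the current arc if the audience liked it, else pivot."""
    return "continue_arc" if prev_score >= 0.5 else "pivot_arc"

audience_scores = [0.8, 0.3, 0.6]  # pretend per-episode feedback
plan = [next_variant(s) for s in audience_scores]
print(plan)  # ['continue_arc', 'pivot_arc', 'continue_arc']
```

In a real system the scoring function would aggregate watch-time, completion, and engagement signals, but the shape of the loop — content out, feedback in, next content adjusted — is the same.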


Micro Dramas: Digital Format Innovation


The speaker highlights micro dramas as “the first truly digital format” to emerge in two decades. These require character establishment within just 15 seconds, compared to traditional formats that take “four minutes, five minutes” or even “eight hours.” Micro dramas are “unabashed” in their approach, immediately presenting characters with clear identities (such as “billionaire playboy”) without gradual character development.


India’s Five Key Advantages


The speaker identifies five advantages positioning India to lead this AI storytelling revolution:


Demographic energy provides a young, digitally native population driving both creation and consumption.


Linguistic complexity becomes an advantage rather than a challenge. The speaker shares an anecdote about addressing an American delegation, noting that Indian AI models have been “trained on chaos,” and this linguistic complexity presents enormous opportunities.


Cultural depth spanning thousands of years provides an unparalleled repository of narratives, mythologies, and folklore for AI reimagining.


Startup nation status with entrepreneurial ecosystems across all sectors enables rapid innovation.


Scale advantages with India producing “1,500 films” compared to Hollywood’s “250 films,” plus “900 TV channels” and “2,500 print publications.”


The 2030 Vision


The speaker projects that by 2030, intelligence rather than cameras will become the primary storytelling tool, with India home to 10 million AI-assisted creators, regional studios capable of real-time cinematic production, and immersive cultural and devotional platforms. More broadly, he sees the global creator base expanding from roughly 10 million today toward potentially billions of AI-assisted creators.


This future will feature seamless transitions between audio, video, gaming, live events, and extended reality experiences. Most intriguingly, “platforms will live inside stories” rather than stories living on platforms.


Business Model Challenges


The speaker acknowledges the fundamental challenge of moving from “finite to infinite” content production. Using a cement industry analogy, they explain that while traditional industries can scale incrementally based on predictable demand, infinite content generation through AI creates unprecedented sustainability challenges, necessitating new business models beyond traditional advertising and subscription approaches.


Cultural Vision


The presentation concludes with a powerful philosophical statement: “civilisations are not defined by the tools they use. They are defined by the stories they tell.” This positions the AI storytelling revolution as fundamentally about cultural expression rather than just technological advancement.


The closing vision encapsulates the speaker’s argument: “By 2030, let it be said that India just did not scale AI. We narrated it.” This suggests India’s opportunity lies not merely in adopting AI technology, but in using it to define and lead a new era of global storytelling, leveraging its cultural heritage and storytelling traditions as competitive advantages in an increasingly AI-driven world.


Session transcript
Complete transcript of the session
Speaker 1

I think even in the panel before, there was a conversation around that. And I’m going to, over the next couple of slides, just take you through why we see this as a window. Last, about, say, 15 years has really been, you know, the streaming or the consumption era as we know it. It was predominantly, you know, passive consumption. About 20 years back, a bunch of companies like Netflix, etc., they got content from studios, from broadcasters. And prime time basically became my time. There was search, there was recommendations, etc. But format hasn’t changed. Because seven years later, when they did their first original show, they pretty much did what HBO was already doing. So we haven’t really seen much change in format for almost now 20 years.

Cut to now. We’re seeing the era of creation, and that had already commenced in the short video space. But the short video space was what I call as an augmented space, 30, 60 second stories, augmented with music, augmented with background. You can call them stories, but they were not really complete as it were. I believe with the manner in which video generation models are developing, creation costs are collapsing, production cycles are now within hours, and AI will make India a creation civilization. Why do I say so? So when I look at, you know, what I term as the four pillars of an AI storytelling civilization, it starts with the fact that every creator is already a studio.

That is the reality now. I mean, you could literally be able to speak, and there is a component around, you know, an output that will happen as far as. Second, every language is global. We don’t need to. We have platforms where there is already auto translation, and this will continue to progress even more. Even the ones that you wear, I will be speaking in English, you will listen to me in French, you can reply back in Spanish and I’ll still comprehend. Our stories are becoming participatory. We’re beginning to see branching of narratives. I’ve been on forums like this for the last 15 years. Many a times spoken about terms like, which are more often than used, abused, which is convergence.

But today it is truly beginning to happen because we have conversational AI within characters. It’s already happened within gaming and it’s beginning to happen in this. And lastly, to me, culture is truly an opportunity of export in a very different way. I think from our stories, whether they’re mythological or folklore, we have an ability to extend these. And why do I say so? Because I think the technology stack… which is getting laid out, will make this a possibility. From production pipelines, we’re getting into what we call as creative intelligence systems. We already have generative engines. There is an autonomous sort of creative cycle and agents which are doing this from camera work to the kind of manner of lighting, etc.

We have the layer of narrative engines. You will have more interfaces for immersiveness, etc. And that leads to multi-path and components which we’ve seen parts of. But I think where we are heading is I come from a gaming world as well. We used to take two, three years, make a game, and then do seven years of live ops. I believe with categories like micro dramas, etc., for the first time, we are in for a live op scenario where I will make 10 shots. They’re ready, ingested. By the time you’re watching the fourth, you know, basis that feedback, basis that conversion sort of consumption pattern, your 11th and your 12th and your 13th episode is getting created.

So a vision which was typically one to million, that of a director, scriptwriter, etc., is now heading for a million to million kind of interaction and interface. So why 2030? I believe the camera will no longer be the primary tool of storytelling. Intelligence will. And why do I say that? You see, we are already seeing parts of this. You know, one of the first most significant, in fact, I believe in 20 years, micro dramas are the first truly digital format that have emerged. When we made films, a filmmaker could take four minutes, five minutes in setting up a character. When I did original shows, you know, which could be extending to four hours, I could take eight hours to show this person as an alcoholic, very elderly person, sort of, you know, whatever.

And by the seventh minute, you’re finding that he’s a genius as well. Here, in 15 seconds, they are unabashed, they don’t care, they will put, they’ll show you a face, it’s a billionaire playboy and there will be a little thing coming around and in one stroke, that’s the kind of, so it’s a format. And it is a format of narration which a generation who hasn’t seen things in horizontal is embracing at a pace which is unprecedented. We’re seeing that and I feel where we are heading is a world which I like to describe just from a visual point of view like a cube of sorts. Up till now, we’ve all consumed content as, you know, audio, video, game, live, extended reality.

We’re going to kind of move from one to the other seamlessly and that is why it is exciting. From multi platforms to stories which will no longer live on platforms but platforms will live inside these stories. And why do I say that? You know, because, as I said, the creator explosion has already commenced. This is what typically it looked like. If I really wanted to be very, very generous, let’s say for every author who’s been living and has been published, for every lyricist who has written a song, for every singer, for every director, every filmmaker in any form, anyone from a literary sense, if I were to, out of eight billion people right now, my sense is that number or whatever would probably be about 10 million. But if I took the entire creator economy of what’s happening, I’ll probably jump a little more. We are heading from that world to potentially billions of sort of creators across this entire space, and that is the reason why I feel the next Disney, our own YRF or Marvel, may not be a company, but it could very well be a community which is coming. And therefore, so, you know, let me... no talk is incomplete on media entertainment without, you know, some perspective, you know, on our most visual form, which is still the most sort of, you know, expansive form of theaters.

I don’t believe, you know, that we are going to see the end of that. What we are going to see is more eventized immersive screenings, more mixed reality environments, and hopefully interactive participation. So the formula one is storytelling, premium, spectacular, and experiential. And with this in mind, I feel now coming to the last two slides of why India can lead. In an era where cultural depth becomes a comparative advantage. It’s important, and I really hope that, you know, this is something, you know, we’re a nation with so much of history. The first is the fact that we have demographic energy. We all know this, right? We have linguistic complexity. I often say, you know, we were hosting the American delegation three days back, and there were 120 of us.

We were the first group of them. And I said to them, I said, you know, this is probably the most somber American delegation I have seen across industries. And I asked them, I said, you know, why is it? I mean, is it because of whatever is happening in the traffic, et cetera? Or is somebody actually sort of, you know, concerned or has been using the T word with you all? So I said, you know, as far as I know, intelligence is still pretty much duty free. But having said that, the point I was making to them was I said, listen, even our models here in India have been trained on chaos. And complexity of language and nuances have a huge opportunity.

I think we have massive, you know, cultural depth. Five, six thousand years of storytelling experience. And finally, we’re a nation of startups, you know. We’re an entrepreneurial ecosystem across every sector. And that is one of the reasons why I feel this is certainly a category where India can lead and show the world what’s possible. So with this in mind, my sense is by 2030, I believe 10 million AI-assisted creators, regional studios, real-time cinematic production, immersive, devotional, cultural platforms, and leading to mainstream sort of events. There’s a thought I’ll leave for you. With all of this, it looks very good. But there is, you know, nothing looks sort of just hunky-dory, as it were. And the thought is we’re also moving from a world of finite.

If I look at content today, in whichever platform it is, right? We make 1,500 films. Hollywood makes 250 films. We have 900 TV channels. We produce so many hours across the world. It is this. We have so many radio networks. We have 2,500 print publications. It’s all finite. In a gen AI leading to… an AGI world, we will move from finite to infinite. Now, no industry is in a position to, if I’m doing cement and I have 30 million tons of cement capacity, I know India is growing in a particular way, I add a couple of million tons and that’s fine. But if I go in that category and start adding 30 million tons all over again, you know, it’s not sustainable.

So there is a thought, there have to be reimagination of, you know, business models. And to me, the biggest reimagination of this is no more linkages to just advertising and the traditional subscription, etc. This ecosystem is made for commerce. We need to get and engage into that world from community to commerce is an integral part of leading this way. Which is why I feel that civilizations are not defined by the tools they use. They are defined by the stories they tell. Thank you. And artificial intelligence will be built everywhere, but the next storytelling civilization… can rise right here. By 2030, let it be said that India just did not scale AI. We narrated it. Thank you very much.

Speaker 2

Thank you, sir, for your wonderful remarks. For our next keynote, we have Mr. Naveen Tiwari, founder and CEO of InMobi. We welcome you to the stage, sir.

Naveen Tiwari

So good to see everybody here. Thank you. Firstly, I must congratulate the event organizers, the AI Impact Center.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
11 arguments, 157 words per minute, 1771 words, 673 seconds
Argument 1
The last 15 years represented a passive consumption era with streaming platforms, but format hasn’t fundamentally changed in 20 years
EXPLANATION
Speaker 1 argues that despite the rise of streaming platforms like Netflix that allowed personalized consumption (‘prime time became my time’), the fundamental format of content creation remained unchanged. Even when Netflix started producing original content seven years later, they essentially replicated what HBO was already doing.
EVIDENCE
Netflix and other companies got content from studios and broadcasters, introduced search and recommendations, but when they created original shows, they followed HBO’s existing model
MAJOR DISCUSSION POINT
Evolution of media consumption patterns and the lack of format innovation in streaming
Argument 2
We’re now entering an era of creation where AI will make India a creation civilization with collapsing production costs and faster cycles
EXPLANATION
Speaker 1 contends that we are transitioning from passive consumption to active creation, enabled by AI-powered video generation models. This technological shift is dramatically reducing production costs and shortening production cycles from years to hours, positioning India to become a leader in content creation.
EVIDENCE
Video generation models are developing rapidly, production cycles are now within hours instead of years, and creation costs are collapsing
MAJOR DISCUSSION POINT
AI’s transformative impact on content creation and India’s potential leadership role
Argument 3
Micro dramas represent the first truly digital format in 20 years, requiring character establishment in just 15 seconds versus traditional longer formats
EXPLANATION
Speaker 1 identifies micro dramas as the first genuinely new digital content format to emerge in two decades. Unlike traditional films or shows that could take minutes or hours to establish character depth, micro dramas must accomplish character development in just 15 seconds, representing a fundamental shift in storytelling techniques.
EVIDENCE
Traditional filmmakers could take 4-5 minutes to set up a character, original shows could take 8 hours to develop character complexity, but micro dramas show a ‘billionaire playboy’ character in 15 seconds with visual cues
MAJOR DISCUSSION POINT
Innovation in digital content formats and storytelling efficiency
Argument 4
Four pillars of AI storytelling civilization: every creator becomes a studio, every language becomes global, stories become participatory, and culture becomes exportable
EXPLANATION
Speaker 1 outlines four foundational elements that will define the AI-powered storytelling era. These include democratizing production capabilities, breaking language barriers through translation technology, enabling interactive narratives, and facilitating global cultural exchange through extended mythological and folklore content.
EVIDENCE
Auto-translation platforms exist, real-time language translation through wearable devices, conversational AI within characters already happening in gaming, and branching narratives are emerging
MAJOR DISCUSSION POINT
Structural transformation of the entertainment industry through AI
Argument 5
Technology stack evolution from production pipelines to creative intelligence systems with generative engines and autonomous creative cycles
EXPLANATION
Speaker 1 describes a comprehensive technological infrastructure that is transforming content creation from traditional production methods to AI-driven systems. This includes automated creative processes covering everything from camera work to lighting, supported by narrative engines and immersive interfaces.
EVIDENCE
Creative intelligence systems with generative engines, autonomous creative cycles handling camera work and lighting, narrative engines, and multi-path components are being developed
MAJOR DISCUSSION POINT
Technical infrastructure enabling AI-powered content creation
Argument 6
By 2030, intelligence rather than cameras will become the primary tool of storytelling
EXPLANATION
Speaker 1 predicts a fundamental shift in content creation methodology where artificial intelligence will replace traditional filming equipment as the primary means of producing visual narratives. This represents a complete transformation of the filmmaking process from physical capture to AI generation.
EVIDENCE
Current developments in AI storytelling technology and the emergence of micro dramas as digital-first formats
MAJOR DISCUSSION POINT
Future of content creation technology and methodology
Argument 7
India has demographic energy, linguistic complexity, cultural depth from 5-6 thousand years of storytelling, and a strong entrepreneurial ecosystem
EXPLANATION
Speaker 1 identifies four key competitive advantages that position India to lead in AI-powered storytelling. These include a young population, diverse linguistic landscape, rich cultural heritage spanning millennia, and a robust startup ecosystem across all sectors.
EVIDENCE
India is described as ‘a nation of startups’ with an ‘entrepreneurial ecosystem across every sector’ and has ‘5-6 thousand years of storytelling experience’
MAJOR DISCUSSION POINT
India’s unique positioning for leadership in AI storytelling
Argument 8
Indian AI models have been trained on chaos and complexity, providing unique advantages over other markets
EXPLANATION
Speaker 1 suggests that India’s complex linguistic and cultural environment has created AI models that are inherently more sophisticated in handling nuanced, chaotic inputs. This complexity training gives Indian AI systems advantages over models developed in more homogeneous environments.
EVIDENCE
Reference to a conversation with an American delegation where Speaker 1 noted that ‘our models here in India have been trained on chaos’ and mentioned the ‘complexity of language and nuances’
MAJOR DISCUSSION POINT
Competitive advantages of Indian AI development
Argument 9
India can lead the next storytelling civilization by 2030 with 10 million AI-assisted creators and real-time cinematic production
EXPLANATION
Speaker 1 presents a specific vision for India’s dominance in AI-powered content creation, projecting massive scale in creator participation and technological capability. This includes not just quantity of creators but also advanced production capabilities and immersive cultural platforms.
EVIDENCE
Specific projection of ’10 million AI-assisted creators, regional studios, real-time cinematic production, immersive, devotional, cultural platforms’
MAJOR DISCUSSION POINT
India’s potential to lead global AI storytelling by 2030
Argument 10
The industry must transition from finite content production to infinite content possibilities enabled by generative AI
EXPLANATION
Speaker 1 highlights a fundamental challenge facing the entertainment industry as it moves from traditional limited content production to AI-enabled unlimited content generation. This shift requires completely rethinking industry capacity, business models, and sustainability approaches.
EVIDENCE
Current finite content production numbers: ‘1,500 films, Hollywood makes 250 films, 900 TV channels, 2,500 print publications’ compared to the infinite possibilities of generative AI
MAJOR DISCUSSION POINT
Industry transformation challenges from finite to infinite content
Argument 11
Traditional business models based on advertising and subscriptions need reimagination toward community-to-commerce approaches
EXPLANATION
Speaker 1 argues that the shift to infinite AI-generated content makes traditional revenue models unsustainable and calls for new approaches that integrate community engagement with direct commerce. This represents a fundamental restructuring of how content creators and platforms generate revenue.
EVIDENCE
The ecosystem ‘is made for commerce’ and needs to move ‘from community to commerce’ rather than relying on ‘advertising and traditional subscription’
MAJOR DISCUSSION POINT
Business model innovation requirements for AI-powered content
Speaker 2
1 argument, 110 words per minute, 28 words, 15 seconds
Argument 1
Introduction of the next keynote speaker and transition between presentations
EXPLANATION
Speaker 2 serves as a moderator, providing a brief transition between presentations by thanking the previous speaker and introducing Naveen Tiwari as the next keynote presenter. This represents standard event management and flow control.
EVIDENCE
Formal introduction: ‘For our next keynote, we have Mr. Naveen Tiwari, founder and CEO of InMobi’
MAJOR DISCUSSION POINT
Event management and speaker transitions
Naveen Tiwari
1 argument, 90 words per minute, 19 words, 12 seconds
Argument 1
Acknowledgment of the AI Impact Center event organizers for hosting the discussion
EXPLANATION
Naveen Tiwari begins his presentation by expressing gratitude to the event organizers, specifically mentioning the AI Impact Center. This represents standard opening remarks acknowledging the hosting organization and setting a positive tone for his presentation.
EVIDENCE
Direct statement: ‘I must congratulate the event organizers, the AI Impact Center’
MAJOR DISCUSSION POINT
Event acknowledgment and opening remarks
Agreements
Agreement Points
AI’s transformative potential for content creation and India’s leadership opportunity
Speakers: Speaker 1
We’re now entering an era of creation where AI will make India a creation civilization with collapsing production costs and faster cycles
India can lead the next storytelling civilization by 2030 with 10 million AI-assisted creators and real-time cinematic production
India has demographic energy, linguistic complexity, cultural depth from 5-6 thousand years of storytelling, and a strong entrepreneurial ecosystem
Speaker 1 presents a comprehensive vision where AI technology will fundamentally transform content creation, positioning India as a global leader due to its unique advantages in demographics, culture, and entrepreneurship
Recognition of technological infrastructure evolution
Speakers: Speaker 1
Technology stack evolution from production pipelines to creative intelligence systems with generative engines and autonomous creative cycles
By 2030, intelligence rather than cameras will become the primary tool of storytelling
Speaker 1 acknowledges that the fundamental technological infrastructure for content creation is evolving from traditional methods to AI-driven systems
Need for business model innovation in the AI era
Speakers: Speaker 1
The industry must transition from finite content production to infinite content possibilities enabled by generative AI
Traditional business models based on advertising and subscriptions need reimagination toward community-to-commerce approaches
Speaker 1 recognizes that the shift to AI-generated content requires fundamental changes to traditional business models and revenue approaches
Similar Viewpoints
AI technology will democratize content creation and provide India with competitive advantages due to its complex linguistic and cultural environment
Speakers: Speaker 1
We’re now entering an era of creation where AI will make India a creation civilization with collapsing production costs and faster cycles
Four pillars of AI storytelling civilization: every creator becomes a studio, every language becomes global, stories become participatory, and culture becomes exportable
Indian AI models have been trained on chaos and complexity, providing unique advantages over other markets
The entertainment industry has seen limited format innovation despite technological advances, with micro dramas representing the first significant new digital format
Speakers: Speaker 1
Micro dramas represent the first truly digital format in 20 years, requiring character establishment in just 15 seconds versus traditional longer formats
The last 15 years represented a passive consumption era with streaming platforms, but format hasn’t fundamentally changed in 20 years
Unexpected Consensus
Limited format innovation despite technological advancement
Speakers: Speaker 1
The last 15 years represented a passive consumption era with streaming platforms, but format hasn’t fundamentally changed in 20 years
Micro dramas represent the first truly digital format in 20 years, requiring character establishment in just 15 seconds versus traditional longer formats
It’s unexpected that despite major technological disruptions in streaming and digital platforms over two decades, fundamental content formats remained largely unchanged until the recent emergence of micro dramas
India’s chaos-trained AI models as competitive advantage
Speakers: Speaker 1
Indian AI models have been trained on chaos and complexity, providing unique advantages over other markets
The framing of India’s linguistic and cultural complexity as a training advantage for AI models presents an unexpected positive perspective on what might traditionally be viewed as development challenges
Overall Assessment

The discussion shows strong internal consistency in Speaker 1’s vision for AI-powered content creation, India’s leadership potential, and the need for business model innovation. There is alignment across arguments about technological transformation, India’s competitive advantages, and industry evolution requirements.

High level of internal consensus within Speaker 1’s presentation, with coherent arguments supporting a unified vision of AI-driven content creation revolution. The implications suggest confidence in India’s potential to lead global storytelling transformation by 2030, though this represents a single speaker’s perspective rather than multi-stakeholder consensus.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

No disagreements identified among speakers

There is no disagreement present in this transcript. Speaker 1 delivered a comprehensive presentation on AI’s transformative impact on storytelling and content creation, with Speaker 2 serving only as a moderator for transitions, and Naveen Tiwari providing brief opening acknowledgments. All speakers appear to be aligned on the positive potential of AI in the digital economy and content creation space.

Takeaways
Key takeaways
The media industry is transitioning from a 20-year passive consumption era to an active creation era powered by AI, with collapsing production costs and faster cycles
India has unique advantages to lead the next storytelling civilization by 2030, including demographic energy, linguistic complexity, cultural depth, and entrepreneurial ecosystem
AI will fundamentally transform storytelling infrastructure through four pillars: every creator becomes a studio, every language becomes global, stories become participatory, and culture becomes exportable
Micro dramas represent the first truly digital format innovation in 20 years, requiring rapid character establishment in 15 seconds versus traditional longer formats
By 2030, intelligence rather than cameras will become the primary tool of storytelling, with potential for 10 million AI-assisted creators in India
The industry must evolve from finite content production models to infinite content possibilities, requiring business model reimagination from advertising/subscription to community-to-commerce approaches
Future entertainment will feature seamless transitions between audio, video, games, live events, and extended reality, with platforms living inside stories rather than stories living on platforms
Resolutions and action items
India should aim to have 10 million AI-assisted creators by 2030
Development of regional studios and real-time cinematic production capabilities
Creation of immersive, devotional, cultural platforms leading to mainstream events
Transition business models away from traditional advertising and subscription toward community-to-commerce approaches
Unresolved issues
How to manage the transition from finite to infinite content production without industry sustainability issues
Specific implementation strategies for reimagining business models beyond traditional advertising and subscription
How to handle the potential oversupply of content when moving from millions to billions of creators
Practical steps for developing the technology stack from production pipelines to creative intelligence systems
Methods for ensuring quality control and curation in an environment with exponentially more creators
Suggested compromises
None identified
Thought Provoking Comments
I believe the camera will no longer be the primary tool of storytelling. Intelligence will.
This statement fundamentally challenges the traditional paradigm of filmmaking and content creation. It suggests a revolutionary shift from physical tools and equipment to AI-driven intelligence as the core creative instrument, which reframes how we think about the entire creative process.
This comment serves as a pivotal moment that transitions the discussion from describing current trends to making bold predictions about the future. It establishes a clear demarcation between traditional content creation methods and the AI-driven future, setting up the framework for discussing the broader implications of this technological shift.
Speaker: Speaker 1
We’re moving from a world of finite… to infinite content. No industry is in a position to handle this scale – if I go from 30 million tons of cement capacity to adding 30 million tons all over again, it’s not sustainable.
This analogy brilliantly illustrates one of the most profound challenges of the AI content generation era. By comparing content to cement production, the speaker makes the abstract concept of infinite content generation tangible and highlights the unprecedented business model disruption this represents.
This comment shifts the discussion from the exciting possibilities of AI to the sobering realities and challenges. It introduces a note of caution and complexity, forcing consideration of sustainability and business model reimagination. This creates a more balanced and realistic perspective on the AI content revolution.
Speaker: Speaker 1
The next Disney, our own YRF or Marvel may not be a company but it could very well be a community.
This insight challenges the fundamental structure of the entertainment industry by suggesting that traditional corporate hierarchies may be replaced by decentralized creative communities. It reimagines how creative enterprises might be organized in an AI-enabled world.
This comment introduces a paradigm shift in thinking about entertainment industry structure, moving the conversation from technological capabilities to organizational and social implications. It suggests a democratization of content creation that could reshape power dynamics in the industry.
Speaker: Speaker 1
Micro dramas are the first truly digital format that have emerged in 20 years… In 15 seconds, they are unabashed, they don’t care, they will show you a face, it’s a billionaire playboy.
This observation identifies a genuine format innovation and explains how it represents a fundamental shift in narrative structure and pacing. The speaker recognizes that micro dramas aren’t just shorter content, but represent an entirely new storytelling grammar adapted to digital consumption patterns.
This comment provides concrete evidence for the speaker’s broader thesis about format evolution, grounding abstract concepts in a real, observable phenomenon. It helps the audience understand how AI-driven changes are already manifesting in current content formats.
Speaker: Speaker 1
Civilizations are not defined by the tools they use. They are defined by the stories they tell… By 2030, let it be said that India just did not scale AI. We narrated it.
This philosophical conclusion elevates the entire discussion from a technical or business conversation to a cultural and civilizational one. It reframes AI development as fundamentally about human expression and cultural identity rather than just technological advancement.
This closing statement provides a powerful synthesis that ties together all the previous points while inspiring a vision of India’s unique role in the AI storytelling revolution. It shifts the conversation from ‘how’ to ‘why’ and positions cultural depth as a competitive advantage in the AI era.
Speaker: Speaker 1
Overall Assessment

Speaker 1’s presentation represents a masterful progression from current state analysis to future vision, punctuated by several paradigm-shifting insights. The most impactful comments work together to build a comprehensive argument: starting with format evolution (micro dramas), moving through structural changes (communities replacing companies), addressing practical challenges (finite to infinite content), and culminating in a civilizational vision. The speaker effectively uses concrete analogies and examples to make abstract concepts accessible, while the philosophical framing elevates the discussion beyond mere technological trends to cultural and societal transformation. The progression creates a compelling narrative arc that positions India not just as an AI adopter, but as a potential leader in AI-driven storytelling civilization.

Follow-up Questions
How can business models be reimagined to move beyond traditional advertising and subscription models in an infinite content generation world?
Speaker 1 identified this as a critical challenge when moving from finite to infinite content creation through AI, emphasizing the need for new sustainable business models and suggesting community-to-commerce integration as a solution
Speaker: Speaker 1
What are the practical implementation strategies for real-time content creation based on audience feedback and consumption patterns?
Speaker 1 described a vision where episodes 11, 12, and 13 are created based on feedback from viewers watching episode 4, but didn’t elaborate on the technical and logistical details of implementing such systems
Speaker: Speaker 1
How can India’s linguistic complexity and cultural depth be effectively leveraged in AI storytelling models?
Speaker 1 mentioned that Indian AI models trained on linguistic chaos and complexity present opportunities, but didn’t provide specific strategies for capitalizing on this advantage in the global market
Speaker: Speaker 1
What will be the impact on traditional entertainment industry jobs and roles as AI becomes the primary storytelling tool?
Speaker 1 predicted that intelligence rather than cameras will become the primary storytelling tool by 2030, but didn’t address the implications for traditional filmmakers, directors, and other industry professionals
Speaker: Speaker 1
How can the transition from platforms hosting stories to stories hosting platforms be practically implemented?
Speaker 1 described a fundamental shift in how content and platforms interact but didn’t provide concrete examples or implementation frameworks for this paradigm change
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Safe and Responsible AI at Scale: Practical Pathways

Safe and Responsible AI at Scale: Practical Pathways

Session at a glance: Summary, keypoints, and speakers overview

Summary

This panel discussion focused on making data “AI-ready” to bridge the gap between valuable information trapped in documents and the potential of artificial intelligence to make it accessible and useful. Shalini Kapoor, the moderator, opened by highlighting how enterprises and organizations possess a wealth of information stuck in PDFs and documents that people are reluctant to share with AI due to trust and safety concerns. She emphasized the need to make data interoperable, safe, and trusted so it can be effectively utilized by AI systems.


Rohit Bardawaj from India’s Ministry of Statistics stressed the importance of establishing a uniform definition and framework for AI readiness, noting that many people don’t understand what it takes to make data truly AI-ready. He outlined key requirements including cataloging data in machine-readable formats, providing proper metadata, creating context files, and structuring data with defined dimensions and attributes. Prem Ramaswami from Google’s Data Commons project discussed their open-source approach to creating knowledge graphs from multiple datasets, emphasizing that data should be federated and governed locally rather than centralized. He advocated for using AI as a tool to supplement human intelligence rather than replace it.


Ashish Srivastava brought a practitioner’s perspective, highlighting three critical challenges: data interoperability across fragmented systems, contextualization with domain-specific vocabularies, and data verification rather than relying solely on declared information. The panelists agreed that AI readiness requires combining structured knowledge graphs with large language models, implementing proper governance frameworks, and creating incentive models for data sharing. They concluded that while the technology shows promise, success depends on establishing trust, maintaining data sovereignty, and building collaborative frameworks between institutions and industry to create a sustainable data economy.


Keypoints

Major Discussion Points:

Data Fragmentation and Silos: The discussion highlighted how valuable information remains trapped in PDFs, documents, and isolated systems across enterprises and government organizations. This creates an “information divide” where entrepreneurs and citizens cannot access relevant data (like government schemes or compliance information) that could benefit them, even when using AI tools.


AI-Ready Data Framework and Standards: The panelists emphasized the need for a unified framework to define what makes data “AI-ready.” This includes creating machine-readable catalogs, proper metadata, context files, business glossaries, and standardized codes. The discussion stressed moving beyond PDF-based documentation to structured, interoperable formats.


Trust, Safety, and Data Governance: A central theme was balancing data accessibility with security and trust concerns. The conversation explored federated data models where organizations maintain control over their data while making it AI-accessible, and the importance of verifiable versus declared data for decision-making.


Practical Implementation and Tools: The panel showcased real-world solutions like Data Commons (open-source platform for statistical data), MCP servers for data interoperability, and the concept of “data boarding passes” for B2B data access. These tools aim to make data accessible without requiring users to leave their existing workflows.


Business Models and Incentive Structures: The discussion addressed sustainability of data platforms through various funding models, from government-funded public data to commercial licensing. The panelists introduced the “GIVE” model (Guaranteed trust, Incentives, Value, Exchangeability) as a framework for creating viable data economies.


Overall Purpose:

The discussion aimed to address the challenge of making vast amounts of existing data (particularly in government and enterprise settings) accessible and usable for AI applications, while maintaining data sovereignty, trust, and creating sustainable business models for data sharing.


Overall Tone:

The tone was collaborative and solution-oriented, with industry experts and government representatives working together to identify practical approaches. While acknowledging significant challenges (data silos, trust issues, technical complexity), the conversation remained optimistic about the potential for creating AI-ready data infrastructure. The tone was technical but accessible, with speakers using real-world examples to illustrate complex concepts. There was a sense of urgency about the need to start building these systems now, despite imperfections, rather than waiting for perfect solutions.


Speakers

Speakers from the provided list:


Shalini Kapoor: Panel moderator, works on AI and data initiatives, mentioned working on Amul AI and Bharat Vistar projects, associated with People Plus AI website


Rohit Bardawaj: From MOSPI (Ministry of Statistics and Programme Implementation), statistician, works on AI readiness of data and has published papers on the topic


Prem Ramaswami: From Google, works on Data Commons project, previously worked on Google Search, focuses on making public data more accessible through open source platforms


Ashish Srivastava: Industry practitioner and solution builder with three decades of experience, currently heading A4I lab (AI Innovation for Inclusion Initiative) – a collaboration between Microsoft and IIIT Bangalore, previously headed a Gen AI company, works on AI for social problems


Audience: Multiple audience members who asked questions during the Q&A session


Speaker 1: Asked a question about setting up Data Commons instances


Additional speakers:


None identified beyond the provided speaker names list.


Full session report: Comprehensive analysis and detailed insights

This panel discussion, moderated by Shalini Kapoor, brought together experts from government, industry, and technology sectors to address the challenge of transforming existing data repositories into “AI-ready” formats. The conversation took place against the backdrop of significant AI developments, including the Prime Minister’s launch of Amul AI that same morning.


Framing the Problem: The Information Divide

Kapoor opened by highlighting a fundamental challenge in today’s data landscape. She illustrated this with a concrete example: an entrepreneur in Nagpur seeking information about biotechnology subsidies available through government schemes. Despite the existence of relevant programmes offering substantial support for women in biotechnology, this information remains buried in government notifications that are neither discoverable through standard search engines nor accessible via current AI systems.


This scenario exemplifies what Kapoor termed the “information divide”—where valuable information exists digitally but remains trapped in organisational silos, locked away in PDFs, documents, and legacy systems across enterprises and government institutions.


Government Perspective: Building Technical Infrastructure

Rohit Bardawaj from MOSPI (Ministry of Statistics and Programme Implementation) challenged the panel’s fundamental assumptions by conducting an audience poll that revealed no uniform understanding of what constitutes “AI-ready” data. He argued that this definitional gap represents the primary barrier to progress, shifting the conversation from technical solutions to foundational framework development.


Bardawaj outlined key requirements for AI-ready data: machine-readable cataloguing systems (preferably JSON or XML rather than PDFs), comprehensive metadata, context files, business glossaries for domain-specific terminology, and structured databases with clearly defined attributes. His ministry has implemented these principles through their Model Context Protocol (MCP) server, which he described as creating a “universal socket” that enables any large language model to access verified government statistical data.
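The cataloguing requirements Bardawaj outlines can be illustrated with a minimal sketch. The field names and the `validate_catalog_entry` helper below are illustrative assumptions for this report, not MOSPI's actual schema.

```python
import json

# Illustrative AI-ready catalogue entry: machine-readable metadata, a context
# description, and explicitly typed dimensions and attributes. The field names
# are hypothetical, not an official MOSPI schema.
catalog_entry = {
    "dataset_id": "cpi-rural-monthly",
    "title": "Consumer Price Index (Rural), monthly",
    "format": "json",  # machine-readable, not a PDF
    "context": "Index of retail price changes for a rural consumption basket.",
    "dimensions": [
        {"name": "state", "type": "string"},
        {"name": "month", "type": "date"},
    ],
    "attributes": [
        {"name": "index_value", "type": "float", "unit": "index"},
    ],
}

def validate_catalog_entry(entry: dict) -> bool:
    """Check the minimal fields an AI agent would need before using a dataset."""
    required = {"dataset_id", "title", "format", "context",
                "dimensions", "attributes"}
    return required.issubset(entry) and entry["format"] != "pdf"

print(validate_catalog_entry(catalog_entry))  # the entry above passes
print(json.dumps(catalog_entry)[:40])         # serialises to machine-readable JSON
```

The check encodes the session's two recurring points: every required piece of metadata must be present, and the format must not be a PDF.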


The success of this approach was demonstrated through creative applications, with Bardawaj’s favourite example being the analysis of grain price inflation based on references in Tamil songs. This illustrated both the technical feasibility and unexpected potential of properly structured AI-ready data.


Open Source Approaches to Data Access

Prem Ramaswami from Google’s Data Commons project provided a complementary perspective on making public data accessible through open-source, federated approaches. His work addresses the tension between data accessibility and data sovereignty by enabling organisations to maintain local control while participating in broader interoperability networks.


Ramaswami explained that Data Commons combines structured knowledge graphs with large language models to create AI search engines capable of quickly accessing and analysing multiple datasets simultaneously. He emphasised that this addresses a fundamental limitation of human cognition—our difficulty processing multi-dimensional problems that require computational assistance.
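The knowledge-graph side of the design Ramaswami describes can be sketched minimally: facts live in a structured graph, and the language model only formulates the query and narrates the result. The triples, figures, and `query` helper below are hypothetical placeholders, not the actual Data Commons API.

```python
# A toy knowledge graph as subject-predicate-object triples.
# All figures are placeholders, not real statistics.
triples = [
    ("India", "population", 1_400_000_000),
    ("India", "containedIn", "Asia"),
    ("Kerala", "containedIn", "India"),
    ("Kerala", "literacyRate", 96.2),
]

def query(subject: str, predicate: str):
    """Grounded lookup: answers come from stored facts, never generated."""
    for s, p, o in triples:
        if s == subject and p == predicate:
            return o
    return None  # the honest answer when the graph holds no such fact

print(query("Kerala", "literacyRate"))  # retrieved from the graph
print(query("Kerala", "population"))    # None: no hallucinated fallback
```

The design choice this illustrates is the one the panel kept returning to: the LLM supplements the lookup rather than replacing it, so a missing fact surfaces as `None` instead of a fabricated number.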


His vision extends to democratising data analysis capabilities, particularly for India’s 74 million micro, small, and medium enterprises, enabling them to access sophisticated analysis without requiring expensive data science teams. The federated model allows organisations to maintain governance over their information while contributing to broader knowledge networks, with successful implementations including work with UN Statistical Department managing data from WHO, ILO, and other agencies.


Industry Implementation Challenges

Ashish Srivastava from IIIT Bangalore brought a practitioner’s perspective, identifying three fundamental problems that AI-ready data must address: interoperability across fragmented systems, contextualisation for domain-specific applications, and verification of data quality.


His work in women and child health illustrated the interoperability challenge, where critical information about child development is split between different government departments—nutrition data managed by Women and Child Development, while birth and immunisation data resides with Health and Family Welfare departments.


On contextualisation, Srivastava noted that while large language models are improving at general tasks, they consistently fail with specialised terminology. His team creates comprehensive glossaries that work alongside LLMs to provide accurate domain-specific translations, requiring significant upfront investment but proving essential for reliable performance.
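The glossary-plus-LLM approach might be sketched roughly as below: domain terms are swapped for placeholder tokens before translation so a general model cannot mistranslate them, then restored from the curated glossary afterwards. The glossary entries, token format, and `fake_translate` stub are illustrative assumptions, not the team's actual pipeline.

```python
# Curated domain translations (romanised here for readability); a real
# deployment would hold thousands of such entries per language.
GLOSSARY = {
    "refractive index": "apavartanank",
    "momentum": "samveg",
}

def protect_terms(text: str):
    """Swap glossary terms for stable placeholders the model passes through."""
    mapping = {}
    for i, term in enumerate(GLOSSARY):
        if term in text:
            token = f"<T{i}>"
            text = text.replace(term, token)
            mapping[token] = GLOSSARY[term]
    return text, mapping

def restore_terms(text: str, mapping: dict) -> str:
    """Replace each placeholder with its curated domain translation."""
    for token, translation in mapping.items():
        text = text.replace(token, translation)
    return text

def fake_translate(text: str) -> str:
    # Stand-in for the real LLM translation call.
    return text

protected, mapping = protect_terms("Define refractive index.")
print(restore_terms(fake_translate(protected), mapping))
```

The upfront cost sits entirely in building the glossary; at run time the LLM never sees the vulnerable terms, which is why the contextualisation can stay invisible to the end user.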


Srivastava also highlighted the verification problem in public data, where survey-based information often relies on self-declared responses rather than verified facts. He referenced a conversation with an MIT mathematician about LLMs being inherently probabilistic systems, meaning they can never achieve perfect consistency, making external guardrails and human oversight essential.


Economic Models and Sustainability

The discussion revealed important insights about the economic realities of data infrastructure. Bardawaj noted that “open data is not free data,” explaining that MOSPI operates under a tiered model where research use remains free while commercial applications require compensation, reflecting substantial costs in data collection and maintenance.


Kapoor introduced the concept of “data boarding passes” as a standardised approach to B2B data access, providing efficient onboarding processes for organisations to access AI-ready data systems. She also touched on the “give data, give model” concept and the importance of creating proper incentives for data sharing.


Addressing AI Limitations and Governance

A significant portion of the discussion focused on current AI system limitations. Bardawaj presented evidence that identical prompts applied to the same datasets can produce different results, highlighting reliability concerns. Kapoor mentioned her team’s ongoing benchmarking work to address consistency issues across different AI models.
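The kind of consistency benchmark alluded to here could be sketched as follows. `ask_model` is a stub standing in for a real LLM call, and the exact-match scoring is a deliberately simple illustrative assumption, not the team's actual methodology.

```python
from collections import Counter

def consistency_rate(ask_model, prompt: str, runs: int = 5) -> float:
    """Fraction of runs that return the single most common answer.

    1.0 means the model answered identically every time; lower values
    quantify the instability the panel flagged.
    """
    answers = [ask_model(prompt) for _ in range(runs)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / runs

# A perfectly deterministic stub scores 1.0; a flaky model would score lower.
print(consistency_rate(lambda p: "42", "What was rural CPI in May?"))
```

Exact-match agreement is the crudest possible metric; a real benchmark would likely need semantic comparison, since two differently worded answers can still be consistent.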


The panel revealed that making data AI-ready is fundamentally a governance challenge rather than merely technical. The audience poll demonstrated that while technical solutions are important, primary barriers involve coordination, standardisation, and institutional alignment across different organisations.


Practical Applications and Future Directions

The discussion included practical examples and audience questions about business models and coordination challenges, such as road construction projects spanning multiple districts. Speakers repeatedly referenced booth demonstrations where attendees could see these solutions in action.


Srivastava emphasised that AI comprises only 10-15% of effective solutions, with the remaining 85-90% consisting of supporting infrastructure. This reframes expectations about AI deployment and highlights the importance of comprehensive system design.


Key Takeaways

The panel concluded that creating AI-ready data infrastructure requires hybrid solutions combining government frameworks, open-source approaches, and practical implementation experience. Success depends on institutions’ ability to coordinate effectively, establish shared standards, and maintain long-term commitments to data quality and accessibility.


The emphasis on federated governance models, sustainable economic frameworks, and appropriate safeguards provides a roadmap for developing systems that serve both local needs and broader development objectives. However, as the panellists acknowledged, this represents a long journey requiring sustained effort and collaboration across sectors.


Session transcript: Complete transcript of the session
Shalini Kapoor

Deep work on fragmented data silos. As you all know, AI thrives on data. And today, most of the LLMs, what they have done is, they’ve definitely scraped the internet and they’re doing really well. But the value of the answer an LLM would give depends on what it can fetch from the actual data, which means in enterprises and organizations, there’s a wealth of information. There’s a wealth of information stuck in PDFs, stuck in documents, which people fear giving to AI. So there is a fear, there’s a lack of trust today, and that data stays where it is, just digitized. So, for example, there could be an entrepreneur, say in Nagpur, wanting to know about the scheme that applies to the biotechnology plant that she wants to put up in Nagpur.

Now, if you see, the MSME industry has a scheme for her, for women, for biotechnology. And, you know, it’s a very good subsidy that’s available. But where is it stuck? It’s stuck in a government notification which came out, which she’s not aware of. And what she is doing is she’s actually going to LLMs and asking that question, and she’s not getting it. She’s also searching in various places. She doesn’t get it. So that’s the divide, the information divide, which exists. And the information which is stuck in documents, or even in digitized form, has to be AI ready, so that in a safe, trusted manner, and these two are very important, safe and trusted, the data can be linked, made useful, and then made available.

Now, this is a long journey. It’s not an easy journey, because the data journey is about how you clean the data, make it ready, link it, make it relevant, make it useful, and then present it in a manner so that you have a choice of various elements. I mean, we live in the age of choice, right? We don’t want to be locked into anything in particular. So that’s the data problem that we have in front of us. The opportunity is humongous. I’ll give you an example: I’m talking to an organization which handles 3,000 entities, and those 3,000 entities actually manage 5 million new compliances in a year.

They have those kinds of queries, 5 million queries on new compliances. Forget existing compliances, because there are new compliances which get generated by the government, by various bodies, and then they have to search. So the problem is humongous, and it can be bridged. It can be bridged, but we have to think about how to make data interoperable, useful, and AI ready. So with that background, I’d like to get into our panel and talk to some of the experts that we have today. My first question is to Rohitji, who is from MOSPI. India generates a vast amount of statistical and administrative data. MOSPI actually, for all of you, calculates the GDP for India. They have the source of all the data at the village and taluka level, so the data is there. But as you think about making data AI ready, what do you think is the responsibility of an institution, and yours is an institution, to make the data trusted, safe, and available to all?

Rohit Bardawaj

Thank you, Shalini ji. Good morning, everyone. So, trusted, safe, and AI ready for everyone. I would like all of you to take a step back on this, and let us just understand: do we have a uniform definition of what AI readiness is at this point in time? Do we? And I’ll not say that it’s not there in the ecosystem, it is there in the ecosystem, but do we have an agreement about it? So there are two issues we need to understand when we talk about AI readiness of data. One is, let me just go back to today’s conversation I had with one of my colleagues over a WhatsApp group, you know, we all are very active there.

So one of my papers has just been accepted at one of the largest conferences, and it’s about AI readiness of data, and he asked me what’s so great about it. So I asked, why, what is not so great about it? He said, I put Bangla into ChatGPT and it completely understands, so what’s new in what you are doing? So the point I’m trying to make is people are not aware of what it takes to make data AI ready. We all understand, and then he told me, no, it’s not understanding, and he talked about one of the dialects of this country, and we have a huge number of dialects. And, Shalini ji, he asked me: how do I train ChatGPT on this dialect?

I said, it’s not my job, it’s Sam Altman’s job. So the issue here is that we don’t know. And that is the biggest responsibility of institutions like MOSPI: to make people aware of what AI readiness is all about. And then, if I start talking about how there should be a context file, there should be semanticity, there should be metadata, to many of us it would not make sense. So the first idea is to create an agreed framework. Not only me, it’s not about my way or the highway; all of us work together, create that framework, and put it up for people to know.

The first thing I would do, and I plan to do it literally, is try to create a slide deck showing what AI can see and what a human can see. If my folder has 10 versions of a budget, 1, 2, 3, 4, 5, 6, and I ask a question from that folder, some answer will come from budget one and some answer will come from budget two, because unlike a human, who is focused on this question, AI is designed to scan the entire thing available. So it’s a big difference between human and AI: I can be focused; once I give a thing to AI, it will just scan everything it has in its domain. So I would say, not taking much of your time, the starting point should be that we create this framework, have a shared understanding, have a core AI readiness part and an aspirational AI readiness part, and work on it.

Shalini Kapoor

Yeah, I think that’s very relevant, because you cannot leapfrog into everything. You can have aspirational goals, but the foundation is very, very important, and everybody joining that foundation exercise is really important. I’ll go to you, Prem, and talk about data. Data Commons aims to make public data more accessible and usable. You’re from Google, and you have put all this in open source. You’ve been working on making US Census data available. Tell us some more about your experiments and how Data Commons is ready, or prepared, to work on this challenge.

Prem Ramaswami

Thank you for having me here on this panel today. I think one of the areas I’ll start with is the importance of coming to that understanding on AI-ready data, but understanding that the field itself is moving quite quickly at the same time. So whatever agreements we come to today, in six months it feels like we’re dealing with a brand new technological landscape that we’re staring down. What Data Commons tried to do was say: if we can get our data in that machine-readable format, which means structured, which means machine-readable metadata also, and a format where that format specification is not stuck behind a 500-page PDF, right? Can we make that in a way that the machine can understand it, interpret it, and then use it?

Our theory behind this is that the idea of a knowledge graph from that data, combined with a large language model, gives you a much better chance of success in answering your question. So at Data Commons, what we try to do is bring multiple data sets globally together in a common knowledge graph and then put an AI search engine on top of it so that you can quickly access that data. You can play with this yourself at datacommons.org. But what we did is we open-sourced the entire stack, because this idea that the data is centralized with one source is the dangerous part, and it shouldn’t be, right? The data should be federated.

It should be located at every organization and governed locally by the organizations that are using it. And so one of the things we’ve done by open-sourcing that stack is allowed, for example, the United Nations Statistical Department to use Data Commons as their back end. And so, you know, the UN SDGs, WHO data, ILO data, and so forth, are all stored in this common interoperable database now, where instead of a data analyst spending 80% of their time renaming column headers, they can actually focus on the data analysis, so that we can get the impact and the outcomes we want to see. Hope that helped answer the question.

Shalini Kapoor

Yes, yes, no, absolutely. I’ll poke you a little bit more to understand: for Data Commons, what’s the vision you have?

Prem Ramaswami

So, a very simple vision, right, which is to make data-aware decision making the easy answer to take. Today, right now, the majority of the world is flying blind. Whether you’re one of those 74 million MSMEs in India, you can’t afford a bevy of computer scientists and data scientists that you can hire; you pay a tax to play with any data. If you’re a policymaker thinking about climate change, poverty, education, health, these are holistic problems. It’s no longer “I can go to one ministry, pull one spreadsheet, and solve poverty.” I need to endemically understand: how does education, how do health outcomes, how do income and economy, how do all of these affect poverty locally, right? And that’s the problem we have today, that the world is a multi-dimensional problem. The other problem is that our brains are not inherently multi-dimensional. Our brains are great in three dimensions; you add a fourth dimension, which is time, and we’re okay, right? But look at climate change: you add time, and it’s greater than our lifetime, and we can’t think about it, which is why we’re not solving it. The majority of problems are 50- or 60-dimensional problems. Machines are really good at this, by the way.

And humans are good at using tools that are good at doing things we’re not. And this is where we have to approach AI as a tool we can use. Not as the answer, but as a tool we can use to derive the answer, to supplement our brains in the areas where we’re not.

Shalini Kapoor

I’ll poke you a little bit more, but later on.

Rohit Bardawaj

Shalini ji, I just want to take a second stab at that, just a quick interjection. I’m a statistician, so I’ll be very happy if some of my work can be done by AI, you know, all those large language models. I just read a paper this morning. It was written by two undergraduates from a Canadian university, and they said, and they proved it, that if you give the same prompt to AI with the same data set, it gives you two types of analysis. So this is something I just wanted to flag: we should not be really gung-ho about things which are still untested. But yes, I would be the first to adopt AI and use it for my work, but it needs to be, as you rightly put it, trustworthy.

Shalini Kapoor

Yeah, let me just comment on this, the stability of an answer, that’s what you’re talking about. We are actually working to create a benchmark on this, because we are seeing the same thing. Amul AI was launched this morning by the Prime Minister, and the same thing applies to Bharat Vistar: if you ask the same question multiple times across LLMs, and also ask one LLM many times through different farmers, in both cases you get different answers. Can we make that into a benchmark? That’s what we are working on, because this is a benchmark which is really needed on the ground, right? So that’s the part I wanted to comment on.

I’ll go to Mr. Ashish. You’re from the industry, and you work with IIIT Bangalore. Tell us more about the research in the data area, plus how institutions can help build it all together.

Ashish Srivastava

Right. So I think my perspective is more as a practitioner, because for almost the last three decades I’ve been a solution builder. So I have seen data not from the data side, but from the solution side, trying to exploit it, trying to use it for solutions. And I’ll come to the institution part of it. But, you know, when I look at the data and the challenges associated with it: for the last 10, 12 years, I’ve been working on AI for social problems, like women and child health, where I worked for almost a decade. Now, one of the things I realized is that the world is fast moving to where you don’t manage a transaction.

You manage a journey. Okay, and that is the agentic AI and all those things that we are talking about. Now, when I was working a few years back on women and child data, I realized how fragmented it is. Take the two main data sets: if you look at a child’s health, his anthropometric data, his nutrition data, is with Women and Child Development through their Anganwadi program. The birth data, the immunization data, and a lot of other data is with the Health and Family Welfare department. And if you have to have integrated decision making about what needs to be done for that child, then you have to look at both sets of data. But that burden of orchestration falls on the person who is building the solution; the data does not by itself flow through the workflow. And that is one of the biggest problems we have to solve: we look at data sets in isolation, but we don’t look at how data flows through the process. The second thing is contextualization. We all have read the book, at least some of us, that says raw data is an oxymoron. Data always resides in a particular context, with some standardization associated with it, so that you can make some sense out of it.

Now with education, when we were working recently, we realized that LLMs are becoming increasingly good at translation, at least with the main languages, not with all the dialects. But the moment they hit any domain-specific vocabulary, that’s when they start failing. Even a class 6 physics question, all these frontier models are not able to properly translate. So we came up with a solution of using a glossary combined with the LLM, so that it does a decent job in terms of overall translation and the contextualization is transparent to the user. And the third thing which I have faced a lot is that when we talk of public data, a lot of it is declared data and not verified.

Not verifiable data. Especially when a lot of planning depends on surveys, and a lot of survey data is actually declared data. Whether you have hypertension or not: yes, no. Whether you have this problem: yes, no. What is the verification? No doctor has actually verified that, and you are going to make a decision based on it. So in my opinion, AI-ready data has to solve these three big problems: it has to be interoperable, it has to be contextual, and, the third problem I was describing, it should be verifiable, and governable as an extension of that.
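As an editor’s illustration of the glossary-plus-LLM translation approach Ashish describes, here is a minimal sketch: protect domain terms with placeholders before the LLM translates, then substitute the curated glossary translations back in. The glossary entries, the placeholder scheme, and the `translate_with_llm` stub are all hypothetical; a real system would call an actual translation model.

```python
# Hypothetical sketch of glossary-assisted LLM translation.
# Curated glossary of domain terms (English -> Hindi physics vocabulary);
# entries are illustrative, not from any real glossary.
GLOSSARY = {
    "refraction": "अपवर्तन",
    "focal length": "फोकस दूरी",
}

def translate_with_llm(text: str) -> str:
    # Stand-in for a real LLM translation call; in this sketch it just
    # passes the text through while leaving placeholders untouched.
    return text

def glossary_translate(sentence: str) -> str:
    """Shield glossary terms from the LLM, translate, then restore the
    curated translations so domain vocabulary is never mistranslated."""
    placeholders = {}
    for i, (term, target) in enumerate(GLOSSARY.items()):
        token = f"__TERM{i}__"
        if term in sentence:
            sentence = sentence.replace(term, token)
            placeholders[token] = target
    translated = translate_with_llm(sentence)
    for token, target in placeholders.items():
        translated = translated.replace(token, target)
    return translated

print(glossary_translate("Light bends due to refraction."))
```

The key design point is that the LLM never sees the domain term at all, so it cannot mistranslate it, while the rest of the sentence still benefits from the model’s fluency.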

Shalini Kapoor

Very relevant. I think you have posed the right challenge. So Prem, I am going to come to you. Let’s just pick one of them, which is contextualization, because I am increasingly seeing that domain information is needed and people are creating these glossaries. Even in agri, when we had to roll out Mahavista, we actually created a glossary of 5,000 terms, and it has to be in Marathi, with those terms being used. And I know we did some experiments and we have created a sandbox environment; you have done it for India. So why don’t you explain how contextualization and domain knowledge can be added to Google Data Commons, and how it can be helpful.

Prem Ramaswami

I think this idea of contextualization and localization is very important. At the end of the day, these are large language models, language being the key word there; they’re not data models. And so, to what Mr. Bhardwaj said earlier, what you want to be able to do is use them to write code to manipulate data, because code is language, but you don’t necessarily want them to be producing data on their own. And one of the problems that you have today is that those large language models are essentially trained largely off the web, which has its own biases inherent in it, both language- and locality-wise. And then on top of that, there is the example you used of the full folder of all the budgets, right?

The example I like to use for this is actually: if you ask a large language model about a celebrity that recently had a breakup, it’ll tell you they’re together, because it doesn’t know what just happened over the last month, right? It’s very sad. And so this is where you can use the combination of, you know, you called it a glossary, I always call it a knowledge graph. What is that factual basis of information that I can put together? Now, it’s always going to be a subset of the whole, right? I might be able to cover maybe 0.1% of the world’s information with a knowledge graph. But if I can ground it in those facts, can I then utilize the intelligence of the large model to help me produce some knowledge from those facts, or fill in the gaps in those facts?

And so this, I think, is an opportunity that we actually have in the technology to move it forward. This is one of the areas that we’re actively working on as a team. But again, to do that, you first need that glossary of facts, right? This is where having that knowledge graph of statistical data, even if imperfect at this moment, because it is survey collected. It is dependent on the quality of the question asked, the error bar shown, the quality of that metadata, so on and so forth. But it is a starting point from which you can get more information and use that intelligence to potentially even find those outliers or areas that don’t match what you might be hearing on the ground.

So that’s the opportunity I think that we have.

Ashish Srivastava

I absolutely agree with you, but I will say it in more direct terms. Because sometimes we feel that LLMs, or in the previous generation, the AI models, are the solution. They are not the solution. They are only one of the inputs to the solution, and they comprise maybe 10, 15% of what you’re trying to do. It is what the rest of the 85% is doing. Yes, the LLM will give a different answer; how are you compensating, with guardrails, human in the loop, risk assessment? These are the tools which are available today. Because at the end of it, it’s a probabilistic model, come what may. And I was talking to a mathematician from MIT, and he explained why it will never become perfect. That fact is grounded in mathematics: it cannot ever become as consistent, every single time, as we want it to be, because then you would be taking away the main source of its creativity. So what you have to focus on is outside, not inside. That’s all I ever wanted to say.

Prem Ramaswami

I agree with you completely, and I started by saying it’s a tool, right? And we use tools to supplement ourselves, not to replace ourselves; to supplement our knowledge, not to replace our knowledge. So I do agree with you, it’s a tool. But we have to be careful about throwing the baby out with the bathwater here, in the sense that that tool now makes things available to the average person. It upskills the average person in a way that they couldn’t themselves before.

So if we immediately go to put guardrails, prevent access, things like that, we’re shutting out a large part of society. And I’ll say, as somebody who worked on Google Search for many years, there were many arguments in Google Search that we, for example, shouldn’t put health information on Search, because the average person isn’t smart enough to deduce information about their own health from Google. But the average person can’t afford a doctor either, right? There are endemic problems in society that prevent you from doing that. So does the answer to that question cause harm, or does it do less harm and give people a pathway that they can learn from? And so that’s an important question to ask ourselves here as we think about AI, which is: yes, it is imperfect at this moment.

Can we understand? Can we educate? Can we work inside the system that exists? But we can’t ignore it either. We can’t say it made one mistake, therefore I will not use it. And I will also call out that the imperfection of us as humans is also very much there, right? There are many times we look at these systems, you know, we look at a Waymo autonomous vehicle and we say, look, it had six accidents last year. There are 30,000 deaths from car accidents in the U.S. a year, right? So statistically speaking, this is still much safer. And so these are the sorts of examples where we have to look, understand where to apply it, how to apply it, and what the overall societal good is from using it.

Shalini Kapoor

Yeah. No, thanks. I think it’s a very relevant discussion that we are having. And there’s always a fight between: should we have a RAG architecture, or should we just give it all to the LLM to do, because it has more capacity and more, you know, GPU. But either/or is not possible. And then data sovereignty comes in: you may want to keep the data. This has been a discussion in the last two days, in most of the panels that I have been on, that you want to keep your data.

Countries want to keep the data with themselves, and they actually don’t want to train, because with the choice of LLMs you want a lot of choice and you want to use one here, another there, everywhere. So I’ll come back to you, Rohitji. We talked about administrative data, and you talked about a framework. So my question is: how do you think alternate data, secondary data beyond administrative data, can also be brought in? And for the framework you talked about, that there should be a foundational framework: if that framework is adopted by industry, one, is it possible, and two, what kind of data economy can it start?

Rohit Bardawaj

So, it’s early morning, let me take an audience poll on it. How many of you think that what Shalini asked is a governance issue? Or is it, I mean, just raise your hand if you feel it’s a governance issue. Anyone who feels it’s a governance issue? How many of you feel it’s a technological issue? What she asked: how to make alternative data ready for AI, that’s what the question was. So how many of you feel it’s a technology issue? There are no prizes for it, there’s no punishment for it, so feel free to raise your hand the way you think. It’s a technology issue. So, okay. So I am with that gentleman. I feel it’s a governance issue.

And I’ll also work on it. So what are we talking about? We are talking about data generated from different sources, be it alternative data sources, be it administrative data sources. My co-panelist just talked about getting data from different sources not aligned to each other. So it’s a governance issue, which we need to understand first. And, of course, I completely agree with Shalini when she said that we need a federated model; perhaps Prem said that too. There cannot be one sole owner for the data of this country, or for that matter of any country. Somebody needs to play the role of data steward, somebody needs to orchestrate this data ecosystem, and, being from the NSO, I have my own biases, so I’ll say the NSO can do it, but of course that’s something for the people to decide. Now let’s understand this: what do we need when we need AI-ready data? First, a cataloging of it. I’m just going to take one minute on this. You should have everything cataloged: any industry, any government organization, this is my data set, these are the indicators, these are the definitions, and so on and so forth; I’m not getting that deep into it. You need a catalog of your data. And second, that catalog should not be PDF. That catalog should be, as she was saying, machine readable, a JSON file probably. There are many other ways, but let’s talk about a JSON file.

Second point, you should have metadata for it. If you don’t have metadata for it, I mean, the other day I was on another panel with Prem, and I said the thing which irritates me the most is lack of metadata. Without it I’d be driving blind. I don’t know what the word "frequency" means; it may mean hundreds of things. So you should have metadata, and again not in PDF. Whatever I’m talking about, I mean JSON or XML; there are so many ways, but machine readable, let’s put it that way. Third, you should have a context file. So now the machine has read it, but it wants to know: where do I find the meaning of "frequency"?

So the machine should have a context file where the source is written: you go there and see, and you will find the meaning of "frequency". The metadata will not carry the meaning of "frequency"; it will only say frequency means quarterly. The machine then needs to understand what that frequency means. So that’s what she was talking about, and Prem again was talking about. That brings us to this: we need to have a business glossary. He also talked about a knowledge graph, which is, I mean, just a sophisticated version of a business glossary. That we need to have. So once we have sorted this out, we need to work out what type of codes we are working with.

So the gentleman just beside me talked about two data sources using different codes for the same thing. So then we have to standardize those codes. And then lastly, we have to structure our data. Data needs to go in a structured database. It should be defined, and that’s nothing new I’m talking about: it should be defined by dimensions, it should be defined by attributes, it should be defined by its role. So time means temporal. You can’t write "time" and expect the LLM to understand what time means; you have to say time means temporal. And once you have these ready and available, there are two use cases. And just one last quick point. One is: am I using it for my own use case?

Am I training my own model for it? Then I can put all these in one file and feed it to my model. But if I’m expected to create an MCP for my database, then I have to create separate files, put them up at a URI or URL where any model can go; the connector can direct the model to that place, that resource, and then things happen. And this is all from my personal experience, and Shaliniji knows about it, from when we developed our own MCP server.
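Rohit’s checklist, a catalog, machine-readable metadata, a context file, a business glossary, standardized codes, and typed dimensions, can be made concrete with a small sketch. The field names, codes, and URL below are illustrative assumptions, not any agency’s actual schema.

```python
import json

# Hypothetical machine-readable catalog entry following the checklist above:
# cataloged, metadata not in PDF, a pointer to a context file / glossary,
# standardized codes, and dimensions with explicit roles.
catalog_entry = {
    "dataset": "consumer_price_index",
    "description": "Monthly CPI by state and commodity group",
    "metadata": {
        "frequency": "quarterly",  # the raw value...
        # ...and where a machine finds what 'frequency' means (context file):
        "frequency_context": "https://example.org/glossary#frequency",
        "code_list": "NIC-2008",   # standardized codes in use (illustrative)
    },
    "dimensions": [
        {"name": "time", "role": "temporal"},     # 'time means temporal'
        {"name": "state", "role": "geographic"},
        {"name": "commodity", "role": "attribute"},
    ],
}

# Serialize as JSON (not PDF) so any model or MCP connector can consume it.
print(json.dumps(catalog_entry, indent=2))
```

The same structure can be fed to your own model as one file, or published at a URI for a connector to fetch, which is exactly the two use cases distinguished above.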

Shalini Kapoor

Loving it, the amount of reach-out which has happened to use the data sets. You can actually ask a question of how the price of moong dal has moved over the last whole year, or quarter-wise, or month-wise. That capability is there now. And it has happened because the data was always there: they do the calculation of the wholesale price index, the commodity price index, so the data was there. It’s just that now it is AI-ready for people to consume, take and ask, and it is connected to Claude and ChatGPT. Ashish, I’ll go to you, building on where Rohitji stopped, which is the use cases, and you come from the solution part of it. How do you visualize and imagine solutions and use cases combining, say, administrative data and alternate data? I’m not going into personal data, because there’s a lot of consent there, but at least the lot of secondary sources of data which are available. How do we combine them and make them more powerful?

Ashish Srivastava

I think, as you rightly pointed out, I come from the solution perspective. And now with agentic AI coming in, we look at every solution in the form of a journey. We are going past the mechanism of a point solution, where you ask and it reverts back with the answer. Now the use case has to decide, at which part of the journey, what data it is that you need, and that will dictate whether it is additional data sets which are outside, or a public data set. The only challenge which I see here is: who is accountable for that data? That accountability has to be embedded in the solution at the API level, at the policy engine level, which actually travel along with the solution, and it should be enforceable automatically.

If you are thinking that a human being will actually enforce that policy, it will break. It will break in no time. So that is what we are trying to do, is to create those reusable artifacts as DPIs or DPGs, it will fall into one of those categories. But where it allows those policies to be set for a data set in an easy reusable way so that everybody doesn’t have to recreate from scratch those kind of policies, and then that’s the way to move forward.

Shalini Kapoor

You mentioned your lab. I’m sorry, I provoked you into that. Tell us more about your lab. What more work are they doing?

Ashish Srivastava

So that’s my current job. Previously I was heading a Gen AI company, by the way, and I will talk separately later about the PDF challenge, which we thought we had solved; we didn’t fully, but we were on the way. But the current lab, which is very exciting, is a collaboration between Microsoft and IIIT Bangalore. A4I stands for AI Innovation for Inclusion Initiative. That means we build at large scale. The idea here is not to run pilots, where we do this small thing here and diagnose; not that. It should be population scale, and we want to launch it as a DPG so that it can be adopted widely. So we are working in the school education area.

We are working with teachers in terms of making their life easy. We are working on accessibility: how blind children can actually be taught STEM, so that they can hope to become a physicist or a mathematician. Today it’s very difficult; how do they even read a book? And the third one is working with the last-mile health workers. Our current solution is a RAG-based AI combination, but we are looking at exactly that problem you mentioned, that either it is this or that. I think there are plenty of answers which are in between. That is what we are exploring.

Shalini Kapoor

Thank you. Thank you so much. Prem, I’ll again build on the concept that we were discussing on the use cases, which can be. I mean, I just want you to paint a picture of if you have data in knowledge graphs, like what you mentioned, if the data is there and data commons is present. I just want you to visualize that what more use cases can be possible with secondary data. How can India benefit and not just India, Global South benefit from this? And please feel free to paint the use cases which you have built in the sandbox environment that you have. You can just take those examples.

Prem Ramaswami

Yeah, I’ll give two very different examples here. These might not be exactly where the sandbox is today, but where it could go tomorrow, right? One is: at the end of the day, the Ministry of Statistics does a lovely job collecting as much information as they can. The whole ministry does, the government does. But it’s a top-down data collection.

Shalini Kapoor

I’m sorry, I’ll just interrupt you. I think Rohitji will say it’s not top -down. It’s actually at the field level, it’s bottom -up.

Prem Ramaswami

That’s fair, that’s fair.

Shalini Kapoor

He will say that, it’s bottom -up.

Prem Ramaswami

That’s fair, that’s fair. You’re correct, it’s bottom-up. That said, we have alternate data sources also that are there. Sometimes they supplement and further show, yes, the data collected is correct. At times they disagree. And those disagreements are also interesting to understand, to the point of: where is the survey question flawed, or where is civil society seeing something, or having visibility into something, that we don’t have access to? And so the more of these data sets come together, these points of friction, again, this is where the human intelligence comes in. Show me the points of friction. I have a haystack full of needles; which needles do I pay attention to? Right? So this is one example if I’m at the government or Ministry of Statistics level.

Now let’s go to the completely opposite end. I’m a small business owner. I’m setting up a physical shop. Where should I set it up? Right? Where I set it up depends on mobility traffic, depends on the demographics and affordability in that space, depends on all types of things. Right? It’s a large data question. But that MSME owner is often ill -equipped to answer any of those questions, is often taking a shot in the dark. And that shot in the dark is a costly shot in the dark if they’re wrong. Right? Because they are taking the full risk of that decision. Now with the data commons that we’re building, the question becomes can we reduce that risk for that individual?

Can we help them model, understand, de-risk the decision they’re making, based on the audience they want, the footfalls they want, the location they’re choosing? That’s what we’re doing. That’s a very specific example now. But these are two very opposite examples of how bringing all of this data together, which we often think about as more aligned towards, you know, the international organizations or the government minister, is actually usable on the ground by an individual too.

Speaker 1

Tell us a bit more about, like if suppose someone wants to put up a Data Commons instance, how can they get started?

Prem Ramaswami

It’s actually quite simple. It’s easy enough that I can do it myself, which means you can. Datacommons.org is an open-source platform. We have a 20-minute guide to get started. You can set the whole thing up on your computer, have your CSV data set, bring it in. And the thing is, once you bring one data set in, it overlays with all the data sets already in Data Commons. This creates sort of a network effect between the two, right? So if I am a chain store in India trying to figure out that next store location, and I bring in all my per-store sales revenue data once, then suddenly I can compare it to, and overlay it with, the 50,000 data sets that are already in Data Commons.

Before, if I wanted to do this as a chain store in India, I would normally have my people come up with maybe 10, 12 different hypotheses, because then I have to get those 10, 12 different data sets and perform 13 different data transforms, right, so they’re all in the same format. That prevents us from having the level of creativity we want, where we can look across the entire landscape of the problem set. And so this is sort of one of the things.
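To give a rough, standard-library-only sense of the "overlay" Prem describes: once two datasets share a common place identifier, joining them needs no bespoke per-dataset transform pipeline. The place codes and figures below are made up; the real Data Commons does this mapping to shared entity IDs at much larger scale.

```python
import csv
import io

# Illustrative only: a private per-store sales CSV and a public demographic
# CSV that happen to share a place key. All values are invented.
sales_csv = """place_id,store_revenue
IN-KA-BLR,120000
IN-MH-PUN,95000
"""

public_csv = """place_id,median_income
IN-KA-BLR,45000
IN-MH-PUN,38000
"""

def rows_by_place(text: str) -> dict:
    """Index a CSV's rows by the shared place identifier."""
    return {r["place_id"]: r for r in csv.DictReader(io.StringIO(text))}

sales = rows_by_place(sales_csv)
public = rows_by_place(public_csv)

# The overlay: one merge over the shared keys, instead of a dozen
# hand-written transforms to force every data set into the same format.
overlay = {
    pid: {**sales[pid], **public[pid]}
    for pid in sales.keys() & public.keys()
}
print(overlay["IN-KA-BLR"])
```

The point of the sketch is the single join key: once every data set speaks the same entity vocabulary, each new data set composes with all the existing ones for free, which is the network effect mentioned above.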

Rohit Bardawaj

Right. And it was a matter of trust for NSO also, that, you know, people were getting different answers for the data which is created by NSO. That made us look toward an MCP server. A, it is open, so it makes our data interoperable for almost all the AI systems; I am not saying all the AI systems. Otherwise, what would happen? Be aware that every LLM has its own standards of API. So you create those APIs first, and then you somehow manage to get the LLM to approach that API. With this connector, it’s like the USB-C socket for the phone charger, if I may use the parallel, where you can just plug into any USB-C socket and use it for anything.

That’s what MCP is. So the data comes, and the LLM comes and plugs into MCP, and it allows any LLM to connect. But what you have to do now is connect that small tool with your LLM. That’s a one-minute job, and it’s available on our website. You go to www.mospi.gov.in, and in the offerings section everything is available. You can do it in one minute; maybe two minutes at the most. Anyone. But still there is one challenge, which I must tell you: we somehow need to ensure that this becomes a default tool, so the user does not have to add it. Suppose somebody forgets it; then the same situation starts happening again.
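To make the "USB-C" analogy concrete: an MCP server advertises its tools in a standard JSON-RPC `tools/list` response that any compliant client can discover and call, so the same data tool works with Claude, ChatGPT, or any other MCP-capable client. The tool name and schema below are hypothetical, not the actual server’s offering.

```python
import json

# Hypothetical shape of an MCP 'tools/list' response. The server describes
# its data tools once; any MCP-capable LLM client can then discover them.
# The tool and its fields are illustrative.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_cpi_series",
                "description": "Fetch CPI values for a commodity and period",
                "inputSchema": {  # JSON Schema for the tool's arguments
                    "type": "object",
                    "properties": {
                        "commodity": {"type": "string"},
                        "from_year": {"type": "integer"},
                    },
                    "required": ["commodity"],
                },
            }
        ]
    },
}

# A client reads this once and knows how to call the tool, regardless of
# which LLM sits behind it; that is the 'one socket, any plug' property.
print(json.dumps(tools_list_response, indent=2))
```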

So right now people have to add it to their tools. But the biggest advantage I see is that people don’t have to come out of their workflow. So if I have taken a very costly Claude Pro, then I don’t have to come out of it and go to my portal to get the data analysis. I can keep using the intelligence of Claude or ChatGPT, I don’t have a preference there, with the verified data, as he talked about, the verified data of MoSPI. And the use cases are innumerable on the web now. I mean, people have just lapped it up. My favorite is that there is a Tamil song which talks about a lot of grains.

So one of the messages I got, and I’ll share the link also, it’s on Twitter, I mean X now, is that somebody created a CPI for all the grains which were talked about in that song. CPI is the consumer price index, which basically tracks inflation. They just took the grains out of the song, you know, wheat and so on, and created a CPI index for them, and they have named it like "P index" or something, which is like the song’s name. I’m not very conversant in Tamil, pardon me for that, but I’ll share that link. So that’s my favorite use case. What I mean to say is that people can use the data the way they like it. That’s the bottom line, and that’s the NSO’s idea.

Shalini Kapoor

That is the most interesting use case I would have seen, and I really want to see it. Yeah, I’ll have a look at it. So one more thing which I want to tell the audience: building on the use case Rohitji mentioned, that someone can just pick the data, we have created a concept called the data boarding pass. This is for AI-ready India.

This is a physical copy, but actually the concept is that once your data is ready, it has a set of checklists which it passes. Then, as a B2B player, you could be a policymaker, you could be a researcher, you could be a market player wanting to build on top of it: you can take this data boarding pass and get onboarded for the data usage, so that you can pick the data and start using it in your applications. So with the data boarding pass, say at a district level, and I’m just painting a scenario, you have a data commons where the knowledge graph and the data have all been combined together, created all together, with the right context and everything.
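The boarding-pass idea can be sketched as a simple checklist gate: a dataset earns its pass only once every readiness check passes. The checklist item names below are assumptions that simply mirror the AI-readiness criteria discussed earlier in the session, not the actual checklist.

```python
# Hypothetical sketch of the 'data boarding pass' gate. Item names mirror
# the AI-readiness checklist discussed in this session and are illustrative.
CHECKLIST = [
    "catalogued",
    "machine_readable_metadata",
    "context_file_linked",
    "standardized_codes",
    "typed_dimensions",
]

def issue_boarding_pass(dataset: dict) -> bool:
    """Grant a boarding pass only if every checklist item is satisfied."""
    return all(dataset.get(item, False) for item in CHECKLIST)

ready = {item: True for item in CHECKLIST}
not_ready = {**ready, "context_file_linked": False}
print(issue_boarding_pass(ready), issue_boarding_pass(not_ready))
```

In practice such a gate would sit in front of the data-sharing platform, so that downstream consumers (policymakers, researchers, market players) can rely on every boarded dataset meeting the same bar.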

And some organization now wants to know, say, an automobile MSME manufacturer wants to access it and give information to dealers as to where scooters are being sold, where motorcycles are being sold, and what the income of that region has been over a period of time. That can be possible now, right? So the data boarding pass enables it, makes it possible. And if you want to physically see how this exactly works, visit our booth, at the Step Foundation stand in Hall 3 on the first floor. Do visit that, and my team would be there to show you the actual generation of the data boarding pass. I think we have covered a lot of things. We have less time, but I want to take a couple of questions from the audience.

So feel free to ask. We have four minutes, so we can have like two, three questions from the audience. I saw that hand first, sorry, and then I saw you, so you next. Yeah, please go ahead. Can someone give him a mic, please? Otherwise, I’ll hand over mine.

Audience

Thank you very much. I wanted to ask you about the business models of these platforms, because it is obviously extremely important to have high-quality data, but high-quality data is also expensive to collect and to maintain over time. So have you worked on how these kinds of platforms can be sustained over time? Does it have to be, I don’t know, publicly paid, or whatever models you may have? And the question is for everybody, I think.

Shalini Kapoor

Go ahead, then I’ll also add.

Rohit Bardawaj

So, just a quick clarification on that. The National Statistics Office India is fully funded by the Government of India. I mean, as we all know, national statistics offices all over the world are publicly funded, through public money. So it’s our job to create data and make it available to the public. At the same time, just one quick disclaimer on that: open data is not free data. Somebody has paid for it. So depending on the use, we provide the data. If the use is research and things like that, I’m not getting into the details of it, then it’s free.

But if the use is commercial, then, of course, there is a system. There is a policy for it, and people have to pay accordingly.

Shalini Kapoor

Yeah. So I’ll also answer it, because we have done a good amount of work. I would encourage you to see a paper that I’ve put up on our People Plus AI website, which talks about the GIVE model for data. G is guaranteed trust, and we talked about that. I is incentive: why should I bring the data, and what will I get from it? V is the value: if the data has no value, nobody is interested. And E is exchangeability: can I share the data? So I’ll focus on the I, the incentive. There has to be an incentive for someone to bring the data, and there has to be an incentive for someone to use the data, and that value will be monetized. That is the data economy. If you ask me, this data economy is actually running without a formal mechanism: there’s a good amount of money in selling data, buying data, lead generation; I mean, a huge amount of things are happening. This formalizes that. But what will the price be? The data economy has to stabilize; that has to happen at the region level, with private sectors. So we have been working in that direction, so that the incentive model is clear, but the actual price is a discovery mechanism.

Audience

It’s very interesting to hear all this, that’s amazing. One very common scenario that we see every day, and that troubles us a bit, is a road getting made and then dug up after a few days. I mean, it might not feel good, but that’s how it is. It feels like a disconnection somewhere in the data, or somewhere in the decisions in policymaking. So do we have some way to get these kinds of pieces applied in, say, the tender ecosystem or wherever, so that you don’t have a road made and then dug up for a pipeline after a very short window?

Shalini Kapoor

Yeah, maybe I’ll answer it. See, India has put the whole digital public infrastructure in place. This is the DPI thinking, whether UPI, Aadhaar, DigiLocker, DigiYatra: they were digital rails which were put together. This data infrastructure that we talked about today is going to be that kind of rails. Is it going to be dug up? Are there going to be holes in it? Maybe, no promises. But I think it’s a journey, and if we don’t start it now, it’s going to hit us later on. So no promises, but yes. Rohit, do you have anything to add on that?

Rohit Bardawaj

I just wanted to add that we need to keep working on these data sharing platforms and all the philosophies we just talked about, like accessibility, sharing, analysis, use of AI, and things will improve slowly but steadily, I’m very sure about it.

Shalini Kapoor

Time is up, and the next session is going to start. So thank you so much for listening in to the AI-ready data session, and please visit the booth to see it actually in action. Thank you. Bye.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
R
Rohit Bardawaj
8 arguments · 185 words per minute · 2308 words · 746 seconds
Argument 1
Need for uniform definition and agreed framework for AI readiness with core and aspirational components
EXPLANATION
Bardawaj argues that there is no uniform definition or agreed framework for what constitutes AI readiness in the current ecosystem. He emphasizes the need for institutions like MoSPI to create a shared understanding and framework that includes both core AI readiness components and aspirational elements that organizations can work towards.
EVIDENCE
Example of colleague asking about ChatGPT understanding Bangla but struggling with local dialects; mentions his paper on AI readiness being accepted at a major conference
MAJOR DISCUSSION POINT
AI-Ready Data Framework and Standards
Argument 2
AI-ready data requires cataloging, machine-readable metadata, context files, business glossaries, standardized codes, and structured databases
EXPLANATION
Bardawaj outlines specific technical requirements for making data AI-ready, including proper cataloging in machine-readable formats like JSON, comprehensive metadata, context files for meaning interpretation, business glossaries for terminology, standardized coding systems, and structured databases with defined dimensions and attributes.
EVIDENCE
Mentions creating MCP server with these components; explains how time should be defined as ‘temporal’ rather than just ‘time’ for AI understanding
MAJOR DISCUSSION POINT
AI-Ready Data Framework and Standards
AGREED WITH
Prem Ramaswami
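These components can be made concrete with a small sketch. The catalog entry below is hypothetical (the field names are illustrative, not an official schema), but it shows metadata, a temporal dimension declaration, a context line, and a glossary living together in one machine-readable record:

```python
import json

# Hypothetical machine-readable catalog entry illustrating the components
# Bardawaj lists: metadata, a context file, a glossary, and standardized codes.
catalog_entry = {
    "dataset": "consumer_price_index",
    "metadata": {
        "publisher": "National Statistical Office",
        "frequency": "monthly",
        "dimensions": [
            # Declaring 'time' as a temporal dimension (not a plain string)
            # lets an AI system reason about trends and ranges.
            {"name": "time", "type": "temporal", "format": "YYYY-MM"},
            {"name": "state", "type": "categorical", "code_list": "LGD"},
        ],
    },
    "context": "Index of retail price changes for a fixed consumption basket.",
    "glossary": {"CPI": "Consumer Price Index"},
}

# Serialize to JSON so any downstream tool or LLM can parse it.
as_json = json.dumps(catalog_entry, indent=2)
print(as_json)
```

The point is not the particular schema but that every field an AI consumer needs (meaning, units, code lists) is declared rather than implied.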
Argument 3
Same prompts to AI with same datasets can produce different analyses, requiring trustworthy approaches
EXPLANATION
Bardawaj highlights the inconsistency problem in AI systems where identical prompts with the same dataset can yield different analytical results. He argues this demonstrates the need for trustworthy and reliable AI approaches rather than being overly enthusiastic about untested AI capabilities.
EVIDENCE
References a paper by two Canadian university undergraduates that proved this inconsistency; mentions his personal willingness to adopt AI for statistical work but only if trustworthy
MAJOR DISCUSSION POINT
AI as a Tool vs Solution
AGREED WITH
Shalini Kapoor
DISAGREED WITH
Prem Ramaswami
Argument 4
Making data AI-ready is fundamentally a governance issue rather than just a technological challenge
EXPLANATION
Bardawaj argues that the challenge of preparing alternative and administrative data for AI use is primarily about governance structures and coordination rather than purely technical solutions. He emphasizes the need for proper data stewardship and orchestration of data ecosystems.
EVIDENCE
Conducted audience poll asking whether the challenge is governance or technology; mentions different data sources not being aligned with each other
MAJOR DISCUSSION POINT
Governance and Trust Issues
DISAGREED WITH
Audience
Argument 5
Need for data stewardship and orchestration of data ecosystems at national level
EXPLANATION
Bardawaj advocates for a federated model where no single entity owns all data, but someone must play the role of data steward to orchestrate the data ecosystem. He suggests that National Statistical Offices could potentially fulfill this role.
EVIDENCE
Acknowledges his bias as someone from NSO; emphasizes that this is ultimately a decision for the people to make
MAJOR DISCUSSION POINT
Governance and Trust Issues
AGREED WITH
Prem Ramaswami
Argument 6
MCP server enables interoperability across AI systems without users leaving their workflow
EXPLANATION
Bardawaj explains how the MCP (Model Context Protocol) server acts like a universal connector that allows any LLM to access verified data without users having to switch between different platforms or workflows. This solves the problem of different LLMs having different API standards.
EVIDENCE
Compares MCP to a USB-C socket for phone chargers; mentions it’s available on mospi.gov.in; describes setup taking only 1-2 minutes
MAJOR DISCUSSION POINT
Practical Applications and Use Cases
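The interoperability idea can be sketched without the real SDK. The ToolServer class and get_cpi tool below are hypothetical stand-ins: they illustrate how a single self-describing tool interface lets any LLM client discover and call verified data sources the same way (the actual MCP protocol is JSON-RPC based):

```python
# Conceptual sketch of what an MCP-style server provides: one uniform,
# self-describing tool interface that any LLM client can call, so users
# never leave their existing chat workflow to fetch verified data.
# All names here are illustrative stand-ins, not the real MCP SDK.

class ToolServer:
    def __init__(self):
        self._tools = {}

    def tool(self, name, description):
        """Register a data-access function with a machine-readable description."""
        def register(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def list_tools(self):
        # Clients discover available tools before calling them.
        return {n: t["description"] for n, t in self._tools.items()}

    def call(self, name, **kwargs):
        return self._tools[name]["fn"](**kwargs)

server = ToolServer()

@server.tool("get_cpi", "Monthly Consumer Price Index for a given state")
def get_cpi(state: str, month: str) -> float:
    # Stand-in for a lookup against verified NSO data.
    sample = {("Kerala", "2024-01"): 185.2}
    return sample[(state, month)]

# Any client (regardless of which LLM it wraps) uses the same two calls:
print(server.list_tools())
print(server.call("get_cpi", state="Kerala", month="2024-01"))
```

Because discovery and invocation are standardized, the same server serves every LLM without per-vendor API adapters, which is the “universal connector” point of the analogy.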
Argument 7
Creative applications like analyzing grain prices from Tamil songs demonstrate diverse data usage possibilities
EXPLANATION
Bardawaj shares an example of how users creatively applied statistical data by extracting grain names from a Tamil song and creating a Consumer Price Index for those specific grains. This demonstrates the unexpected and innovative ways people can use accessible data.
EVIDENCE
Mentions a Tamil song about grains where someone created a CPI named after the song; promises to share the Twitter/X link
MAJOR DISCUSSION POINT
Practical Applications and Use Cases
Argument 8
Open data is not free data – someone pays for collection and maintenance, with different pricing for research vs commercial use
EXPLANATION
Bardawaj clarifies that while National Statistical Office data is publicly funded, it’s not actually free as taxpayers bear the cost. He explains that there are different access policies depending on usage, with research use being free but commercial use requiring payment according to established policies.
EVIDENCE
Mentions NSO India being fully funded by Government of India; explains policy differences between research and commercial use
MAJOR DISCUSSION POINT
Business Models and Data Economy
Ashish Srivastava
8 arguments · 151 words per minute · 1240 words · 491 seconds
Argument 1
Data must be interoperable, contextual, and verifiable/governable to solve key problems
EXPLANATION
Srivastava identifies three critical requirements for AI-ready data based on his experience as a solution builder: data must work across different systems (interoperable), be properly contextualized for domain-specific use, and be verifiable rather than just declared data that cannot be validated.
EVIDENCE
Examples from women and child health work showing fragmented data across departments; mentions education translation challenges with domain-specific vocabulary; discusses survey data being declared rather than verified
MAJOR DISCUSSION POINT
AI-Ready Data Framework and Standards
Argument 2
Fragmented data across different departments creates orchestration burden for solution builders
EXPLANATION
Srivastava explains how data for a single use case (like child health) is often split across multiple government departments, requiring solution builders to manually orchestrate and integrate data rather than having it flow naturally through workflows. This fragmentation prevents effective integrated decision-making.
EVIDENCE
Specific example of child health data split between Women and Child Development (anthropometric/nutrition data through Anganwadi) and Health and Family Welfare (birth/immunization data)
MAJOR DISCUSSION POINT
Data Silos and Integration Challenges
Argument 3
Domain-specific vocabularies require glossaries combined with LLMs for proper translation and context
EXPLANATION
Srivastava points out that while LLMs are becoming good at translating main languages, they fail when encountering domain-specific vocabulary. His team developed a solution combining glossaries with LLMs to handle specialized terminology, particularly in educational content.
EVIDENCE
Example of LLMs failing to properly translate class 6th physics questions; mentions creating a solution using glossaries combined with LLMs for better domain-specific translation
MAJOR DISCUSSION POINT
Knowledge Graphs and Contextualization
AGREED WITH
Prem Ramaswami, Shalini Kapoor
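One common way to combine a glossary with an LLM translator is to lock glossary terms behind placeholder tokens before translation and restore the approved renderings afterwards. The sketch below assumes a hypothetical translate_with_llm call and a tiny Hindi glossary:

```python
import re

# Domain glossary: terms whose rendering must be fixed, not left to the LLM.
# The entries are illustrative examples, not from the panel.
GLOSSARY = {"refraction": "अपवर्तन", "focal length": "फोकस दूरी"}

def translate_with_llm(text: str) -> str:
    # Stand-in for an actual LLM translation call (hypothetical).
    return text

def translate(text: str) -> str:
    placeholders = {}
    # 1. Replace glossary terms with opaque tokens the LLM won't alter.
    for i, term in enumerate(GLOSSARY):
        token = f"__TERM{i}__"
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(text):
            text = pattern.sub(token, text)
            placeholders[token] = GLOSSARY[term]
    # 2. Translate the surrounding sentence with the general-purpose model.
    text = translate_with_llm(text)
    # 3. Restore the fixed, glossary-approved translations.
    for token, fixed in placeholders.items():
        text = text.replace(token, fixed)
    return text

print(translate("Explain refraction with a lens of small focal length."))
```

The division of labor matches Srivastava’s point: the LLM handles general language, while the glossary pins down the domain-specific vocabulary it gets wrong.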
Argument 4
Data should flow through workflows rather than being managed in isolation
EXPLANATION
Srivastava argues that modern solutions require managing entire user journeys rather than point transactions, and data should naturally flow through these processes. The current approach of managing data in isolation creates inefficiencies and burdens for users.
EVIDENCE
Mentions the shift toward agentic AI and journey management; contrasts current isolated data management with workflow-integrated approaches
MAJOR DISCUSSION POINT
Data Silos and Integration Challenges
Argument 5
AI models are probabilistic and will never be perfectly consistent, requiring external guardrails and human oversight
EXPLANATION
Srivastava emphasizes that AI models are fundamentally probabilistic and mathematically cannot achieve perfect consistency. He argues that removing this uncertainty would eliminate their creativity, so solutions must focus on external safeguards rather than expecting perfect AI performance.
EVIDENCE
Discussion with MIT mathematician explaining mathematical basis for AI inconsistency; emphasizes that perfect consistency would remove AI’s main source of creativity
MAJOR DISCUSSION POINT
AI as a Tool vs Solution
Argument 6
LLMs comprise only 10-15% of a solution, with the remaining 85-90% being guardrails, human-in-loop, and risk assessment
EXPLANATION
Srivastava argues that large language models are just one input to solutions, not the solution themselves. The majority of a robust AI solution consists of guardrails, human oversight, risk assessment, and other supporting mechanisms that ensure reliability and safety.
EVIDENCE
Provides specific percentage breakdown; mentions guardrails, human-in-the-loop, and risk assessment as essential components
MAJOR DISCUSSION POINT
AI as a Tool vs Solution
AGREED WITH
Prem Ramaswami
DISAGREED WITH
Prem Ramaswami
Argument 7
Policies must be enforceable automatically at API and policy engine levels, not dependent on human enforcement
EXPLANATION
Srivastava argues that data governance policies must be built into the technical infrastructure and automatically enforced at the API and policy engine levels. Relying on humans to enforce policies will inevitably lead to system failures.
EVIDENCE
Mentions creating reusable artifacts as DPIs or DPGs; emphasizes that human-dependent policy enforcement ‘will break in no time’
MAJOR DISCUSSION POINT
Governance and Trust Issues
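A minimal sketch of what enforcement “at the API level” can mean, assuming a hypothetical rule table; real policy engines are considerably richer:

```python
# Minimal sketch: policy checked in code at the API boundary, so access
# rules are enforced automatically rather than by a human gatekeeper.
# The rule table, dataset names, and purposes are all hypothetical.

POLICY = {
    # (dataset, purpose) -> allowed; anything not listed is denied by default.
    ("cpi_microdata", "research"): True,
    ("cpi_microdata", "commercial"): False,
    ("cpi_aggregates", "commercial"): True,
}

class PolicyViolation(Exception):
    pass

def enforce(dataset: str, purpose: str):
    if not POLICY.get((dataset, purpose), False):
        raise PolicyViolation(f"{purpose} access to {dataset} denied")

def fetch(dataset: str, purpose: str) -> str:
    enforce(dataset, purpose)  # every request passes through the check
    return f"rows from {dataset}"

print(fetch("cpi_microdata", "research"))
try:
    fetch("cpi_microdata", "commercial")
except PolicyViolation as e:
    print("blocked:", e)
```

Because no request can reach the data without passing enforce(), the policy holds even when no human is watching, which is the failure mode Srivastava warns about.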
Argument 8
Solutions must manage entire user journeys rather than point transactions, requiring agentic AI approaches
EXPLANATION
Srivastava explains that modern AI solutions need to handle complete user journeys with multiple touchpoints rather than simple question-answer interactions. This requires agentic AI that can determine what data is needed at different stages of the user’s journey.
EVIDENCE
Mentions the shift from point solutions to journey management; discusses how use cases determine what data is needed at different journey stages
MAJOR DISCUSSION POINT
Practical Applications and Use Cases
Prem Ramaswami
5 arguments · 188 words per minute · 2119 words · 672 seconds
Argument 1
Open-source approach with federated data governance allows local control while enabling interoperability
EXPLANATION
Ramaswami advocates for an open-source, federated model where data remains with local organizations and is governed locally, rather than being centralized with one source. This approach maintains data sovereignty while enabling interoperability across different systems and organizations.
EVIDENCE
Data Commons open-sourced entire stack; UN Statistical Department uses Data Commons as backend; mentions UNSDGs, WHO data, ILO data in common interoperable database
MAJOR DISCUSSION POINT
AI-Ready Data Framework and Standards
AGREED WITH
Rohit Bardawaj
Argument 2
Combination of knowledge graphs with large language models provides better success for data access
EXPLANATION
Ramaswami argues that combining structured data in knowledge graph format with large language models creates more reliable and useful AI systems than using LLMs alone. This hybrid approach leverages both structured factual data and AI’s natural language capabilities.
EVIDENCE
Data Commons combines multiple datasets in common knowledge graph with AI search engine; available at datacommons.org for testing
MAJOR DISCUSSION POINT
Knowledge Graphs and Contextualization
Argument 3
Large language models are tools to supplement human knowledge, not replace it, and should upskill average users
EXPLANATION
Ramaswami emphasizes that AI should be viewed as a tool that enhances human capabilities rather than replacing human intelligence. He argues that these tools can democratize access to data analysis capabilities for average users who previously couldn’t afford data scientists.
EVIDENCE
Comparison to Google Search health information debate; mentions that average person can’t afford doctors but can benefit from accessible information; analogy to autonomous vehicles being statistically safer despite some accidents
MAJOR DISCUSSION POINT
AI as a Tool vs Solution
AGREED WITH
Ashish Srivastava
DISAGREED WITH
Ashish Srivastava
Argument 4
Knowledge graphs provide factual basis to ground LLMs and fill information gaps
EXPLANATION
Ramaswami explains that while knowledge graphs can only cover a small fraction of world information (maybe 0.1%), they provide a factual foundation that can ground large language models and help fill gaps in factual information, making AI responses more reliable.
EVIDENCE
Example of LLMs not knowing recent celebrity breakups due to training data cutoffs; explains how factual basis can help LLMs produce knowledge from facts
MAJOR DISCUSSION POINT
Knowledge Graphs and Contextualization
AGREED WITH
Ashish Srivastava, Shalini Kapoor
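The grounding pattern can be sketched as retrieval over triples plus prompt assembly; the triples, figures, and prompt format here are illustrative only:

```python
# Toy knowledge graph as (subject, predicate, object) triples, used to
# ground an LLM prompt with facts it may not have in its training data.
# The entries are illustrative, not from Data Commons itself.
TRIPLES = [
    ("India", "population_2023", "1.43 billion"),
    ("Kerala", "literacy_rate", "94%"),
]

def retrieve(question: str):
    """Return triples whose subject appears in the question."""
    return [t for t in TRIPLES if t[0].lower() in question.lower()]

def grounded_prompt(question: str) -> str:
    facts = "\n".join(f"- {s} {p}: {o}" for s, p, o in retrieve(question))
    # The facts block gives the model a verifiable basis for its answer,
    # which is the grounding role Ramaswami describes for knowledge graphs.
    return f"Facts:\n{facts}\n\nQuestion: {question}"

print(grounded_prompt("What is the literacy rate of Kerala?"))
```

Even though the graph covers only a sliver of world knowledge, any question it does cover gets answered from verified facts rather than the model’s parametric memory.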
Argument 5
Data Commons can help small business owners make location decisions by reducing risk through data modeling
EXPLANATION
Ramaswami provides a specific use case where Data Commons can help MSME owners make better decisions about physical shop locations by providing access to data about mobility, traffic, demographics, and affordability that they couldn’t otherwise afford to analyze.
EVIDENCE
Specific example of small business owner choosing shop location; mentions that wrong decisions are costly for MSMEs taking full risk; explains how overlaying personal sales data with 50,000+ datasets in Data Commons creates network effects
MAJOR DISCUSSION POINT
Practical Applications and Use Cases
Shalini Kapoor
6 arguments · 128 words per minute · 2572 words · 1200 seconds
Argument 1
Wealth of information trapped in PDFs and documents due to fear and lack of trust in sharing with AI
EXPLANATION
Kapoor identifies a major problem where valuable enterprise and organizational data remains inaccessible because it’s stuck in PDFs and documents, and organizations are reluctant to share this data with AI systems due to trust and security concerns.
EVIDENCE
Mentions wealth of information in enterprises stuck in PDFs and documents; notes that people have fear of giving data to AI
MAJOR DISCUSSION POINT
Data Silos and Integration Challenges
Argument 2
Information divide prevents entrepreneurs from accessing relevant government schemes and subsidies
EXPLANATION
Kapoor illustrates how an information gap exists between available government support (like MSME schemes for women in biotechnology) and entrepreneurs who need this information. The data exists but is trapped in government notifications that entrepreneurs cannot easily access through current AI systems.
EVIDENCE
Specific example of entrepreneur in Nagpur wanting biotechnology plant information; mentions MSME schemes for women in biotechnology with good subsidies stuck in government notifications
MAJOR DISCUSSION POINT
Data Silos and Integration Challenges
Argument 3
Give data model requires guaranteed trust, incentives, value, and exchangeability for sustainable data sharing
EXPLANATION
Kapoor outlines a framework for data sharing called the ‘give data model’ where G stands for guaranteed trust, I for incentives, V for value, and E for exchangeability. She argues that successful data sharing requires clear incentives for both data providers and users.
EVIDENCE
References paper on People Plus AI website; explains each component of GIVE model; mentions current informal data economy with lead generation and data buying/selling
MAJOR DISCUSSION POINT
Business Models and Data Economy
Argument 4
Data economy needs formal mechanisms with clear incentive models and market-driven price discovery
EXPLANATION
Kapoor argues that while a data economy already exists informally through data buying, selling, and lead generation, there’s a need for formal mechanisms with clear incentive structures. She emphasizes that actual pricing should be determined through market discovery mechanisms at regional levels.
EVIDENCE
Mentions existing informal data economy with ‘huge amount of money’ in data transactions; discusses need for price discovery mechanisms
MAJOR DISCUSSION POINT
Business Models and Data Economy
Argument 5
Data boarding pass concept enables B2B players to access AI-ready data with proper onboarding
EXPLANATION
Kapoor introduces a ‘data boarding pass’ concept that would allow B2B players (policymakers, researchers, market players) to access AI-ready data after meeting certain checklist requirements. This would enable various use cases like helping automobile manufacturers understand regional sales patterns.
EVIDENCE
Shows physical data boarding pass concept; mentions example of automobile MSME manufacturer accessing dealer information and regional income data; refers to demonstration available at their booth
MAJOR DISCUSSION POINT
Knowledge Graphs and Contextualization
AGREED WITH
Ashish Srivastava, Prem Ramaswami
Argument 6
Stability and consistency of AI answers requires benchmarking across different LLMs and usage scenarios
EXPLANATION
Kapoor discusses the problem of AI systems giving different answers to the same question when asked multiple times or across different LLMs. She mentions working on creating benchmarks to measure and improve this consistency, particularly for agricultural applications.
EVIDENCE
Mentions Amul AI launch by Prime Minister; discusses Bharat Vistar project; explains testing same questions across multiple LLMs and to one LLM by different farmers
MAJOR DISCUSSION POINT
Governance and Trust Issues
AGREED WITH
Rohit Bardawaj
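Such a benchmark can start very simply: ask the same question repeatedly and measure how often runs agree with the most common answer. The score function and sample runs below are illustrative, not the actual Bharat Vistar methodology:

```python
from collections import Counter

def consistency_score(answers):
    """Fraction of runs that agree with the most common answer."""
    if not answers:
        return 0.0
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / len(answers)

# Hypothetical: the same farming question asked 5 times to the same model.
runs = ["12 quintals/acre", "12 quintals/acre", "11 quintals/acre",
        "12 quintals/acre", "12 quintals/acre"]
print(consistency_score(runs))  # 4 of 5 runs agree
```

The same score can be computed per LLM and per question to compare stability across models and usage scenarios, which is the benchmarking Kapoor describes.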
Audience
2 arguments · 172 words per minute · 200 words · 69 seconds
Argument 1
Business models for data platforms need to address the high costs of collecting and maintaining quality data
EXPLANATION
An audience member raised concerns about the sustainability of data platforms, questioning how these platforms can be maintained over time given that high-quality data is expensive to collect and maintain. They inquired about potential business models, including whether platforms should be publicly funded.
EVIDENCE
Acknowledged that high-quality data is expensive to collect and maintain in time
MAJOR DISCUSSION POINT
Business Models and Data Economy
Argument 2
Infrastructure coordination problems lead to inefficient resource use, as seen in road construction followed by immediate digging for pipelines
EXPLANATION
An audience member highlighted a common infrastructure coordination problem where roads are constructed and then immediately dug up for pipeline installation. They suggested this reflects disconnection in data sharing or policy-making processes and asked whether the proposed data infrastructure solutions could address such coordination failures.
EVIDENCE
Specific example of roads being made and then dug up after few days for pipeline work
MAJOR DISCUSSION POINT
Data Silos and Integration Challenges
Speaker 1
1 argument · 136 words per minute · 23 words · 10 seconds
Argument 1
Practical guidance needed for organizations wanting to implement Data Commons instances
EXPLANATION
Speaker 1 asked for specific information about how organizations can get started with implementing their own Data Commons instances, seeking practical implementation guidance.
EVIDENCE
Direct question about getting started with Data Commons implementation
MAJOR DISCUSSION POINT
Practical Applications and Use Cases
DISAGREED WITH
Rohit Bardawaj, Audience
Agreements
Agreement Points
AI should be viewed as a tool to supplement human capabilities rather than replace them
Speakers: Prem Ramaswami, Ashish Srivastava
Large language models are tools to supplement human knowledge, not replace it, and should upskill average users
LLMs comprise only 10-15% of a solution, with the remaining 85-90% being guardrails, human-in-loop, and risk assessment
Both speakers emphasize that AI systems are tools that enhance human capabilities rather than complete solutions, requiring significant human oversight and complementary systems
Data must be machine-readable and properly structured with metadata for AI readiness
Speakers: Rohit Bardawaj, Prem Ramaswami
AI-ready data requires cataloging, machine-readable metadata, context files, business glossaries, standardized codes, and structured databases
Open-source approach with federated data governance allows local control while enabling interoperability
Both speakers agree that AI-ready data requires proper structuring, machine-readable formats, and comprehensive metadata, though they approach implementation differently
Domain-specific context and glossaries are essential for AI systems to work effectively
Speakers: Ashish Srivastava, Prem Ramaswami, Shalini Kapoor
Domain-specific vocabularies require glossaries combined with LLMs for proper translation and context
Knowledge graphs provide factual basis to ground LLMs and fill information gaps
Data boarding pass concept enables B2B players to access AI-ready data with proper onboarding
All three speakers recognize that AI systems need domain-specific knowledge and contextual information to function properly, whether through glossaries, knowledge graphs, or structured onboarding processes
Federated data governance model is preferable to centralized control
Speakers: Rohit Bardawaj, Prem Ramaswami
Need for data stewardship and orchestration of data ecosystems at national level
Open-source approach with federated data governance allows local control while enabling interoperability
Both speakers advocate for federated models where data remains with local organizations while enabling interoperability, rather than centralized data control
AI consistency and reliability issues require careful attention and solutions
Speakers: Rohit Bardawaj, Shalini Kapoor
Same prompts to AI with same datasets can produce different analyses, requiring trustworthy approaches
Stability and consistency of AI answers requires benchmarking across different LLMs and usage scenarios
Both speakers acknowledge the problem of AI inconsistency and are working on solutions to measure and improve reliability of AI systems
Similar Viewpoints
Both speakers view AI as inherently imperfect systems that require human oversight and should be treated as tools rather than complete solutions
Speakers: Ashish Srivastava, Prem Ramaswami
AI models are probabilistic and will never be perfectly consistent, requiring external guardrails and human oversight
Large language models are tools to supplement human knowledge, not replace it, and should upskill average users
Both speakers identify data fragmentation and silos as major barriers preventing effective access to information and services
Speakers: Shalini Kapoor, Ashish Srivastava
Fragmented data across different departments creates orchestration burden for solution builders
Information divide prevents entrepreneurs from accessing relevant government schemes and subsidies
Both speakers emphasize that data challenges are primarily governance issues requiring systematic policy and organizational solutions rather than just technical fixes
Speakers: Rohit Bardawaj, Ashish Srivastava
Making data AI-ready is fundamentally a governance issue rather than just a technological challenge
Policies must be enforceable automatically at API and policy engine levels, not dependent on human enforcement
Unexpected Consensus
AI limitations and the need for human oversight
Speakers: Prem Ramaswami, Ashish Srivastava, Rohit Bardawaj
Large language models are tools to supplement human knowledge, not replace it, and should upskill average users
LLMs comprise only 10-15% of a solution, with the remaining 85-90% being guardrails, human-in-loop, and risk assessment
Same prompts to AI with same datasets can produce different analyses, requiring trustworthy approaches
Despite being from different sectors (Google, industry, government), all speakers showed remarkable consensus on AI’s limitations and the critical need for human oversight, which is unexpected given the current AI hype cycle
Open data requires sustainable business models and isn’t truly free
Speakers: Rohit Bardawaj, Shalini Kapoor, Audience
Open data is not free data – someone pays for collection and maintenance, with different pricing for research vs commercial use
Give data model requires guaranteed trust, incentives, value, and exchangeability for sustainable data sharing
Business models for data platforms need to address the high costs of collecting and maintaining quality data
There was unexpected consensus across government, industry, and audience that ‘free’ data isn’t actually free and requires sustainable business models, challenging common assumptions about open data
Overall Assessment

The speakers showed strong consensus on key technical and governance aspects of AI-ready data, including the need for proper structuring, metadata, domain context, federated governance, and treating AI as a tool requiring human oversight. There was also agreement on addressing data silos and the need for sustainable business models.

High level of consensus across different sectors (government, industry, academia) on fundamental principles, which suggests these are well-established best practices. The implications are positive for developing coherent policies and standards for AI-ready data infrastructure, as stakeholders are aligned on core requirements and challenges.

Differences
Different Viewpoints
Whether making data AI-ready is primarily a governance or technology issue
Speakers: Rohit Bardawaj, Audience
Making data AI-ready is fundamentally a governance issue rather than just a technological challenge
Practical guidance needed for organizations wanting to implement Data Commons instances
Bardawaj conducted an audience poll and argued that preparing data for AI is fundamentally about governance structures and coordination rather than purely technical solutions, while audience members seemed more focused on technical implementation aspects
Extent of caution needed when implementing AI solutions
Speakers: Rohit Bardawaj, Prem Ramaswami
Same prompts to AI with same datasets can produce different analyses, requiring trustworthy approaches
Large language models are tools to supplement human knowledge, not replace it, and should upskill average users
Bardawaj emphasized the need for caution due to AI inconsistency and untested capabilities, while Ramaswami advocated for embracing AI as a tool that democratizes access, warning against being overly restrictive with guardrails
Role and proportion of AI in overall solutions
Speakers: Ashish Srivastava, Prem Ramaswami
LLMs comprise only 10-15% of a solution, with the remaining 85-90% being guardrails, human-in-loop, and risk assessment
Large language models are tools to supplement human knowledge, not replace it, and should upskill average users
Srivastava argued for a very limited role of AI (10-15%) with heavy emphasis on external controls, while Ramaswami promoted AI as an empowering tool that should be more accessible to average users
Unexpected Differences
Data collection methodology characterization
Speakers: Shalini Kapoor, Prem Ramaswami, Rohit Bardawaj
Information divide prevents entrepreneurs from accessing relevant government schemes and subsidies
Data Commons can help small business owners make location decisions by reducing risk through data modeling
Need for data stewardship and orchestration of data ecosystems at national level
An unexpected disagreement emerged when Ramaswami characterized government data collection as ‘top-down’ and Kapoor immediately corrected him, with Bardawaj’s implicit support, that it’s actually ‘bottom-up’ from field level. This revealed different perspectives on how national statistical systems operate
Verification vs declared data reliability
Speakers: Ashish Srivastava, Rohit Bardawaj
Data must be interoperable, contextual, and verifiable/governable to solve key problems
MCP server enables interoperability across AI systems without users leaving their workflow
Srivastava raised concerns about survey data being ‘declared’ rather than ‘verified’ (like self-reported health conditions), questioning the reliability of much public data used for planning. This created tension with Bardawaj’s promotion of NSO data accessibility, as it implicitly questioned the quality of official statistical data
Overall Assessment

The discussion revealed moderate disagreements primarily around the balance between AI adoption and caution, with speakers agreeing on goals but differing on implementation approaches. Key tensions emerged between promoting AI accessibility versus ensuring reliability and trust.

Moderate disagreement level that reflects healthy debate about implementation strategies rather than fundamental opposition to AI-ready data initiatives. The disagreements suggest need for balanced approaches that combine innovation with appropriate safeguards, and highlight the complexity of creating sustainable data governance frameworks that serve multiple stakeholders.

Partial Agreements
All speakers agreed on the need for standardized frameworks and interoperability for AI-ready data, but disagreed on implementation approaches – Bardawaj focused on institutional frameworks and governance, Ramaswami emphasized open-source federated models, and Srivastava stressed technical requirements for verification and context
Speakers: Rohit Bardawaj, Prem Ramaswami, Ashish Srivastava
Need for uniform definition and agreed framework for AI readiness with core and aspirational components Open-source approach with federated data governance allows local control while enabling interoperability Data must be interoperable, contextual, and verifiable/governable to solve key problems
Both agreed that combining structured knowledge with LLMs is superior to using LLMs alone, but Ramaswami focused on knowledge graphs for factual grounding while Srivastava emphasized domain-specific glossaries for contextual accuracy
Speakers: Prem Ramaswami, Ashish Srivastava
Combination of knowledge graphs with large language models provides better success for data access
Domain-specific vocabularies require glossaries combined with LLMs for proper translation and context
Both agreed that sustainable data sharing requires proper economic models and incentive structures, but disagreed on mechanisms – Kapoor proposed market-driven price discovery while Bardawaj advocated for policy-based pricing differentiation between research and commercial use
Speakers: Shalini Kapoor, Rohit Bardawaj
Give data model requires guaranteed trust, incentives, value, and exchangeability for sustainable data sharing
Open data is not free data – someone pays for collection and maintenance, with different pricing for research vs commercial use
Takeaways
Key takeaways
AI-ready data requires a comprehensive framework including cataloging, machine-readable metadata, context files, business glossaries, standardized codes, and structured databases
Data silos and fragmentation across organizations create significant barriers to AI implementation, requiring governance solutions rather than just technological fixes
AI should be viewed as a tool to supplement human intelligence (10-15% of solutions) rather than a complete solution, requiring guardrails and human oversight
Knowledge graphs combined with large language models provide more reliable and contextual data access than LLMs alone
Open-source, federated data governance models enable local control while maintaining interoperability across systems
A sustainable data economy requires clear incentive models with guaranteed trust, value creation, and exchangeability mechanisms
Practical implementations like MCP servers and Data Commons demonstrate viable pathways for making data AI-ready and accessible
Resolutions and action items
Create a shared framework for AI readiness with core and aspirational components that can be adopted by industry
Develop standardized cataloging systems with machine-readable metadata and context files
Implement the ‘data boarding pass’ concept for B2B onboarding to AI-ready data systems
Build benchmarking systems to measure consistency and stability of AI responses across different LLMs
Establish automatic policy enforcement at API and policy engine levels rather than relying on human enforcement
Visit demonstration booths to see practical implementations of AI-ready data systems in action
Unresolved issues
How to achieve widespread adoption of the proposed AI readiness framework across different organizations and sectors
Determining optimal pricing mechanisms for the data economy while balancing accessibility and sustainability
Resolving the technical challenge of ensuring consistent AI responses across different models and usage scenarios
Addressing the fundamental tension between data sovereignty/local control and the need for data sharing and interoperability
Managing the verification and quality control of declared vs. verified data in public datasets
Scaling solutions from pilot projects to population-scale implementations
Handling domain-specific vocabularies and dialects that current LLMs struggle with
Suggested compromises
Hybrid RAG (Retrieval-Augmented Generation) architecture that combines knowledge graphs with LLMs rather than choosing either approach exclusively
Federated data governance model that allows organizations to maintain local control while enabling interoperability
Tiered pricing model for data access – free for research use, paid for commercial applications
Phased implementation approach with foundational framework first, then aspirational features
Balancing accessibility (avoiding excessive guardrails that block legitimate AI access) against safety and accuracy requirements
Combination of automated policy enforcement with human-in-the-loop oversight for critical decisions
Thought Provoking Comments
Do you have uniform definition of what is AI readiness at this point in time? People are not aware what it takes to make data AI ready… So the first idea is to create a framework agreed framework, say people not only me, it’s not about my way or highway, me all of us work together create that framework.
This comment was foundational because it challenged the entire premise of the discussion by questioning whether participants even had a shared understanding of ‘AI readiness.’ It highlighted a critical gap – that before solving technical problems, there needs to be conceptual alignment.
This shifted the discussion from technical solutions to fundamental definitions and frameworks. It established the need for collaborative standard-setting and influenced subsequent speakers to ground their contributions in clearer definitions and shared understanding.
Speaker: Rohit Bardawaj
If you give same prompt to AI with the same data set, it gives you two types of analysis… we should not be really gung-ho about things, which is still untested.
This comment introduced a sobering reality check about AI reliability and consistency – a critical issue for data-driven decision making. It challenged the optimistic tone about AI capabilities with concrete evidence of instability.
This comment prompted Shalini to reveal they were actively working on benchmarking this exact problem, leading to a more nuanced discussion about AI limitations and the need for stability measures. It grounded the conversation in practical challenges rather than theoretical possibilities.
Speaker: Rohit Bardawaj
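The benchmarking work mentioned above can be sketched in a few lines: send the same prompt repeatedly and measure how often the normalized answers agree. The `ask_model` stub below stands in for a real LLM call and simulates non-determinism; a real benchmark would swap in an actual API client, so everything here is an illustrative assumption rather than a production harness.

```python
# Sketch of a consistency benchmark for LLM answers: repeat the same
# prompt and report the fraction of runs matching the most common
# normalized answer.

from collections import Counter

def ask_model(prompt, seed):
    # Stub standing in for a real LLM call; the rotating answers
    # simulate the run-to-run variation the panel described.
    answers = ["48 centers", "48 centers", "forty-eight centers"]
    return answers[seed % len(answers)]

def normalize(answer):
    """Crude normalization so trivially different phrasings still match."""
    return answer.lower().replace("forty-eight", "48").strip()

def consistency_rate(prompt, runs=6):
    """Fraction of runs that returned the most common normalized answer."""
    results = Counter(normalize(ask_model(prompt, i)) for i in range(runs))
    return results.most_common(1)[0][1] / runs

rate = consistency_rate("How many health centers serve district A?")
print(f"consistency: {rate:.2f}")
```

Note that normalization is doing real work here: the raw answers differ in phrasing, but a benchmark that only cares about semantic agreement should not penalize that, which is why stability measures need an agreed definition of "same answer" before scores are comparable across LLMs.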
LLMs or AI models are not the solution. They are only one of the inputs to the solution. And they comprise 10%, 15% of what you’re trying to do. It is what is the rest of 85%… it’s a probabilistic model… it cannot ever become as perfect that every time consistent.
This was a paradigm-shifting comment that reframed AI from being the centerpiece to being just one component in a larger solution architecture. The mathematical grounding about probabilistic models provided scientific backing to the limitations discussion.
This fundamentally changed how the panel discussed AI implementation, moving from ‘how to make AI work’ to ‘how to build systems where AI is one reliable component.’ It led to deeper discussions about guardrails, human-in-the-loop systems, and risk assessment frameworks.
Speaker: Ashish Srivastava
The world is a multi-dimensional problem… our brains are not inherently multi-dimensional… machines are really good at this but… we have to approach AI as a tool we can use. Not as the answer, but as a tool we can use to derive the answer.
This comment provided a philosophical framework for understanding the human-AI relationship by clearly articulating the cognitive limitations of humans versus the computational strengths of machines, while maintaining human agency in the process.
This elevated the discussion from technical implementation to strategic thinking about human-AI collaboration. It influenced how other panelists framed their subsequent comments about AI applications and helped establish a more balanced perspective on AI capabilities versus human judgment.
Speaker: Prem Ramaswami
When we talk of public data, a lot of it is declared data and not verified… especially when a lot of planning depends on surveys… what is the verification no doctor has actually verified that and you are going to make a decision based on that.
This comment exposed a fundamental flaw in data quality that undermines the entire AI-ready data premise – that much of the data being prepared for AI consumption is inherently unreliable at its source.
This introduced a new dimension to the data readiness discussion – not just technical formatting but data integrity and verification. It led to discussions about the need for verifiable and governable data systems, adding complexity to the technical solutions being proposed.
Speaker: Ashish Srivastava
Open data is not free data. So somebody has paid for it… if the use is research… then it’s free. But if the resource… the use is commercial, then… people have to pay accordingly.
This comment introduced the economic reality of data infrastructure, challenging assumptions about ‘free’ public data and highlighting the sustainability challenges of data commons initiatives.
This prompted a broader discussion about business models and incentive structures for data sharing, leading Shalini to elaborate on the ‘GIVE’ model and data economy concepts. It shifted the conversation from purely technical to economic and policy considerations.
Speaker: Rohit Bardawaj
Overall Assessment

These key comments fundamentally shaped the discussion by introducing multiple layers of complexity that moved the conversation beyond technical implementation to address foundational issues. Rohit’s opening challenge about definitions set a tone of critical examination rather than assumption-based discussion. The reliability and limitation comments by both Rohit and Ashish created a more realistic framework for AI implementation, while Prem’s philosophical framing provided a balanced human-AI collaboration model. Ashish’s data verification concerns added a quality dimension that hadn’t been adequately addressed, and the economic reality check introduced sustainability considerations. Together, these comments transformed what could have been a purely technical discussion into a comprehensive examination of the social, economic, philosophical, and practical challenges of creating AI-ready data infrastructure. The discussion evolved from ‘how to make data AI-ready’ to ‘what does AI-ready really mean, what are its limitations, and how do we build sustainable, reliable systems around it.’

Follow-up Questions
How do we create a uniform definition and agreed framework for AI readiness of data?
There is currently no uniform definition of what constitutes AI-ready data, and establishing this foundation is crucial before any meaningful progress can be made in making data AI-ready across organizations and institutions.
Speaker: Rohit Bardawaj
How can we ensure the same question asked multiple times to LLMs produces consistent answers?
Both speakers noted that asking the same question to an LLM multiple times or across different LLMs produces different answers, which is a critical reliability issue that needs to be addressed through benchmarking and standardization.
Speaker: Shalini Kapoor and Rohit Bardawaj
How can we make data interoperable across different government departments and agencies?
The fragmentation of data across different departments (like women and child development vs. health and family welfare) creates barriers to integrated decision-making and comprehensive solutions.
Speaker: Ashish Srivastava
How can we verify declared data versus actual verified data in surveys and public datasets?
Much of the public data used for planning is based on self-declared survey responses rather than verified information, which can lead to inaccurate decision-making and policy formulation.
Speaker: Ashish Srivastava
How can we create automatic policy enforcement at the API level for data governance?
Manual enforcement of data policies is prone to failure, so there’s a need to develop automated systems that can enforce data governance policies at the technical level.
Speaker: Ashish Srivastava
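The idea of automatic policy enforcement at the API level can be sketched as follows: every data request passes through a policy engine before any records are returned, so governance does not depend on a human remembering the rules. The policy schema, dataset names, and exception type below are illustrative assumptions, not a reference to any particular governance product.

```python
# Sketch of API-level data-governance enforcement: a policy check runs
# before any data is served, and violations fail closed.

POLICIES = {
    "health_survey": {"allowed_purposes": {"research"}, "max_rows": 100},
}

class PolicyViolation(Exception):
    """Raised when a request does not satisfy the registered policy."""

def enforce(dataset, purpose, rows_requested):
    """Validate a request against the dataset's policy; raise on failure."""
    policy = POLICIES.get(dataset)
    if policy is None:
        raise PolicyViolation(f"no policy registered for {dataset!r}")
    if purpose not in policy["allowed_purposes"]:
        raise PolicyViolation(f"purpose {purpose!r} not permitted")
    if rows_requested > policy["max_rows"]:
        raise PolicyViolation("row limit exceeded")

def fetch(dataset, purpose, rows_requested):
    """API entry point: enforcement runs before any data is returned."""
    enforce(dataset, purpose, rows_requested)
    return [{"row": i} for i in range(rows_requested)]  # placeholder data

print(len(fetch("health_survey", "research", 10)))   # permitted request
try:
    fetch("health_survey", "commercial", 10)         # blocked by policy
except PolicyViolation as e:
    print("blocked:", e)
```

Because the check sits inside the only code path that returns data, a forgotten or malicious client cannot bypass it, which is the property the speakers contrasted with manual, human-enforced policies.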
How can we make MCP (Model Context Protocol) servers a default tool rather than requiring manual addition by users?
Currently users must manually add MCP tools to their workflow, which creates friction and potential for users to forget, defeating the purpose of seamless data access.
Speaker: Rohit Bardawaj
What sustainable business models can support high-quality data platforms over time?
High-quality data is expensive to collect and maintain, so understanding viable business models for sustaining these platforms is crucial for long-term success.
Speaker: Audience member
How can data infrastructure prevent coordination failures in public works projects?
The example of roads being dug up shortly after construction due to lack of coordination suggests a need for better data sharing and planning systems in government infrastructure projects.
Speaker: Audience member
How can we develop better translation capabilities for domain-specific vocabulary in regional languages?
While LLMs are improving at general translation, they still fail with domain-specific terms, which is critical for making AI accessible in local contexts and specialized fields.
Speaker: Ashish Srivastava
How can we create effective guardrails and risk assessment mechanisms for AI systems while maintaining accessibility?
There’s a tension between making AI accessible to average users and implementing necessary safety measures, requiring research into balanced approaches.
Speaker: Ashish Srivastava and Prem Ramaswami

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion: Next Generation of Techies | India AI Impact Summit


Session at a glance: Summary, keypoints, and speakers overview

Summary

This panel discussion at a tech summit focused on how entrepreneurship is evolving in the age of artificial intelligence, featuring venture capitalist Anirudh Suri as moderator and three entrepreneurs at different stages: Malhar Bhide (young co-founder of AI biotech startup Origin Bio), Navrina Singh (founder of AI governance platform Credo AI), and Arvind Jain (founder of enterprise AI company Glean). The conversation explored how the current AI wave differs from previous technology waves like consumer internet and mobile, with panelists noting that AI enables much leaner startups with smaller teams and lower capital requirements due to AI’s ability to automate many traditional business functions.


A key theme emerged around the democratization of knowledge through AI, allowing entrepreneurs to work across disciplines they haven’t formally studied, as exemplified by Malhar’s biology-focused startup despite having no formal biology background. The discussion highlighted how AI-first companies are fundamentally restructuring traditional organizational blueprints, with unclear roles for humans in many functions. The panelists emphasized that while AI makes it easier to build products quickly, the real competitive advantage lies in creating reliable, trustworthy AI systems that can operate within regulatory frameworks.


Navrina Singh stressed the growing importance of AI governance and policy compliance, arguing that successful AI companies must build trusted technology that works within regulatory constraints rather than just focusing on technological innovation. The conversation touched on whether traditional creative destruction principles will continue in the AI era, with panelists generally agreeing that innovation will still emerge from startups despite big tech companies’ advantages. The discussion concluded with audience questions about AI security, governance ROI, and entrepreneurial opportunities in emerging AI-related fields, reinforcing the theme that new technological challenges create new business opportunities.


Keypoints

Major Discussion Points:

Evolution of AI-driven entrepreneurship compared to previous tech waves: The panelists discussed how AI entrepreneurship differs from the consumer internet wave, noting that while core entrepreneurial principles remain the same (finding business problems, building teams, having vision), AI enables much leaner startups with smaller teams and lower capital requirements due to AI’s ability to automate many tasks.


The democratization of knowledge and cross-disciplinary innovation: The discussion highlighted how AI has made specialized knowledge more accessible, allowing entrepreneurs like Malhar to enter complex fields like biotechnology without formal training in that domain, using AI to accelerate learning and research across disciplines.


AI governance, policy, and regulatory considerations: A significant portion focused on how AI entrepreneurs must now consider policy and regulatory risks as core business functions, not just peripheral concerns. The discussion covered the intersection of AI with government policy, the need for trusted and reliable AI systems, and how regulatory compliance is becoming a competitive moat.


The role of research in AI startups: Unlike previous tech waves, current AI entrepreneurship requires deeper engagement with fundamental research, as companies like Origin Bio must train models from scratch and conduct original scientific research to create viable products.


Creative destruction and competition in the AI era: The panel debated whether large tech companies will maintain dominance or if the traditional pattern of startups disrupting incumbents will continue, with most agreeing that AI actually empowers individual entrepreneurs and small teams to innovate more effectively.


Overall Purpose:

The discussion aimed to explore how entrepreneurship is evolving in the AI era, comparing it to previous technology waves and providing insights for current and aspiring entrepreneurs about the unique challenges and opportunities in AI-driven startups.


Overall Tone:

The tone was optimistic and encouraging throughout, with panelists expressing enthusiasm about AI’s democratizing effects on entrepreneurship. The conversation maintained a practical, educational focus while being accessible to the audience of entrepreneurs and students. The tone became slightly more interactive and urgent toward the end as audience questions were incorporated, but remained consistently positive about the opportunities AI presents for the next generation of entrepreneurs.


Speakers

Speakers from the provided list:


Anirudh Suri – Runs a venture capital fund called India Internet Fund, author of the book “The Great Tech Game” and host of a podcast by the same name, focusing on the intersection of technology and geopolitics


Malhar Bhide – Co-founder and Chief Technology Officer of Origin Bio, a Y Combinator startup using AI to make safer genetic medicines for diseases like cancer, college dropout


Navrina Singh – Founder and CEO of Credo AI (an AI governance and trust management platform), has been advising the White House on AI policy for the past five years, works with governments globally on AI guardrails


Arvind Jain – Founder and CEO of Glean, an enterprise AI company (about seven years old) that functions like Google or ChatGPT but inside companies, using both world knowledge and internal company data


Audience – Multiple audience members who asked questions during the Q&A session


Additional speakers:


Rahul – Mentioned as a 16-year-old speaker at the summit, one of the youngest speakers, was interviewed earlier by Anirudh Suri for a podcast


Full session report: Comprehensive analysis and detailed insights

This panel discussion at a technology summit explored how entrepreneurship is evolving in the artificial intelligence era, featuring three entrepreneurs at different stages of their journeys alongside venture capitalist and moderator Anirudh Suri, author of “The Great Tech Game” and partner at India Internet Fund. The conversation brought together Malhar Bhide, a young co-founder of AI biotech startup Origin Bio and UIUC dropout; Navrina Singh, founder of AI governance platform Credo AI with extensive White House policy advisory experience; and Arvind Jain, founder of enterprise AI company Glean. Together, they examined the fundamental shifts occurring in how startups are conceived, built, and scaled in the age of artificial intelligence.


The Transformation of Entrepreneurial Fundamentals

The discussion began with a comparison between the current AI wave and previous technology cycles, particularly the consumer internet boom. Arvind Jain emphasized that while core entrepreneurial principles remain unchanged—requiring ambition, risk-taking ability, and focus on solving genuine business problems—the AI wave introduces unprecedented organizational transformation. “But now with AI, everything changes. In fact, the role of human itself is unclear in what roles seem to exist,” Jain observed, highlighting how AI fundamentally disrupts traditional company blueprints.


This transformation manifests most clearly in the dramatic reduction of resources required to build viable products. The panelists agreed that AI enables significantly leaner startups, with entrepreneurs able to accomplish tasks that previously required large teams. Jain noted that founders now consistently evaluate whether machines can perform work before considering human hiring, creating what he termed an “AI-first mindset” that generates substantial operational efficiencies.


The Democratization of Knowledge and Cross-Disciplinary Innovation

Perhaps the most striking example of AI’s transformative impact came from Malhar Bhide’s personal experience. His five-person team at Origin Bio, which develops AI-driven genetic medicines, includes only one person with formal biology training—and that individual is not among the team’s graduates. “Because of how good AI has gotten, knowledge has gotten a lot more democratized,” Bhide explained, describing how AI tools enable entrepreneurs to rapidly acquire domain expertise and conduct sophisticated research across disciplines they’ve never formally studied.


This democratization extends to fundamental research capabilities. Bhide’s team trains AI models from scratch, conducts wet lab experiments, and engages in cutting-edge biological research despite lacking traditional credentials. They use AI to predict experimental outcomes for cost efficiency, optimize resource allocation, and accelerate the typically lengthy process of biological discovery. This represents a paradigm shift where intellectual curiosity and AI-augmented learning can substitute for years of formal education, though the panelists emphasized this still requires rigorous empirical validation.


AI Governance as Competitive Advantage

A significant portion of the discussion focused on the intersection of AI technology with policy and regulatory considerations. Navrina Singh argued that contrary to the perception that governance constrains innovation, AI governance creates competitive advantages and accelerates business value creation.


“The true moat that is happening for companies like Malhar is not just the technological innovation, because it is, you know, you’re able to do that much faster with a leaner team. But it is how do you do that consistently within the boundaries of the constraints and guidelines,” Singh explained. She argued that companies implementing robust AI governance can adopt third-party AI tools more rapidly, deploy products faster to customers, and build greater customer trust—all contributing directly to revenue growth.


The discussion revealed varying approaches to governance based on company size and market focus. While Arvind Jain acknowledged the importance of AI safety regulations for the industry overall, he noted that Glean focuses on showing “the full trail of where answers are coming from” as part of their safety approach. Malhar Bhide described how his team proactively studies AI safety research and implements guardrails to prevent misuse of their biological AI models, recognizing that regulatory compliance will be essential for eventual FDA approval.


New Security Challenges and Creative Destruction

The conversation highlighted how AI introduces fundamentally new security challenges. Jain identified emerging threats such as prompt injection attacks and the inherent challenge of managing AI’s probabilistic nature in enterprise environments. “AI is a very new technology, and it’s actually very gameable,” he observed, noting that new forms of attacks require entirely new defensive approaches.


Regarding whether traditional patterns of creative destruction will continue despite the massive resources of companies like Google and Microsoft, the panelists expressed optimism. Jain argued that innovation fundamentally originates from passionate individuals with bold ideas, regardless of corporate resources. “Most bold ideas actually come from a single person who wants to actually who’s passionate about solving a problem,” he noted.


Singh reframed the disruption conversation entirely, arguing that the relevant competition isn’t between big tech and startups, but between individuals who master AI tools and those who don’t. “You should not be worried about another person or even like AI taking your job. You should really be worried about a person who’s so good with AI actually replacing you,” she observed.


Cultural Perspectives and International Entrepreneurship

The discussion explored the unique position of Indian entrepreneurs in the global AI landscape. Both Jain and Bhide identified the experience of international relocation as developing valuable risk-taking capabilities that translate to entrepreneurial success. Bhide described how leaving familiar environments creates “some sort of tone and precedence in even the work you do at your startup in terms of just taking risks, the people you hire, the things you do.”


Jain attributed Indian entrepreneurial success to cultural factors, noting “there’s something about, like, in our culture, you know, and where we are as a nation, there is, you know, that drive, you know, that, you know, Indian people have.”


Audience Interaction and Practical Insights

The session included audience participation, with the moderator noting that most attendees were entrepreneurs or aspiring entrepreneurs. Three key questions emerged: finding problems to solve across disciplines, the evolution of AI security and cybersecurity, and the balance between ROI and governance.


In response to the governance question, Singh mentioned that Amazon was Credo AI’s first customer, illustrating how major companies are prioritizing AI governance. The discussion emphasized that while AI provides powerful new tools, success still requires fundamental entrepreneurial capabilities: identifying genuine market problems, building effective teams, and executing consistently over time.


Conclusion

This discussion illuminated how artificial intelligence is rewriting fundamental assumptions about how businesses are built and scaled. The panelists’ insights suggest that while the entrepreneurial spirit remains constant, the methods and requirements for startup success are undergoing significant transformation. The democratization of knowledge through AI, the emergence of governance as a competitive advantage, and the need for AI-first thinking represent important shifts that entrepreneurs must navigate.


The conversation reflected optimism that AI ultimately empowers individual entrepreneurs and small teams to compete more effectively than ever before, despite the new challenges and complexities it introduces. For the audience of entrepreneurs and aspiring founders, this discussion provided valuable insights into how the next generation of technology companies will emerge and compete in an AI-driven economy.


Session transcript: Complete transcript of the session
Anirudh Suri

Hi, and welcome. A very good afternoon to all of you. Thank you for staying on. I know it’s the last day of a long, productive summit, and I think this is maybe not the last but the second-last, maybe the third-last session, so thank you for being here. I’m excited about this discussion that we’re going to have over the course of the next about half an hour or so, 35 minutes. I’ll quickly introduce myself and then I’ll get our panelists to introduce themselves. We’re talking, of course, about the theme of the next generation of tech entrepreneurs, tech founders, tech leaders in the world. I’m Anirudh Suri. I run a venture capital fund called India Internet Fund, and I’m also an author of a book called The Great Tech Game and a podcast by the same name that looks at the intersection of technology and geopolitics.

So I might bring in a little bit of geopolitics into our conversation, even though we’re talking mostly about tech founders. I’ll start with my left, Malhar. Earlier today, I had the opportunity to interview a very young panelist, a young speaker, Rahul, who’s, I think, one of the youngest speakers at the summit. He’s 16. We did a podcast with him earlier in the afternoon. And so I’m especially delighted to have another young entrepreneur, a college dropout, on my left. Malhar, if you can briefly introduce yourself.

Malhar Bhide

Yeah, thank you. Hi, I’m Malhar. I’m the co-founder and chief technology officer of Origin Bio. We’re a Y Combinator startup that is using AI to make safer genetic medicines for diseases like cancer.

Anirudh Suri

Thanks, Malhar. Navrina?

Navrina Singh

Absolutely not a college dropout, and not very young, but I’m the founder and CEO of Credo AI. We are an AI governance and trust management platform. For the past five years I’ve also been advising the White House on AI policy, and I work very closely with governments across the globe to really think about what AI guardrails should look like. So to your point, I think there is a very strong intersection of technology and policy happening right now. Excited to be here. Thank you.

Anirudh Suri

Arvind, last but not least.

Arvind Jain

Thank you, everyone. My name is Arvind. I’m the founder and CEO of Glean. Glean is an enterprise AI company, about seven years old; think of us like Google or ChatGPT but inside your company. Glean is a place where you can go and ask any questions or give it some tasks, and it uses all of the world’s knowledge, just like how ChatGPT does, but also uses all of the internal company’s data and knowledge to help people with their questions or their tasks.

Anirudh Suri

Incredible. I think we have a great set of panelists across various sectors and I think various angles of the AI entrepreneurship market. We’re going to have a focus on how entrepreneurship is evolving, right, in this session. I’m sure in other sessions in this summit you’ve heard a lot about deep tech entrepreneurs. You’ve heard about probably all sorts of AI entrepreneurs. Of course, you’ve heard some of the largest AI companies take the stage, etc., including our very own Sarvam out of India. But now I want to focus on how entrepreneurship is evolving. Before I ask my first question to the panelists, can I have a quick show of hands? How many of you here are entrepreneurs?

Big chunk. Wannabe entrepreneurs? Ex-entrepreneurs? People who decided to… too much and went into the corporate world, let’s say? A few. Good. So I think the biggest number is still entrepreneurs and want-to-be entrepreneurs. I think for some of us who’ve seen the previous waves of technology innovation, like, for example, most recently the consumer internet wave, and of course there have been multiple waves prior to that. And for those of you who are not familiar with the history of technological waves, I really encourage all of you to study the previous waves, because often you get to learn a lot from how the earlier waves of technological innovation panned out, what kind of entrepreneurs, what kind of companies succeeded.

But a little bit with that historical context in mind, Arvind, I want to come to you first. Compared to the wave of the consumer internet, where we saw firms like, of course, the Googles and the Facebooks of the world, the social media platforms, but then also the cab-hailing platforms and a lot of marketplaces and consumer-focused platforms emerged. We’ve seen a lot of, and at least in India I know this is the period over the last 10-15 years where entrepreneurship has become a buzzword, a desirable profession; you can drop out of college and your parents will still be happy about it, right. Compared to that wave, how is today’s wave of AI-driven entrepreneurship looking to you? Could you draw out for the audience and for us how these two waves might be similar for entrepreneurs and how they might be different?

Arvind Jain

yeah, so well first, you know, I think whenever there’s a new technology wave, it creates a lot of opportunities for new companies to get started that’s the right time for somebody to jump into the entrepreneurial journey and, you know, we’ve been through many of these in the past two or three decades, you know, starting with like you said consumer internet to mobile to social and of course now AI, and each one of these opportunities are similar in some ways like, you know, to start a company to be successful at it you have to have, like, first of all, the ambition to make, you know, to make one, to make a company. You have to have that sort of, you know, real deep desire to, you know, to be able to take the risk and have the courage to actually go start something.

You have to follow the recipe, the right recipes, which is finding, you know, the right business problem to solve, like something that actually creates a lot of business value in there. And it’s not so much about the technology trend. It’s, you know, you can use the technology trend as a way to see if you can solve that business problem better, but it always starts with the right business problem. And then, of course, like, you know, do the rest of, you know, the entrepreneurship journey, which is about, you know, building a great team, having a clear vision, working hard, and making things happen, right? So those are, like, you know, those things don’t change, like, you know, through these ways.

But I think one thing that is actually very unique about… the new AI wave is that it’s not only, I think we always had a blueprint in terms of what an organization needs to look like. When we went from consumer internet to mobile as two big technology trends, the shape of your company, how you build it, what kind of people you need to hire, those things were not actually changing that much. The blueprint was clear that you’re going to hire some engineers, you’re going to have product managers, some sales people. But now with AI, everything changes. In fact, the role of human itself is unclear in what roles seem to exist. In some sense, there’s some more challenge for the AI entrepreneurs, but also more opportunity to actually not know the basics.

You can actually start and chart a journey without knowing how to start a company. Reinventing yourself and thinking AI-first can help you build an organization that is very unconventional. And maybe that is what is going to create big success for you in the future.

Anirudh Suri

Do we expect, Arvind, startups to be even leaner now, given AI? I’ll give you a couple of examples. I have friends who are second-time, third-time entrepreneurs, people who successfully started and exited startups in the consumer internet wave. Now that they’re starting up in the AI era, they seem to need far fewer team members to get to a minimum viable product, and far fewer people doing the coding for them. That means a significantly smaller requirement for capital as well, because in the early days employee costs are often quite high. So do you expect this to continue, leaner startups?

Arvind Jain

Absolutely. In terms of the product and how far you can go with a very, very lean team, in fact a team of one person, it’s incredible. You can actually build a lot at that low cost. Ultimately, when you build a company at scale, people are your asset, and at some point you’re going to start growing, but you can do a lot before that. And I think one of the reasons companies will be leaner now is that it’s always on an entrepreneur’s mind, especially when you don’t have enough resources or funding, you’re always thinking about any piece of work that needs to happen.

Can the machine do it? Can AI do it? That mindset, of saying, I’m going to use AI to do most of the work that needs to get done in this company, is what is going to create significant efficiencies and a way to defeat the incumbents.

Anirudh Suri

Great. Let me move to you, Malhar. We had the opportunity to speak a little prior to the session, thank you very much. You’ve, of course, started very recently, so this is technically your first startup, I’m assuming, and your first in the AI age. Talk to us about how you are viewing entrepreneurship today. Is it any different than it was for the entrepreneurs and venture capitalists you may have read about or met, who started companies in the earlier waves? What’s your sense on the question I just asked Arvind? And secondly, how are you being AI-first in the company you’ve started? How are you leveraging AI, not just in the product itself, but in the organization, so to say?

Malhar Bhide

Yeah, I think one thing is that because of how good AI has gotten, knowledge has become a lot more democratized, so there’s less of an excuse not to work across different fields in this cross-disciplinary way. For context, my co-founder Yash and I have never studied biology. Our team is five people. Only one of them has graduated, and only one has studied biology, and they’re not the same person. So I think AI has allowed us to study a lot more, read a lot more papers, reach out to more scientists and learn from them, and reach out to more customers and understand what exactly they want.

And we use AI throughout. We do fundamental research. We train our own models from scratch. We’re starting a lot of wet lab work, where we use AI to predict the results of wet lab experiments so that we can be cost-efficient and work with a very limited budget. In some sense, what hasn’t changed from when we grew up watching movies like The Social Network is that even then it felt like the people who succeeded the most were the people who didn’t wait for permission, the example being Mark Zuckerberg. Even if you look at Jeff Bezos starting Amazon, it wasn’t Barnes & Noble that started a website to sell books.

I think with AI that sentiment hasn’t changed, but it’s probably easier to materialize and we’ve definitely gotten the benefit of that.

Anirudh Suri

Are you finding that research is more and more critical to your work compared to maybe earlier waves of startups?

Malhar Bhide

Yeah, it’s definitely critical. We’re working on using these AI models to design novel DNA sequences that act as switches. This involves training the models from scratch, working with public data, and starting and running experiments to get our own proprietary private data. So the entire thing hinges on the product, on our research producing an output that is biologically and scientifically viable. Even when we want to sell to biotech and pharma companies, or if we ever want to pursue our own therapeutic program, there are very rigorous requirements for the thing to actually work. One example, of course, is the FDA and needing clearance throughout, but even starting the clinical trial process is a lot of work and requires things to actually work.

Anirudh Suri

I’m going to keep coming back to this theme of research for a reason, but before I do that, Navrina, I want to bring you in. We started off the conversation talking about the intersection of AI with, I mentioned geopolitics, you mentioned policy, and you said you’re working closely with folks in D.C., in the White House, and elsewhere. For policy wonks like some of us, I also work at a think tank as a non-resident, this intersection of AI and policy is naturally interesting. But for, let’s say, a Malhar or an Arvind, who are not necessarily spending much time in D.C. with the policy crowd, why is understanding, or dealing with, this intersection of AI, policy, and geopolitics important?

Is it? And if it is, why?

Navrina Singh

Absolutely. Just by way of background, I’m an engineer by training; I spent 20 years building AI products in research and development at companies like Qualcomm and Microsoft. So I do want to ground this in why I think policy is becoming really critical for the technology, going back to something that Malhar said, which is really interesting. In the new AI wave, going from zero to one is now very, very easy. What becomes really interesting is whether you can get that product to be extremely reliable and robust. Can you explain those systems? There’s a combination of, I would say, scientific measurements needed to build that trust.

But there’s another thing that needs to happen, which is making sure these systems work within regulatory domains that require a lot of risk assessment and management. So what we are seeing is that the true moat for companies like Malhar’s is not just the technological innovation, because you’re able to do that much faster with a leaner team. It is how you do that consistently within the boundaries of the constraints and guardrails that a regulatory ecosystem imposes. Just as an example, we work with Fortune 500 companies in financial services and health care, and they are finding that they depend a lot on third-party AI, maybe tools like Glean.

But how does Glean work in context, if you are building, let’s say, a customer service chatbot? You want to make sure that the chatbot is not only aligning with your brand guidelines but also is not toxic, is highly reliable, and does the things it’s supposed to do. And if it is operating in a regulated sector, it is following, let’s say, HIPAA compliance, et cetera. So as you can imagine, it’s not just about building technology; it’s about building trusted technology that can work in the context we are talking about. That’s the exciting intersection of policy, governance, and tech that I see.

Anirudh Suri

If I can go a bit deeper on that: has it changed? Regulatory risk and policy risk have always been there. Take Qualcomm, Microsoft, a Jio, a Tata, any large company anywhere in the world, of course they’ll have regulatory and policy risk people, and of course that’s a big part of what they’re tracking. What has changed, if anything?

Navrina Singh

A lot has changed. Another thing I want to ground us in is that it’s not just about regulation. When you start thinking about AI risk, just because of the way these large language models are built, unless you ground them in real data, there are issues of hallucination. Are you actually getting the right outputs? Can these systems be reliable? What kind of evaluation benchmarks do you have? Have you actually done the testing across your entire AI supply chain? The thing is, this is not a static technology. It’s a very dynamic technology, and when it starts to operate in an e-commerce or customer context, or in a regulatory context, you have to prove that you can do it reliably.

So I would say that’s the biggest shift that we are seeing with AI and some of the applications.

Anirudh Suri

The other theme I want to keep going on, Navrina, and Malhar, Arvind, please feel free to chime in, is that at the core of our engagement with AI on the policy front is the fact that the technology is moving so fast, and governments, realizing that AI has massive ramifications for people and for existing structures, are saying: hey, listen, let’s rein it in before it goes out of our control. So there’s also a question of control here, for two reasons. One is that governments generally don’t like to cede too much control to the private sector, anywhere in the world.

But the bigger piece here is that when the technology, as you were saying, Navrina, is moving so fast, political leaders know that if some massive harm happens to people, governments will ultimately be held accountable. So it seems to me that it’s the nature of AI as a technology, and its massive ramifications, that make policy and geopolitical risk a big part of what entrepreneurs have to keep in mind. The question I have for you, though, is: has this become a function that every team has to have? Any startup has to have a CTO, a CEO, maybe a CFO, then product managers, et cetera. Is this becoming a role that is critical?

Malhar, do you have someone looking at this? Arvind, do you have people? Of course you’re a larger company. So let me start with Malhar. Do you have someone looking at this kind of risk?

Malhar Bhide

I think we do have the benefit of being a smaller company that isn’t putting things out for public use, where they could be harmful, until we’ve gone through all of the regulatory requirements and tested things in a setting where it is safe. But from a research perspective, people who work on AI biology models like ours work a lot on ensuring these models are safe, for example making sure other people can’t use them to design dangerous pathogens. So our entire team very actively keeps up with that research. We study that research. Everyone on our team is technical.

We study how they enforce those guardrails, so that when we need to start making things and turn them into products, we’re able to implement that.

Arvind Jain

Yeah, so first, we are enterprise focused, and in that sense the users of our product come to Glean and ask questions that are serious in nature; the answers define what work they’re going to do and what decisions they’re going to make. So first of all, even though the core foundation of the AI technology is stochastic modeling, it can make mistakes, it’s probabilistic, you have to work on top of that and ensure that you can actually deliver precise and accurate results, and refrain from answering questions or doing tasks if you’re not sure.

So a big part of our product experience is how you actually use AI safely and securely: how you do that constant judgment and evaluation of the work it produces, and do fact-checking, so that ultimately you’re not only delivering the right answers or task execution, but also showing the full trail of where the answers come from and what authoritative, human-generated information is being used. That is very core to the product experiences we deliver. But I think you’re also asking the question of how important it is to think about policy.

That is, working with governments and ensuring that the right regulations and laws are in place. For us, as an enterprise company, we don’t actually think a whole lot about that. But it is important to have these rules and regulations in place, because otherwise AI can do significant damage in the industry.

Anirudh Suri

Great. The other dynamic I now want to move on to is this: the tech industry, as I’m sure all of us have seen, and tech entrepreneurship generally, is in many ways defined by the idea of creative destruction. Companies arrive as startups and become big, and by the time they’re big, a whole new wave of tech comes in, and a whole new set of startups disrupts the incumbents. If you go into history, we’ve seen that time after time, wave after wave. So my question to all of you: is that principle of creative destruction going to continue with AI? Or are we going to see something different from the big tech firms of today, with the amount of capital they have, the talent they can hire, the balance sheets they have, the global scope of these companies, their ability to shape policy, et cetera?

Is this wave of big tech firms different? Or can we expect that the principle of creative destruction, of them getting disrupted sooner rather than later, is likely to continue? I’ll start off maybe, Arvind, with you, and then I’ll come to Navrina and Malhar.

Arvind Jain

Yeah, well, I think this creative destruction, or disruption rather, happens, and it will keep happening. Over the last 20 years, companies like Google and Microsoft have been big giants with all the resources in the world and all the policy-making power, and yet when you think about the innovation that happens in the tech industry, it often happens outside of those companies. That’s because the spirit of entrepreneurism is alive; it’s an innate human thing, and most bold ideas come from a single person who is passionate about solving a problem. I don’t think AI is going to change that. In fact, if anything, AI is going to make it even easier for people to create really interesting products and challenge large players, because now there’s more power in their hands. You don’t even need to be an engineer or an AI scientist to use these amazing technologies and turn your ideas into real products with very few resources.

So what I expect to see is more and more innovation happening, again, in startup land. But ultimately, the larger companies are well established and have large customer bases, so the pattern in the industry tends to be that innovation comes from startups, and then innovation scales at larger companies.

Anirudh Suri

Malhar, I want to ask you a slightly different question in a second, but Navrina, did you want to add to that first?

Navrina Singh

You should not be worried about another person, or even AI, taking your job. You should really be worried about a person who’s very good with AI replacing you. So I have started to think about disruption in the context of individuals, rather than of big tech versus startups. What are creators and entrepreneurs going to create when they can unlearn very fast? We don’t have a playbook right now for how you should succeed in the age of AI. So can a new set of entrepreneurs use these tools, quickly unlearn old habits, and be open and willing to try new ways of building faster?

I think that’s a healthier construct than thinking in the context of a company.

Anirudh Suri

Great. Thanks, Navrina. Malhar, I want to come to you now. We’ve spoken about how companies and startups are changing internally: they might look different, might be leaner, might be more research focused in the age of AI. We’ve spoken about the importance of policy and regulation, especially in the world of AI. But now I want to ask about the entrepreneurs themselves. Given we’re sitting in India, I’m going to ask the question from the perspective of India. You’re an Indian entrepreneur: you grew up in Bombay, you’re working out of San Francisco, you’re part of a Y Combinator batch, you dropped out of UIUC. Tell me from your perspective, Malhar, what does the Indian entrepreneur building a startup in the US today look like?

Is it any different from earlier generations of Indian entrepreneurs in the US? That’s one. And two, do you find some difference you can point to between an Indian entrepreneur who grew up in India and is now starting up in the US, versus someone who grew up in the US?

Malhar Bhide

Yeah, I think something that has been quite formative for me in moving to America is the process of actually leaving where you’ve grown up, going somewhere new, and setting up everything from scratch. That sets a certain tone and precedent for the work you do at your startup, in terms of taking risks, the people you hire, the things you do. In some sense that has stayed the same over the years, because the process has not really gotten easier. It might be easier to get information, book flights, and stay in contact with people, but the act itself is still incredibly hard.

I think that is one big difference. To get more specific, someone who has grown up in America is more aware of systems in America: how you sell to people there, what talent distributions look like there. Whereas someone like me, who grew up in India, has different advantages. If we believe that India has a large role to play in things like drug discovery in the coming decade, I know a lot about how drug discovery works here, how hospitals work here, how data is collected, how many patients are treated, and how diverse the patient body here is.

So I think there are those very specific advantages to it as well.

Anirudh Suri

Before I go to Arvind with the same question, can I see a quick show of hands? Does anyone have questions or quick comments? I want to make the last few minutes interactive. Any burning questions or comments in the audience? Okay. So while Arvind’s answering, just raise your hand so I have a sense of the room, and then we’ll try to get to you. Arvind, this is your second startup, so I think you might have some perspective on this.

Arvind Jain

Yeah, well, first of all, in technology a lot of startups are started by Indians, whether in Silicon Valley or, of course, here. One thing I think is interesting about the U.S. and Silicon Valley is the availability of capital, and the belief, specifically in the Indian diaspora, in their ability to go and build great companies. Look at the tech industry today: even in the large enterprises, a lot of the CEOs are Indian. I think what has made that happen, fundamentally, is that we are more hungry.

There’s something about our culture, and where we are as a nation, that gives Indian people that drive, and that is what is creating these incredible success stories for all of us. So that’s one thing I would say. Of course, I had the same thing; I had the desire to make something big, and that continues to drive me. But I work with a lot of young folks, a lot of people who joined our companies and then went on the entrepreneurship journey, and I continue to see the same pattern: it is the folks who grew up here and then relocated to the US who are the most likely to start companies and become entrepreneurs.

Anirudh Suri

I want to quickly open it up to the audience. I know we probably don’t have mics there, so can I see a quick show of hands again? One, two, and three. We have less than four minutes, so what I’m going to ask is that in 15 seconds you give us a question or comment. Three hands I see. We’ll start off here, then come to you, and then to you.

Audience

Hello, I have a question for Malhar. You have a multi-disciplinary startup right now, and you also said that you’re not from the biology field. So the question is: how did you find this problem to solve?

Anirudh Suri

Great, we’ll take these two also. You can just shout out while the mic comes.

Audience

Yeah, so my question is this. We had technology, and it became both a boon and a bane, and what evolved alongside it was the field of cybersecurity. Now we have AI, and we have the same fear of how AI can be used for good and for ill, plus the additional fear of hallucinations. By analogy with cybersecurity, are we going to have something like AI security, a new field that will come up? And how can you handle hallucination, can you give a relevancy score to the output?

Hi, my question is for Navrina, actually. I also work at an AI governance company called Protego. I attended sessions today with Amazon and Zoom, and these big leaders are saying that if we do governance at this stage, we will not see the ROI from AI, and it’s going to stop innovation in some manner. What’s your take on that? How do you advocate for AI governance, especially given your hands-on work with G42?

Anirudh Suri

Great. I’m sorry, I’m sure we could speak at length, but I literally have a timer at two and a half minutes. Navrina, I’m going to give you less than a minute, and then we’ll go across the room.

Navrina Singh

Yeah, it’s funny if you were in the Amazon room and they made this comment, because they were our first customers, so I’m surprised to hear that. But having said that, it’s actually very clear: we are seeing very clear ROI on AI governance. If you have clear visibility and a risk management practice, you can adopt third-party AI much faster, and you obviously see much greater productivity gains with that. Secondly, when you have governance, your AI deployment increases, so you can deploy more products faster to customers, and products that customers can trust more, and as a result you’re adding more to the top line.

So happy to share more details from our customers.

Anirudh Suri

Arvind, I think you can take the cyber question, and then, Malhar, we’ll come to you for the other one. Are we going to see a new field of AI and cybersecurity?

Arvind Jain

Oh, that’s right, yeah. So absolutely, AI is a very new technology, and it’s actually very gameable. There’s a new form of attacks, like prompt injection, coming into place. It’s a rapidly evolving new field with a lot of entrepreneurship opportunities. It’s about how you control what data and what information actually go to AI models, so that they work on good, safe data, but also how you make sure that whatever output comes back from AI, the responses, are not attack vectors. And similarly, you mentioned the related point of hallucinations.

Hallucination is actually a core feature of the current AI technology, unfortunately; this is how it’s built. So again, companies that can detect hallucinations, monitor them, and provide observability on them are also a good area and a field of discipline.

Anirudh Suri

Spoken like a good entrepreneur: anytime there’s a problem, there’s an opportunity. Malhar, you have something? 16, 15?

Malhar Bhide

For my co-founder and me, it started off as a deep intellectual interest more than anything; as college students, that was really all we had to go on. We were always interested in DNA: how your body regulates different cells, how it maintains healthy functioning, and what can really be learned by mining the genome. It started off with that, and after that we treated it very empirically, talking to customers, to scientists, and to doctors who know a lot more about this field. That was the start, and that’s how it continued.

Anirudh Suri

Great. I think we’re out of time, but I do hope all of you have taken something away from the session. I hope this summit has been a two-way conversation, and I want to end with this remark: it’s very important that the people sitting on the stage, whether it’s us or other panels, listen to all that you have to say, ask, and show, because the summit must be a two-way conversation. That’s especially important since so many students, entrepreneurs, and would-be entrepreneurs have come here. So please do take the time to find the panelists afterwards if you want.

And now let me end with best wishes to all of you in your entrepreneurship journeys. We hope to see all of you back here again soon, and thank you all for staying.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Arvind Jain
11 arguments · 172 words per minute · 1847 words · 640 seconds
Argument 1
AI wave creates opportunities similar to past waves but requires same fundamentals like ambition, risk-taking, and solving real business problems
EXPLANATION
Arvind argues that while AI represents a new technology wave creating entrepreneurial opportunities, the core requirements for startup success remain unchanged. Entrepreneurs still need ambition, courage to take risks, and must focus on solving real business problems rather than just following technology trends.
EVIDENCE
References to previous waves like consumer internet, mobile, and social media as examples of similar opportunity-creating technology shifts
MAJOR DISCUSSION POINT
Evolution of AI-driven entrepreneurship compared to previous technology waves
AGREED WITH
Malhar Bhide
Argument 2
AI wave uniquely changes organizational blueprints and human roles, allowing unconventional company building approaches
EXPLANATION
Unlike previous technology transitions from consumer internet to mobile, AI fundamentally changes how companies are structured and what roles humans play. This creates both challenges and opportunities for entrepreneurs to build organizations in unconventional ways without following traditional blueprints.
EVIDENCE
Comparison between consumer internet to mobile transition (where company structure remained similar) versus AI transition (where everything changes including human roles)
MAJOR DISCUSSION POINT
Evolution of AI-driven entrepreneurship compared to previous technology waves
Argument 3
AI enables much leaner startups with fewer team members, less coding staff, and lower capital requirements
EXPLANATION
AI allows entrepreneurs to accomplish much more with smaller teams, reducing the need for large coding staff and lowering initial capital requirements. A single person can now build products that previously required much larger teams.
EVIDENCE
Examples of second and third-time entrepreneurs starting with much smaller teams and lower capital requirements compared to their previous consumer internet startups
MAJOR DISCUSSION POINT
Impact of AI on startup structure and operations
AGREED WITH
Malhar Bhide
Argument 4
Entrepreneurs now think AI-first, asking if machines can do work instead of hiring people, creating significant efficiencies
EXPLANATION
The AI-first mindset means entrepreneurs constantly evaluate whether AI can perform tasks before considering human hiring. This approach creates significant operational efficiencies and helps resource-constrained startups compete with larger incumbents.
EVIDENCE
Description of entrepreneurs with limited resources and funding consistently asking whether AI can handle work that would otherwise require human employees
MAJOR DISCUSSION POINT
Impact of AI on startup structure and operations
Argument 5
Enterprise AI must deliver precise results despite probabilistic foundations, with fact-checking and authoritative source trails
EXPLANATION
Since enterprise users rely on AI for serious work decisions, companies must ensure accuracy despite AI’s inherently probabilistic nature. This requires implementing fact-checking mechanisms and showing clear trails to authoritative human-generated sources.
EVIDENCE
Glean’s approach of providing full trails showing where answers come from and what authoritative information is being used
MAJOR DISCUSSION POINT
AI safety, security, and risk management
Argument 6
AI creates new attack vectors like prompt injection, requiring new cybersecurity disciplines and entrepreneurship opportunities
EXPLANATION
AI’s unique characteristics make it vulnerable to new types of attacks such as prompt injection, creating an entirely new cybersecurity field. This represents both a challenge and an entrepreneurship opportunity for those who can develop solutions.
EVIDENCE
Specific mention of prompt injection as a new form of attack, and discussion of controlling data inputs to AI models and monitoring outputs
MAJOR DISCUSSION POINT
AI safety, security, and risk management
AGREED WITH
Navrina Singh
Argument 7
Hallucination is an inherent feature of current AI technology requiring detection, monitoring, and observability solutions
EXPLANATION
Rather than being a bug to be fixed, hallucination is a fundamental characteristic of how current AI technology works. This creates opportunities for companies that can detect, monitor, and provide observability around these hallucinations.
EVIDENCE
Description of hallucination as a ‘core feature’ of current AI technology and the need for companies to provide observability and monitoring solutions
MAJOR DISCUSSION POINT
AI safety, security, and risk management
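The observability idea in this argument (detecting likely hallucinations by checking whether each answer sentence is grounded in retrieved sources) can be sketched with a toy token-overlap score. This is an invented example under stated assumptions, not any vendor's method; production systems typically use entailment models rather than word overlap:

```python
import re

def token_set(text: str) -> set[str]:
    """Lowercased word tokens, a crude proxy for content overlap."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def grounding_report(answer: str, sources: list[str], threshold: float = 0.5):
    """Score each answer sentence by its best token overlap with any
    retrieved source passage; sentences below the threshold get flagged."""
    source_tokens = [token_set(s) for s in sources]
    report = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        tokens = token_set(sentence)
        if tokens and source_tokens:
            best = max(len(tokens & st) / len(tokens) for st in source_tokens)
        else:
            best = 0.0
        report.append((sentence, round(best, 2), best >= threshold))
    return report

# Hypothetical example: one retrieved source, and an answer whose second
# sentence is unsupported, so the monitor should flag it.
sources = ["Glean shows a full trail of the authoritative documents behind each answer."]
answer = "Each answer links to authoritative documents. The CEO was born in 1970."
for sentence, score, grounded in grounding_report(answer, sources):
    print(f"{'OK  ' if grounded else 'FLAG'} {score:.2f} {sentence}")
```

Treating hallucination as a feature to be monitored, as the argument suggests, means running checks like this continuously over live outputs rather than trying to eliminate the behavior at the model level.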
Argument 8
Innovation continues to come from startups despite big tech resources, as bold ideas originate from passionate individuals
EXPLANATION
Even though large tech companies have vast resources and policy influence, the most innovative ideas still emerge from individual entrepreneurs passionate about solving specific problems. The entrepreneurial spirit remains a fundamentally human trait that drives innovation outside large corporations.
EVIDENCE
Examples of Google and Microsoft as resource-rich giants, yet innovation continuing to happen outside these companies over the past 20 years
MAJOR DISCUSSION POINT
Creative destruction and disruption in the AI era
AGREED WITH
Navrina Singh
Argument 9
AI makes it easier for non-engineers to create products and compete with large players using fewer resources
EXPLANATION
AI democratizes product creation by enabling people without engineering backgrounds to build sophisticated products with minimal resources. This levels the playing field between startups and large established companies.
EVIDENCE
Statement that ‘you don’t even need to be an engineer, you don’t need to be an AI scientist to actually use these amazing technologies and build and sort of turn your ideas into real products with very few resources’
MAJOR DISCUSSION POINT
Creative destruction and disruption in the AI era
Argument 10
Indian entrepreneurs demonstrate exceptional hunger and drive that creates success stories in Silicon Valley
EXPLANATION
Indian entrepreneurs possess a cultural drive and hunger that stems from their national context, which translates into exceptional success in the tech industry. This cultural characteristic is evidenced by the high number of Indian CEOs in major tech companies.
EVIDENCE
Observation of many Indian CEOs in large tech enterprises and the pattern of Indian entrepreneurs starting companies and achieving success
MAJOR DISCUSSION POINT
Indian entrepreneurs in the global AI landscape
Argument 11
Indians who relocate to the US are most likely to become entrepreneurs due to their cultural background and determination
EXPLANATION
Based on patterns observed from working with many young professionals, those who grew up in India and then moved to the US show the highest likelihood of starting companies and pursuing entrepreneurship. This suggests that the combination of Indian cultural background and international relocation experience creates ideal entrepreneurial conditions.
EVIDENCE
Personal observation from working with many young people who joined companies and later became entrepreneurs, noting the pattern among those who grew up in India and relocated
MAJOR DISCUSSION POINT
Indian entrepreneurs in the global AI landscape
Malhar Bhide
7 arguments, 187 words per minute, 985 words, 314 seconds
Argument 1
AI democratizes knowledge and enables cross-disciplinary work with smaller teams and limited resources
EXPLANATION
AI has made knowledge more accessible, reducing barriers to working across different fields. This allows entrepreneurs to enter domains they haven’t formally studied and operate effectively with very small teams and constrained budgets.
EVIDENCE
Origin Bio’s five-person team, in which neither co-founder studied biology; only one team member has graduated, and only one studied biology (not the same person)
MAJOR DISCUSSION POINT
Evolution of AI-driven entrepreneurship compared to previous technology waves
AGREED WITH
Arvind Jain
Argument 2
The core entrepreneurial spirit of not asking permission remains unchanged from previous waves
EXPLANATION
Despite technological changes, the fundamental entrepreneurial characteristic of taking initiative without seeking permission continues to be crucial for success. This pattern was evident in previous successful entrepreneurs and remains relevant in the AI era.
EVIDENCE
References to Mark Zuckerberg and Jeff Bezos as examples of entrepreneurs who didn’t wait for permission, noting that it was not an established company like Barnes &amp; Noble that started selling books online
MAJOR DISCUSSION POINT
Evolution of AI-driven entrepreneurship compared to previous technology waves
AGREED WITH
Arvind Jain
Argument 3
Small teams can accomplish more with AI tools, with research becoming more critical to startup success
EXPLANATION
AI enables small teams to achieve significant results while making fundamental research increasingly important. Success depends on the research producing scientifically and biologically viable outputs that meet rigorous regulatory standards.
EVIDENCE
Origin Bio’s work training models from scratch, conducting wet lab experiments, and using AI to predict experimental results for cost efficiency
MAJOR DISCUSSION POINT
Impact of AI on startup structure and operations
Argument 4
Smaller companies benefit from not releasing products publicly until passing regulatory requirements and safety testing
EXPLANATION
Being a smaller company provides the advantage of being able to thoroughly test and validate products in safe environments before public release. This allows for proper regulatory compliance and safety verification without the pressure of immediate public deployment.
EVIDENCE
Origin Bio’s approach of keeping AI biology models internal until they pass regulatory requirements and safety testing, particularly regarding preventing misuse for dangerous pathogen design
MAJOR DISCUSSION POINT
Intersection of AI technology with policy and governance
DISAGREED WITH
Arvind Jain, Navrina Singh
Argument 5
Indian entrepreneurs bring risk-taking experience from relocating internationally, which translates to startup boldness
EXPLANATION
The experience of leaving one’s home country and establishing oneself in a new environment develops risk-taking capabilities that directly benefit entrepreneurial ventures. This international relocation experience creates a mindset that embraces uncertainty and bold decision-making in business contexts.
EVIDENCE
Personal experience of moving from Mumbai to San Francisco and the challenge of setting up an entirely new life, which hasn’t gotten easier despite better information and communication tools
MAJOR DISCUSSION POINT
Indian entrepreneurs in the global AI landscape
Argument 6
Growing up in India provides advantages in understanding local markets, systems, and opportunities like drug discovery
EXPLANATION
Entrepreneurs with Indian backgrounds possess deep knowledge of Indian systems, markets, and opportunities that can be valuable for certain business domains. This local expertise becomes particularly relevant as India plays a larger role in sectors like drug discovery.
EVIDENCE
Knowledge of how drug discovery works in India, hospital systems, data collection methods, patient treatment approaches, and patient diversity
MAJOR DISCUSSION POINT
Indian entrepreneurs in the global AI landscape
Argument 7
Deep intellectual curiosity can drive problem identification, followed by empirical validation through customer and expert conversations
EXPLANATION
Successful entrepreneurship can begin with genuine intellectual interest in a field, which then gets validated and refined through systematic engagement with potential customers and domain experts. This approach combines passion-driven exploration with rigorous market validation.
EVIDENCE
Origin Bio’s founding story starting with intellectual interest in DNA and genome regulation, followed by extensive conversations with customers, scientists, and doctors
MAJOR DISCUSSION POINT
Research-driven entrepreneurship approach
Navrina Singh
6 arguments, 179 words per minute, 880 words, 294 seconds
Argument 1
AI governance creates competitive moats through reliable, robust, and explainable systems that work within regulatory constraints
EXPLANATION
The true competitive advantage for AI companies lies not just in technological innovation, but in building systems that are consistently reliable, robust, and explainable while operating within regulatory frameworks. This creates sustainable differentiation in the market.
EVIDENCE
Examples of Fortune 500 companies in financial services and healthcare needing AI systems that are non-toxic, reliable, and compliant with regulations like HIPAA
MAJOR DISCUSSION POINT
Intersection of AI technology with policy and governance
DISAGREED WITH
Arvind Jain, Malhar Bhide
Argument 2
True differentiation comes from building trusted technology that consistently operates within regulatory boundaries
EXPLANATION
While building AI technology has become easier and faster with leaner teams, the real challenge and opportunity lies in creating technology that can be trusted to work consistently within the constraints and guidelines of regulatory ecosystems. This becomes the key differentiator for successful companies.
EVIDENCE
Work with Fortune 500 companies requiring AI systems to align with brand guidelines, avoid toxicity, maintain reliability, and follow sector-specific compliance requirements
MAJOR DISCUSSION POINT
Intersection of AI technology with policy and governance
Argument 3
AI’s dynamic nature requires proving reliability across the entire AI supply chain, unlike static technologies
EXPLANATION
Unlike traditional static technologies, AI systems are dynamic and present unique challenges including hallucination, reliability issues, and the need for comprehensive testing across the entire AI supply chain. This requires new approaches to evaluation and risk management.
EVIDENCE
Issues of hallucination in large language models, need for grounding in real data, evaluation benchmarks, and testing across AI supply chains
MAJOR DISCUSSION POINT
Intersection of AI technology with policy and governance
AGREED WITH
Arvind Jain
Argument 4
AI governance shows clear ROI through faster AI adoption, increased deployment, and improved customer trust leading to revenue growth
EXPLANATION
Contrary to concerns that governance slows innovation, proper AI governance actually accelerates business value by enabling faster adoption of third-party AI, quicker deployment of more products, and building customer trust that drives top-line revenue growth.
EVIDENCE
Customer examples showing faster third-party AI adoption, increased AI deployment speed, and revenue growth from trusted AI products
MAJOR DISCUSSION POINT
AI safety, security, and risk management
DISAGREED WITH
Audience
Argument 5
Disruption should be viewed at the individual level – entrepreneurs who master AI tools will replace those who don’t
EXPLANATION
Rather than thinking about disruption in terms of big tech versus startups, the real competitive dynamic is between individuals who effectively leverage AI tools and those who don’t. The focus should be on personal capability development rather than company size.
EVIDENCE
Observation that people should worry about being replaced by someone skilled with AI rather than by AI itself or by company type
MAJOR DISCUSSION POINT
Creative destruction and disruption in the AI era
AGREED WITH
Arvind Jain
Argument 6
Success requires unlearning old habits and trying new approaches since there’s no established AI playbook
EXPLANATION
The absence of established best practices in AI entrepreneurship means success depends on the ability to quickly unlearn previous approaches and experiment with new methods. Entrepreneurs who can adapt fastest to this uncertain environment will have advantages.
EVIDENCE
Emphasis on the lack of existing playbooks for AI success and the need for openness to trying new ways of building
MAJOR DISCUSSION POINT
Creative destruction and disruption in the AI era
Audience
1 argument, 173 words per minute, 253 words, 87 seconds
Argument 1
Questions about multidisciplinary problem-solving, AI security fields, and governance ROI reflect real entrepreneurial challenges
EXPLANATION
The audience raised practical questions about how entrepreneurs identify problems in unfamiliar fields, whether new cybersecurity disciplines will emerge for AI, and how to justify AI governance investments. These questions highlight the real-world challenges entrepreneurs face in the AI era.
EVIDENCE
Specific questions about finding biology problems without biology background, emergence of AI security fields, handling hallucinations, and ROI concerns from major companies like Amazon and Zoom
MAJOR DISCUSSION POINT
Audience engagement and practical concerns
DISAGREED WITH
Navrina Singh
Anirudh Suri
7 arguments, 166 words per minute, 2279 words, 820 seconds
Argument 1
Studying previous technological waves provides valuable lessons for understanding how current AI entrepreneurship might evolve
EXPLANATION
Historical analysis of technological innovation waves, such as the consumer internet wave, offers important insights into patterns of entrepreneurship, company formation, and market dynamics. Understanding these patterns can help current entrepreneurs and investors better navigate the AI wave by learning from what types of entrepreneurs and companies succeeded in previous cycles.
EVIDENCE
References to consumer internet wave, social media platforms, cab hailing platforms, marketplaces, and the emergence of companies like Google and Facebook
MAJOR DISCUSSION POINT
Evolution of AI-driven entrepreneurship compared to previous technology waves
Argument 2
Entrepreneurship has become a desirable and socially acceptable career path, even allowing college dropouts to gain parental approval
EXPLANATION
The transformation of entrepreneurship from a risky, unconventional path to a mainstream, desirable profession represents a significant cultural shift. This change in social perception has lowered barriers to entry for potential entrepreneurs and created a more supportive environment for startup formation.
EVIDENCE
Observation that in India over the last 10-15 years, entrepreneurship became a buzzword and parents would be happy about college dropouts pursuing entrepreneurship
MAJOR DISCUSSION POINT
Evolution of AI-driven entrepreneurship compared to previous technology waves
Argument 3
The intersection of AI technology with geopolitics and policy creates new considerations that entrepreneurs must understand
EXPLANATION
Unlike previous technology waves, AI’s potential for massive societal impact means that geopolitical and policy considerations are becoming integral to entrepreneurship rather than peripheral concerns. Entrepreneurs can no longer focus solely on technology and market dynamics without considering broader policy implications.
EVIDENCE
Reference to his work on technology and geopolitics through ‘The Great Tech Game’ book and podcast, and the inclusion of policy experts in entrepreneurship discussions
MAJOR DISCUSSION POINT
Intersection of AI technology with policy and governance
Argument 4
The speed of AI technology development is forcing governments to implement controls before the technology becomes uncontrollable
EXPLANATION
Governments worldwide are recognizing that AI’s rapid advancement and potential for significant societal impact requires proactive regulation rather than reactive responses. This creates a new dynamic where policy development is trying to keep pace with technological innovation, making regulatory risk a central concern for AI entrepreneurs.
EVIDENCE
Discussion of governments wanting to ‘rein it in before it goes out of control’ and the accountability pressure on political leaders for potential AI-related harm
MAJOR DISCUSSION POINT
Intersection of AI technology with policy and governance
Argument 5
AI entrepreneurship may require dedicated policy and regulatory risk management roles similar to traditional C-suite positions
EXPLANATION
The complexity and importance of AI-related policy and regulatory risks may necessitate specialized roles within startup teams, potentially making policy expertise as critical as technical or financial expertise. This represents a fundamental shift in startup organizational structure and skill requirements.
EVIDENCE
Questions to panelists about whether startups need dedicated people looking at policy risks, similar to having CTOs, CEOs, and CFOs
MAJOR DISCUSSION POINT
Intersection of AI technology with policy and governance
Argument 6
The principle of creative destruction in technology may face new challenges from the unprecedented resources and scope of current big tech companies
EXPLANATION
While creative destruction has historically allowed new startups to disrupt established companies across technology waves, the current generation of big tech firms possesses unprecedented capital, talent, global reach, and policy influence. This raises questions about whether traditional disruption patterns will continue or if these companies have built more durable competitive moats.
EVIDENCE
References to the historical pattern of large companies being disrupted by startups across technology waves, contrasted with current big tech firms’ resources, balance sheets, and policy-shaping abilities
MAJOR DISCUSSION POINT
Creative destruction and disruption in the AI era
Argument 7
Tech summits should facilitate two-way conversations between industry leaders and entrepreneurs rather than one-way presentations
EXPLANATION
Effective knowledge transfer and ecosystem development requires interactive dialogue where established leaders learn from emerging entrepreneurs and students, not just the reverse. This approach recognizes that innovation often comes from unexpected sources and that diverse perspectives enrich the entire ecosystem.
EVIDENCE
Emphasis on making the session interactive, encouraging audience questions, and stating that panelists should listen to what attendees have to say
MAJOR DISCUSSION POINT
Audience engagement and practical concerns
Agreements
Agreement Points
AI enables leaner startups with smaller teams and lower resource requirements
Speakers: Arvind Jain, Malhar Bhide
AI enables much leaner startups with fewer team members, less coding staff, and lower capital requirements
AI democratizes knowledge and enables cross-disciplinary work with smaller teams and limited resources
Both speakers agree that AI fundamentally changes startup structure by allowing entrepreneurs to accomplish more with fewer people and resources. Arvind notes that a single person can now build products that previously required larger teams, while Malhar demonstrates this with his 5-person team entering biology without formal training in the field.
Core entrepreneurial spirit and fundamentals remain unchanged despite AI
Speakers: Arvind Jain, Malhar Bhide
AI wave creates opportunities similar to past waves but requires same fundamentals like ambition, risk-taking, and solving real business problems
The core entrepreneurial spirit of not asking permission remains unchanged from previous waves
Both speakers emphasize that while AI changes the tools and methods, the fundamental entrepreneurial characteristics of ambition, risk-taking, and not waiting for permission remain as important as ever. They reference successful entrepreneurs like Mark Zuckerberg and Jeff Bezos as examples of this timeless entrepreneurial spirit.
AI creates new cybersecurity challenges requiring specialized solutions
Speakers: Arvind Jain, Navrina Singh
AI creates new attack vectors like prompt injection, requiring new cybersecurity disciplines and entrepreneurship opportunities
AI’s dynamic nature requires proving reliability across the entire AI supply chain, unlike static technologies
Both speakers recognize that AI introduces fundamentally new security challenges that differ from traditional cybersecurity. Arvind specifically mentions prompt injection attacks, while Navrina emphasizes the dynamic nature of AI systems requiring new approaches to reliability and risk management across the entire AI supply chain.
Innovation continues to come from startups despite big tech advantages
Speakers: Arvind Jain, Navrina Singh
Innovation continues to come from startups despite big tech resources, as bold ideas originate from passionate individuals
Disruption should be viewed at the individual level – entrepreneurs who master AI tools will replace those who don’t
Both speakers believe that despite the massive resources of big tech companies, innovation will continue to emerge from smaller entities. Arvind emphasizes that bold ideas come from passionate individuals, while Navrina reframes the discussion to focus on individual capability rather than company size, arguing that skilled AI users will outcompete those who don’t master these tools.
Similar Viewpoints
All three speakers agree that AI democratizes capabilities and levels the playing field, allowing people without traditional technical backgrounds to build sophisticated products and compete effectively. They emphasize that AI removes traditional barriers to entry and enables new approaches to problem-solving.
Speakers: Arvind Jain, Malhar Bhide, Navrina Singh
AI makes it easier for non-engineers to create products and compete with large players using fewer resources
AI democratizes knowledge and enables cross-disciplinary work with smaller teams and limited resources
Success requires unlearning old habits and trying new approaches since there’s no established AI playbook
Both speakers emphasize the critical importance of building trustworthy, reliable AI systems, especially for enterprise use. They agree that despite AI’s inherently probabilistic nature, companies must implement mechanisms to ensure accuracy, traceability, and regulatory compliance to succeed in the market.
Speakers: Arvind Jain, Navrina Singh
Enterprise AI must deliver precise results despite probabilistic foundations, with fact-checking and authoritative source trails
AI governance creates competitive moats through reliable, robust, and explainable systems that work within regulatory constraints
Both speakers, as Indian entrepreneurs who relocated to the US, agree that the combination of Indian cultural drive and international relocation experience creates ideal conditions for entrepreneurship. They see the challenge of establishing oneself in a new country as developing valuable risk-taking capabilities that benefit startup ventures.
Speakers: Malhar Bhide, Arvind Jain
Indian entrepreneurs bring risk-taking experience from relocating internationally, which translates to startup boldness
Indians who relocate to the US are most likely to become entrepreneurs due to their cultural background and determination
Unexpected Consensus
AI governance provides clear business value rather than hindering innovation
Speakers: Navrina Singh, Arvind Jain
AI governance shows clear ROI through faster AI adoption, increased deployment, and improved customer trust leading to revenue growth
Enterprise AI must deliver precise results despite probabilistic foundations, with fact-checking and authoritative source trails
This consensus is unexpected because the audience question suggested that major companies like Amazon and Zoom view governance as potentially limiting ROI and innovation. However, both speakers strongly disagreed, with Navrina providing specific evidence of governance enabling faster adoption and Arvind emphasizing the business necessity of reliable AI systems. This suggests a significant gap between perception and reality regarding AI governance value.
Research becomes increasingly critical in AI entrepreneurship
Speakers: Malhar Bhide, Anirudh Suri
Small teams can accomplish more with AI tools, with research becoming more critical to startup success
Studying previous technological waves provides valuable lessons for understanding how current AI entrepreneurship might evolve
This consensus is somewhat unexpected because previous technology waves often emphasized speed to market and rapid iteration over deep research. However, both speakers agree that AI entrepreneurship requires more fundamental research and historical understanding, suggesting a shift toward more knowledge-intensive startup approaches compared to the ‘move fast and break things’ mentality of earlier internet waves.
Overall Assessment

The speakers demonstrated strong consensus on several key themes: AI’s democratizing effect on entrepreneurship, the persistence of core entrepreneurial values despite technological change, the critical importance of building trustworthy AI systems, and the continued role of startups in driving innovation. They also agreed on the unique advantages that Indian entrepreneurs bring to the global startup ecosystem.

High level of consensus with complementary perspectives rather than disagreement. The speakers built upon each other’s points and provided reinforcing examples from their different backgrounds (enterprise AI, biotech startup, AI governance). This strong alignment suggests a mature understanding of AI entrepreneurship challenges and opportunities, with implications for policy makers and entrepreneurs that AI governance should be viewed as an enabler rather than a constraint, and that traditional entrepreneurial fundamentals remain relevant in the AI era.

Differences
Different Viewpoints
Role of policy and regulatory considerations in AI startups
Speakers: Arvind Jain, Navrina Singh, Malhar Bhide
For us, like, you know, as an enterprise company, you know, that, you know, we don’t actually think a whole lot about that. But it is important to sort of have, you know, these rules and regulations in place because otherwise AI can actually do significant damage, you know, in the industry.
AI governance creates competitive moats through reliable, robust, and explainable systems that work within regulatory constraints
Smaller companies benefit from not releasing products publicly until passing regulatory requirements and safety testing
Arvind suggests enterprise companies don’t need to focus heavily on policy considerations, while Navrina argues governance creates competitive advantages, and Malhar emphasizes the benefits of regulatory compliance for smaller companies
Impact of AI governance on innovation and ROI
Speakers: Audience, Navrina Singh
Questions about multidisciplinary problem-solving, AI security fields, and governance ROI reflect real entrepreneurial challenges
AI governance shows clear ROI through faster AI adoption, increased deployment, and improved customer trust leading to revenue growth
Audience members expressed concerns that AI governance might hinder innovation and ROI (citing Amazon and Zoom leaders), while Navrina argued that governance actually accelerates business value and shows clear ROI
Unexpected Differences
Necessity of dedicated policy roles in AI startups
Speakers: Anirudh Suri, Malhar Bhide, Arvind Jain
AI entrepreneurship may require dedicated policy and regulatory risk management roles similar to traditional C-suite positions
Smaller companies benefit from not releasing products publicly until passing regulatory requirements and safety testing
For us, like, you know, as an enterprise company, you know, that, you know, we don’t actually think a whole lot about that
While the moderator suggested policy expertise might become as critical as technical roles, the panelists showed varying levels of engagement with this idea, with some dismissing its importance for their specific contexts
Overall Assessment

The discussion revealed moderate disagreements primarily around the role and importance of AI governance and policy considerations in startup operations, with speakers having different perspectives based on their company size, target markets, and regulatory environments

The disagreements were constructive and reflected different business contexts rather than fundamental philosophical differences. The varying perspectives on AI governance’s role suggest that the field is still evolving and different approaches may be valid depending on the specific startup’s circumstances and market focus.

Partial Agreements
Both agree that AI creates new security challenges requiring specialized solutions, but they focus on different aspects – Arvind emphasizes new attack vectors and entrepreneurship opportunities, while Navrina focuses on the need for comprehensive reliability testing across AI supply chains
Speakers: Arvind Jain, Navrina Singh
AI creates new attack vectors like prompt injection, requiring new cybersecurity disciplines and entrepreneurship opportunities
AI’s dynamic nature requires proving reliability across the entire AI supply chain, unlike static technologies
Both agree that AI enables smaller, more efficient teams, but Arvind focuses on operational efficiency and cost reduction, while Malhar emphasizes knowledge democratization and cross-disciplinary capabilities
Speakers: Arvind Jain, Malhar Bhide
AI enables much leaner startups with fewer team members, less coding staff, and lower capital requirements
AI democratizes knowledge and enables cross-disciplinary work with smaller teams and limited resources
Takeaways
Key takeaways
AI-driven entrepreneurship enables much leaner startups with smaller teams, lower capital requirements, and democratized access to knowledge across disciplines
The fundamental entrepreneurial principles remain unchanged (ambition, risk-taking, solving real business problems), but AI uniquely transforms organizational structures and human roles
AI governance and policy compliance are becoming competitive differentiators, not obstacles – companies with robust governance can adopt AI faster and build more trusted products
Research has become more critical to startup success in the AI era, with teams needing to prove scientific and regulatory viability
Creative destruction will continue despite big tech advantages, as innovation still originates from passionate individuals who can now leverage AI tools more effectively
Indian entrepreneurs bring unique advantages including cultural drive, risk-taking experience from international relocation, and deep understanding of local markets
New cybersecurity disciplines are emerging around AI-specific threats like prompt injection and hallucination detection, creating entrepreneurship opportunities
Success in the AI era requires continuous unlearning and adaptation since established playbooks don’t exist yet
Resolutions and action items
Entrepreneurs should adopt an ‘AI-first’ mindset, consistently evaluating whether machines can perform tasks before hiring people
Startups should invest in AI governance early as it enables faster deployment and builds customer trust leading to revenue growth
Teams should focus on building reliable, explainable AI systems that work within regulatory constraints rather than just technological innovation
Entrepreneurs should engage empirically with customers, scientists, and domain experts to validate problems and solutions
Companies should implement fact-checking mechanisms and authoritative source trails for AI outputs to ensure accuracy
Unresolved issues
Whether every startup team needs dedicated AI governance/policy roles or if this can be distributed across existing team members
How to balance innovation speed with regulatory compliance requirements, especially for early-stage startups
The long-term implications of AI democratization on competitive moats and sustainable business advantages
Specific frameworks for detecting and managing AI hallucinations across different use cases and industries
How smaller startups can compete with big tech companies that have vast resources for AI safety and governance infrastructure
Suggested compromises
Smaller companies can delay public product releases until regulatory requirements are met and safety testing is complete, allowing them to operate with less formal governance initially.
Startups can leverage AI governance as a competitive advantage rather than viewing it as a constraint on innovation.
Teams can use AI tools to enhance rather than replace human expertise, particularly in regulated industries like healthcare and finance.
Entrepreneurs can balance technological innovation with business problem-solving by using AI as an enabler rather than the primary focus.
Thought Provoking Comments
But now with AI, everything changes. In fact, the role of human itself is unclear in what roles seem to exist… You can actually start and chart a journey without actually knowing how to start a company. Because reinventing yourself, thinking AI first, can actually help you build an organization, which is very unconventional.
This comment fundamentally challenges traditional entrepreneurship paradigms by suggesting that AI doesn’t just change what companies do, but transforms the very blueprint of how organizations are structured and operated. It introduces the radical idea that conventional business knowledge may be less relevant in the AI era.
This shifted the conversation from comparing AI to previous tech waves to exploring how AI fundamentally disrupts organizational structures. It prompted Anirudh to immediately follow up with questions about leaner startups and led to deeper exploration of how AI changes team composition and capital requirements.
Speaker: Arvind Jain
Because of how good AI has gotten, knowledge has gotten a lot more democratized. And so there’s less of an excuse to actually be able to work in different fields in this sort of cross-disciplinary nature… One of the people from our team has graduated. Only one person has studied biology and they’re not the same people.
This insight reveals how AI is breaking down traditional educational and expertise barriers, enabling entrepreneurs to enter highly specialized fields without conventional credentials. It challenges the notion that deep domain expertise is a prerequisite for innovation in technical fields.
This comment reinforced Arvind’s point about unconventional organizations and led Anirudh to probe deeper into the role of research in AI startups. It established a theme about how AI democratizes access to complex fields that continued throughout the discussion.
Speaker: Malhar Bhide
You should not be worried about another person or even like AI taking your job. You should really be worried about a person who’s so good with AI actually replacing you… What are creators and entrepreneurs going to create just when they can unlearn very fast rather than… we don’t have a playbook right now for how you should be succeeding in the age of AI.
This reframes the entire disruption narrative from a company-level phenomenon to an individual-level transformation. It introduces the concept of ‘unlearning’ as a critical skill and acknowledges the absence of established success patterns in the AI era.
This comment fundamentally shifted the discussion about creative destruction from focusing on big tech vs. startups to examining individual adaptability and learning agility. It introduced a more nuanced view of disruption that influenced how the panelists discussed entrepreneurial advantages.
Speaker: Navrina Singh
The true moat that is happening for companies like Malhar is not just the technological innovation, because it is, you know, you’re able to do that much faster with a leaner team. But it is how do you do that consistently within the boundaries of the constraints and guidelines… it’s not just about building technology, but it is about building trusted technology.
This insight challenges the common assumption that speed and lean operations are the primary advantages in AI entrepreneurship. Instead, it positions regulatory compliance and trust as the new competitive moats, fundamentally changing what constitutes sustainable competitive advantage.
This comment elevated the discussion from technical capabilities to strategic positioning, leading to deeper exploration of whether AI governance should be a core function in every startup. It connected the technical and policy aspects of the conversation in a meaningful way.
Speaker: Navrina Singh
I think fundamentally, I think, you know, we are more hungry. You know, we are, like, you know, I think there’s something about, like, in our culture, you know, and where we are as a nation, there is, you know, that drive, you know, that, you know, Indian people have, you know, and which is what is actually creating, you know, this incredible success.
This comment introduces cultural and socioeconomic factors as key drivers of entrepreneurial success, moving beyond purely technical or market-based explanations. It suggests that hunger and drive stemming from cultural background may be more important than technical expertise or resources.
This observation added a cultural dimension to the entrepreneurship discussion and validated Malhar’s earlier point about the advantages of relocating and taking risks. It helped explain why Indian entrepreneurs have been particularly successful in Silicon Valley beyond just technical skills.
Speaker: Arvind Jain
Overall Assessment

These key comments collectively transformed what could have been a standard ‘AI entrepreneurship’ discussion into a nuanced exploration of fundamental shifts in how businesses are built, operated, and sustained. The conversation evolved from surface-level comparisons between tech waves to deep structural questions about organizational design, competitive moats, individual adaptability, and cultural advantages. The panelists built upon each other’s insights, creating a layered understanding that AI isn’t just another technology trend but a force that’s rewriting the rules of entrepreneurship itself. The discussion successfully bridged technical, policy, and cultural perspectives, offering the audience a comprehensive view of the changing entrepreneurial landscape.

Follow-up Questions
How do you handle AI hallucinations and can you give a relevancy score to the output?
This addresses a critical technical challenge in AI systems where models generate false or misleading information, which is particularly important for enterprise and regulated applications
Speaker: Audience member
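The "relevancy score" idea raised in this question can be illustrated with a toy grounding check. The sketch below is a hypothetical illustration only: the `grounding_score` function and its token-overlap heuristic are invented for this note, and production hallucination detectors use far stronger techniques such as natural-language-inference models or retrieval-augmented fact checking.

```python
# Hypothetical sketch: a crude "relevancy score" for a model answer,
# measured as the fraction of answer tokens that also appear in the
# source passages the answer is supposed to be grounded in.
# All names here are invented for illustration.

def tokenize(text: str) -> set:
    """Lowercase, split on whitespace, strip surrounding punctuation."""
    return {w.strip(".,!?;:") for w in text.lower().split()}

def grounding_score(answer: str, sources: list) -> float:
    """Return the share of answer tokens found in at least one source."""
    answer_tokens = tokenize(answer)
    if not answer_tokens:
        return 0.0
    source_tokens = set()
    for s in sources:
        source_tokens |= tokenize(s)
    return len(answer_tokens & source_tokens) / len(answer_tokens)

sources = ["The Mahabharata is an ancient Indian epic in the public domain."]
print(grounding_score("The Mahabharata is an ancient Indian epic.", sources))  # high
print(grounding_score("The epic was written in 1923 in Paris.", sources))      # low
```

A system built along these lines could flag outputs whose score falls below a threshold for human review, which is the spirit of the audience member's question.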
Will we see the emergence of a new field equivalent to cybersecurity for AI, specifically ‘AI security’?
This explores whether AI’s unique vulnerabilities and attack vectors will require specialized security disciplines beyond traditional cybersecurity
Speaker: Audience member
How do you advocate for AI governance when big tech leaders claim it stops innovation and prevents ROI from AI?
This addresses the tension between implementing AI safety measures and maintaining innovation speed, a key policy and business challenge
Speaker: Audience member (works at Protego AI governance company)
How did you find the problem to solve in biology when you’re not from the biology field?
This explores how entrepreneurs can identify opportunities in domains outside their formal training, particularly relevant in the AI era where cross-disciplinary work is becoming more common
Speaker: Audience member to Malhar
What specific evaluation benchmarks and testing methods are needed across the entire AI supply chain?
This was mentioned as part of building reliable AI systems but not fully explored, representing a critical area for ensuring AI system reliability
Speaker: Navrina Singh (implied)
How do you implement guardrails to prevent AI models from being used to design dangerous pathogens?
This addresses biosafety concerns in AI-driven biological research, which is crucial for responsible development of AI in biotechnology
Speaker: Malhar Bhide (referenced ongoing research)
What are the specific advantages and challenges of Indian entrepreneurs building startups in the US versus those building in India?
This topic was touched upon but could benefit from deeper exploration of cultural, systemic, and market differences affecting entrepreneurial success
Speaker: Anirudh Suri (moderator)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion AI and the Creative Economy

Panel Discussion AI and the Creative Economy

Session at a glanceSummary, keypoints, and speakers overview

Summary

This panel discussion examined the complex relationship between artificial intelligence and cultural diversity in creative industries, featuring perspectives from business, policy, and open-source communities. The central question explored whether AI strengthens or weakens global creative output, with panelists agreeing that the answer depends largely on how AI systems are designed, governed, and implemented.


Anna Tumadote from Creative Commons emphasized that outcomes depend on whether AI models are open source with transparent governance frameworks or closed and opaque systems. She noted an emerging ethical inconsistency where artists object to AI training on their work while simultaneously using AI tools trained on global content. Nicholas Granatino from Tara Gaming presented India’s unique opportunity, arguing that the country’s rich public domain heritage, particularly ancient epics like the Mahabharata and Ramayana, could provide valuable training data for AI models while allowing India to create new intellectual property layers on top of this cultural foundation.


Kenichiro Natsume from WIPO acknowledged the challenge of achieving global consensus among 194 member states on AI governance, advocating for practical technological solutions rather than lengthy international treaty negotiations. The discussion revealed significant concerns about cultural gatekeeping by major AI platforms and the potential shrinking of creative commons as creators become more restrictive with their work due to lack of consent mechanisms.


All panelists agreed that human-centered approaches should guide AI governance in creative industries, emphasizing that creativity fundamentally remains a human activity that AI should enhance rather than replace.


Keypoints

Major Discussion Points:

AI’s Impact on Cultural Diversity: The panel explored whether AI strengthens or weakens global creative output, with consensus that the answer depends on factors like open vs. closed source models, governance frameworks, and design principles. There’s current risk of weakening diversity due to underrepresentation of certain cultures in training datasets.


India’s Strategic Opportunity in AI: Discussion of how India’s rich public domain heritage (particularly the Itihasas – ancient epics like Mahabharata and Ramayana) could provide a competitive advantage while the US and Europe face copyright paralysis in AI training, allowing India to create new IP layers on top of public domain content.


Global IP Framework Preparedness: Examination of whether current intellectual property systems can handle large-scale AI-generated content, with acknowledgment that achieving consensus among 194 WIPO member states for new international treaties would be extremely challenging, leading to more pragmatic technological solutions.


Ethical Inconsistencies in AI Use: Analysis of the contradiction where artists object to AI training on their work without consent while simultaneously using AI tools trained on global datasets, highlighting the tension between individual creator rights and collective creative advancement.


Future of Open Creative Commons: Discussion of how the open movement faces challenges from AI’s massive scale, with creators pulling back from sharing work, potentially shrinking the creative commons and breaking human-to-human collaboration that relies on copyright clarity.


Overall Purpose:

The discussion aimed to examine the intersection of artificial intelligence and intellectual property in creative industries, exploring both opportunities and challenges for global creative output, with particular focus on how different stakeholders (business, policy, and open source communities) can navigate the rapidly evolving landscape.


Overall Tone:

The discussion maintained a thoughtful, nuanced tone throughout, with panelists acknowledging complexity rather than offering simple solutions. The tone was collaborative and forward-looking, with speakers building on each other’s points. While there was underlying concern about potential negative impacts of AI on creativity and cultural diversity, the overall atmosphere remained constructive and solution-oriented, emphasizing the need for human-centered approaches and technological solutions that balance various stakeholder interests.


Speakers

Speaker: Moderator/Host of the discussion panel


Nicholas Granatino: Chairman of Tara Gaming, investor who sits on boards of frontier lab companies like Kivita and H company, focuses on business side and gaming industry


Kenichiro Natsume: Assistant Director General at WIPO (World Intellectual Property Organization), works on policy side and international intellectual property matters


Anna Tumadote: Chief Executive Officer of Creative Commons, expert in open movement and creative commons licensing


Additional speakers:


None identified beyond the speaker names in the list provided.


Full session reportComprehensive analysis and detailed insights

This panel discussion brought together three distinct perspectives to examine the complex intersection of artificial intelligence and cultural diversity in creative industries. The conversation featured Anna Tumadote, CEO of Creative Commons; Kenichiro Natsume, Assistant Director General at WIPO; and Nicholas Granatino, Chairman of Tara Gaming, each offering insights from their respective domains of open-source advocacy, international policy, and business innovation.


The Fundamental Question: Does AI Strengthen or Weaken Cultural Diversity?

The central inquiry explored whether AI enhances or diminishes global creative output, with all panellists agreeing that the answer fundamentally depends on how AI systems are designed, governed, and implemented. Anna Tumadote emphasised that outcomes hinge on whether AI models are open source with transparent governance frameworks or closed and opaque systems. She noted that whilst the open movement has traditionally managed problems on the margins—what she termed “5% problems”—AI’s massive scale has transformed these marginal issues into systemic challenges that threaten the entire creative ecosystem.


Kenichiro Natsume provided a nuanced perspective from the international policy standpoint, arguing that AI represents another disruptive technology in human history, similar to previous innovations that the copyright system has successfully accommodated. However, he acknowledged the unprecedented speed differential between human and AI learning processes, noting that whilst humans have always learned by mimicking others’ work—indeed, the Japanese term for “learn” originally meant “mimic or copy”—AI accomplishes this at incomparably faster rates.


Nicholas Granatino presented perhaps the most provocative argument, introducing the concept of “Captain America hegemony” to illustrate how Western cultural dominance in digital content creates systemic biases in AI training datasets. He argued that India’s rich oral traditions, despite representing 20% of the world’s population, remain dramatically underrepresented compared to Hollywood content and AAA gaming, creating fundamental imbalances in how AI models understand and generate creative content.


India’s Strategic Opportunity in the AI Landscape

A significant portion of the discussion focused on India’s unique positioning in the global AI development race. Granatino argued that whilst the United States and Europe face “massive uncertainty” and “paralysis” regarding AI training and copyright permissions, India possesses a strategic advantage through its rich public domain heritage. The itihasas—epic narratives including the Mahabharata and Ramayana—represent cultural wisdom that remains both in the public domain and part of living tradition.


This presents what Granatino termed India’s “biggest opportunity”: the ability to create new intellectual property layers on top of established public domain content whilst the West grapples with copyright restrictions. He highlighted his collaboration with Sarvam AI to digitise this cultural heritage using OCR and voice models, potentially creating training datasets that could give India a competitive edge in AI development. The strategic value lies not merely in the content itself, but in having 20% of the world’s population actively engaged with and capable of generating rich datasets around these cultural narratives in latent space.


Global IP Framework Challenges and Pragmatic Solutions

The discussion revealed significant scepticism about the readiness of current intellectual property frameworks to handle large-scale AI-generated content. Natsume candidly acknowledged the practical impossibility of achieving consensus among WIPO’s 194 member states for comprehensive international AI treaties, describing it as “a long journey” that the international community is not yet mature enough to undertake.


Instead, WIPO is pursuing what Natsume termed a “pragmatic approach,” focusing on technological solutions rather than legal frameworks. This approach aims to create technological infrastructure that allows creators to be appropriately rewarded whilst enabling tech companies to understand what content can and cannot be used. Natsume’s frank assessment that stakeholders “don’t have to be necessarily happy very much” but should be “unhappy to some extent equally” reflects the reality of international compromise in complex technological governance.


The moderator, drawing from experience with the Broadcast Treaty negotiations, referenced President Macron’s observation that “it’s not about regulation, it’s about civilization,” highlighting the deeper cultural implications of AI governance decisions.


The Attribution Crisis and Shrinking Commons

One of the most striking aspects of the discussion was the frank acknowledgement of ethical inconsistencies within the creative community. Tumadote identified a fundamental contradiction where artists object to AI training on their work without consent whilst simultaneously using AI tools trained on global datasets. This inconsistency reflects deeper tensions between individual creator rights and collective creative advancement.


The conversation revealed that this ethical confusion is contributing to what Tumadote described as “the shrinking of the commons”—creators are becoming more restrictive with their work due to lack of consent mechanisms, which ironically damages human-to-human collaboration that relies on copyright clarity. Tumadote argued that the issue increasingly resembles a labour concern rather than a copyright problem, with creators primarily fearing replacement rather than unauthorised copying.


The lack of attribution at the inference level—where users cannot see the origins of AI-generated content—further divorces people from understanding the creative sources that inform AI outputs. Nicholas illustrated this problem with a powerful analogy about the Nobel Prize for protein folding: while the prize recognised the AI breakthrough, it overlooked the thousands of contributors to the Protein Database whose work made the AI training possible.


However, the discussion also highlighted positive examples of AI-enhanced creativity. Anna mentioned artists like Holly Herndon and Imogen Heap as pioneers who are successfully integrating AI into their creative processes whilst maintaining human agency and artistic vision.


Technological Solutions Over Legal Frameworks

The panel converged on the need for practical, technological approaches to AI governance challenges. Rather than waiting for comprehensive legal frameworks, stakeholders are exploring technological infrastructure that can provide immediate solutions. WIPO plans to launch its first meeting on this technological approach on March 17th, signalling a shift from traditional treaty-making towards more immediate, practical solutions that can evolve with the technology.


These technological solutions might include better attribution systems, consent management platforms, and reward mechanisms that can operate across different jurisdictions and legal frameworks. Rather than simple opt-in or opt-out mechanisms, Tumadote advocated for a spectrum of “yes if” conditions—creators might agree to AI training if they receive attribution, rewards, or other forms of recognition.
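Tumadote's "yes if" spectrum can be thought of as machine-readable consent metadata attached to a work. The sketch below is purely illustrative: no such standard exists today, and the `TrainingPreference` fields and condition names are invented for this example.

```python
# Hypothetical sketch of "yes if" consent signals for AI training.
# A creator publishes a preference; a trainer checks whether the terms
# it can offer satisfy every condition the creator attached.
from dataclasses import dataclass, field

@dataclass
class TrainingPreference:
    allow_training: bool                           # the base yes/no signal
    conditions: set = field(default_factory=set)   # e.g. {"attribution", "remuneration"}

def may_train(pref: TrainingPreference, offered_terms: set) -> bool:
    """Training is permitted only if allowed AND every condition is met."""
    return pref.allow_training and pref.conditions <= offered_terms

# A creator who says "yes, if you attribute me":
pref = TrainingPreference(allow_training=True, conditions={"attribution"})
print(may_train(pref, {"attribution", "remuneration"}))  # True
print(may_train(pref, {"remuneration"}))                 # False
```

The design point is that consent becomes a spectrum of conditions rather than a binary opt-in/opt-out flag, matching the "attribution, rewards, or other forms of recognition" conditions described above.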


Business Perspectives and Investment Opportunities

From the business standpoint, Granatino expressed optimism about AI’s potential, comparing the current moment to the early internet era but noting that AI development is progressing “much faster than the internet.” He emphasised that successful AI implementation will require collaboration between technological tools and human creativity, rather than replacement of human creative processes.


The investment thesis centres on recognising that AI enhances rather than replaces human creativity, particularly in complex creative endeavours involving hundreds of people. Granatino argued that storytelling—a fundamentally human skill—will become increasingly valuable in business contexts, suggesting that the future lies in human-AI collaboration rather than competition.


Human-Centred Governance as the Core Principle

Despite the complexity of the challenges discussed, all panellists agreed on a fundamental principle: AI governance must remain human-centred. Both Natsume and Tumadote emphasised that creativity fundamentally originates from human activity, and any governance framework must preserve and enhance human agency rather than diminish it.


This human-centred approach extends beyond simple preservation of human roles to ensuring that AI development serves human flourishing and cultural diversity. The discussion suggested that successful AI governance will require maintaining space for human creativity whilst harnessing AI’s capabilities to enhance rather than replace human creative processes.


The conversation concluded with shared recognition that whilst the challenges are significant, the opportunities for positive transformation remain substantial if stakeholders can develop collaborative, human-centred approaches that preserve cultural diversity whilst harnessing AI’s creative potential. The key lies in developing technological solutions that can bridge the gap between creators’ desire for agency and the practical needs of AI development, ensuring that the future of AI serves human creativity rather than replacing it.


Session transcriptComplete transcript of the session
Speaker

Yes. Yes. Three crucial elements. I’ll start with Nicholas Granatino, who’s on the business side, the chairman of Tara Gaming. Second is Kenichiro Natsume, who is the assistant director general at WIPO on the policy side. And we have Anna Tumadote, who is the chief executive officer of Creative Commons. Since we have half an hour, I don’t want to waste time. I have this urge to do a lot of context setting, but I will refrain. And I will jump into the first question for each of the panelists, and then we can do it more conversational. First question to each of the panelists. Anna, can we start with you? Does AI strengthen or weaken cultural diversity in the global creative output?

Anna Tumadote

It’s a good question, and I think we can just do our context setting along the way here with this. So I think this is one of those wonderful questions where the answer is going to be, it depends. It depends. Is the model open source, and we can all interrogate what’s in it and build upon it and improve it? Or is it closed source? Is the model got good governance frameworks attached to it so we can understand some of the intentions behind it or is it all very opaque? So I think ultimately it’s going to just come down to what sorts of values and design principles we’re able to instill as we build this new ecosystem.

I think currently we are at risk for a weakening.

Kenichiro Natsume

Thank you very much. My answer is not a binary answer. Because if we think about the intellectual property aspect, namely copyright, then AI is okay. AI can be used to enhance copyrightable artworks. At the same time, it could also be a kind of threat, because people can create or generate something by using artificial intelligence. And in terms of the legal or international perspective, we do have a copyright system which, in my personal opinion, can still cope with artificial intelligence, because I think that artificial intelligence at this stage is one of the cutting-edge technologies, which is magnificent, but still one of the cutting-edge or disruptive technologies we have experienced in the history of our human life.

Thank you.

Nicholas Granatino

Yeah. Namaste. I think I will actually give an answer which is very positive on the creativity side, and that’s what we saw at Tara Gaming as an opportunity: the fact that from the web to the social graph to now AI, I think we always underestimate and don’t talk enough about the data and the content, and in the case of AI that is what is called the latent space, which includes the training data set used for these models to respond to prompts. And if we look at the case of India, which has been mainly an oral tradition, and we look also at how much of its content has been digitized to the level of Hollywood in movies or AAA games in gaming, it is not represented.

And that has huge implications in terms of these models and whether or not they enhance creativity or whether or not they need creativity. And my answer to that is they need creativity. They need actually the wonderful epics and stories and living traditions that are in the Itihasas, which are these Indian epics, to be represented in their data sets. And at the moment, they’re not. And so the bigger question for me is how do we make sure that India, with its phenomenal history and culture and 20% of the world population, punches at its weight in the training data sets of these AI models. And the first work that needs to be done is a creative work. It’s not an AI work.

Speaker

You know, Nicholas, when I sent these questions, the kick I got was that Nicholas pushed back happily, and he gave me three questions, one of which I told him really fired my imagination. I want to ask him about this aspect and this whole AI-IP dispute: the US and Europe are currently facing massive uncertainty in terms of AI training and copyright assets and permissions. And depending on who you are, that’s either stalling development or not. You’ve said, Nicholas, that this paralysis is India’s biggest opportunity. How does India’s rich public domain heritage, the itihasas, as you called it, give us a strategic edge right now?

Nicholas Granatino

Yeah, so the Itihasas, actually, maybe most of you know one part of the Mahabharata, which is one of the Itihasas. The second one is the Ramayana. The one I’m referring to is the Gita. And so maybe most of you are more familiar with the Gita, which is actually just a short part of the Mahabharata. These are epics that are thousands of years old. What’s phenomenal about them is they are in the public domain, but they are still living tradition. Parents and grandparents still tell their children about Lord Ram, about Ravan, about Hanuman. Prime Minister Modi speaks about it all the time as well. And they are in the public domain. And the thing is, nobody has actually done any work with them and created any IP on top of that, which is what we’re looking to do at Tara Gaming.

And so there is basically a data set. And what’s really exciting about Sarvam AI is that they’re actually going to help to digitize, with their OCR model, with their voice model, all this culture which is in the public domain, and which can enter, through Sarvam and through the works that we’re doing, the data set used for training. And that creates an opportunity, I think, which is quite unique. And the second layer is the fact that you have 20% of the world population here in India to talk about it and create rich data sets about that.

Speaker

And the interesting point is that on this public domain content and body of work, you have this opportunity to create IP. And Ken, the question is for you. Is the global IP framework prepared for large-scale AI-generated content today?

Kenichiro Natsume

Thank you very much for the very interesting but big question. It’s not very easy to answer in a short time, but let me try. Artificial intelligence, particularly in the area of the creative industry, is of course changing the work of artists and creators. And I’ve been discussing this, let’s say in India, for example, taking advantage of my visit to Delhi, meeting with different stakeholders: publishers, the music industry, or tech industries. And the views are very much different. It’s no secret, and you know that the views are very much different. That’s the reality in front of us. And even among the same segment, for example publishers, there are different views. And for example artists: one of my artist friends, she’s using AI to create digital art; at the same time, other artists are feeling threatened by the artwork or output generated by so-called generative AI. So it’s not very straightforward, and the reality in front of us is that there are very much different views, and it’s not so easy to find a common denominator because the views are so different.

It’s not something like zero and one and let’s see 0 .5. It’s not like a mathematic. So if we see about the intellectual property system, at this stage our view is the following because that we, WIPO, as a UN international organization sometimes are asked that, hey WIPO, why don’t you think about making some rules or regulations, international treaty of AI and intellectual property. Not sounds bad, but how long does it take? We have 194 member states. Our principle is consensus. Okay. So 194, meaning 194 member states, including of course India, should agree upon one common thing. and just to be frank with you, don’t quote me, it’s a long journey. Sounds relatively easy. Yes, yes. So our approach is more pragmatic.

Okay, we can think of something, a legal framework, but let’s put it aside, because the international scene is not mature enough. It’s not ready enough. So our idea is, let’s think about something more practical, more technological, to see if there is any technological solution possible that both the creators’ side and the industry side, the tech industry side, can live with. I would say can live with. They don’t have to be necessarily very happy. They have to be unhappy to some extent equally. They have to compromise with each other. But it’s not an international legal negotiation. It’s a collaboration or cooperation to explore and find some technological infrastructure that people can live with, so that creators can be benefited or remunerated, and tech industries can utilize those products.

Thank you.

Speaker

Anna, you know one of the things Ken said is an artist and use AI to create artworks and you have artists today and if I look at the music industry film industry you have composers who use AI to generate production music and a lot of musical works maybe not declared or otherwise but it’s a huge industry but at the same time the creator voice often complains about the lack of consent in AI training whereas most of them work on AI models which have trained on a worldwide corpus of data and content. Do you see a sort of ethical or rational inconsistency here in this sort of kind of use and objection or what do you think is the correct position?

Anna Tumadote

Is there an ethical inconsistency? Yes, yes is the answer. It’s funny that you get the, hey, WIPO, can you fix this? Because Creative Commons gets that from time to time, too. In fact, in the early days of generative AI, we were getting, hey, Creative Commons, can you fix AI for us? We’re like, okay, which direction are we going to fix it in? I’m just kidding. We never suggested that we would actually fix it. But it actually comes down to this question that you were asking. So here we have the world’s creativity that has been scraped, crawled, trawled, however you want to describe it, and we built these massive foundation models.

And the relative weight of every individual work in there is infinitesimal. It’s tiny. The bigger concern is what use of these technologies does to the creative industries, right? It’s more a fear of replacement. It’s a labor issue. It actually feels increasingly less like a copyright issue when we’re talking about some of these considerations. But then there’s that layer on top, at the inference level, where you’re querying, tell me a story about this, or tell me about a certain concept, or whatever the case may be, and we’re not seeing where that information is coming from, right?

So we’re sort of divorced from the origin of the creativity, or the knowledge, or whatever it was, and that, I think, is going to be a longer-term problem for these tools, because you’re not really going to trust how they work. And you see it show up similarly with the artistry piece, right? So we have artists who have always embraced the free culture movement, who give things over to the public domain or share them under a Creative Commons license, and who are enthusiastically experimenting with these technologies. We have artists, too, with vast bodies of work, who are building their own models and simply enhancing their own craft.

So, interesting examples to look at would be Holly Herndon and Imogen Heap, who have really been at the forefront of this. But at the same time, to your point, there are artists who are playing with this but say, no, no, no, anything that I create is mine, while freely using all the world’s creativity. We have to find some kind of middle ground here, because ultimately all knowledge builds on prior knowledge. All creativity builds on prior creativity. The richness of the public domain, like Nicholas was talking about: you can walk into any museum and be inspired. You don’t have to say, I saw this work and that work and that work, and now I’ve sketched this drawing.

But there is something with the technological layer where, if you are fusing together different things or asking for certain styles or certain inspirations, there really should be some form of credit given.

Speaker

Please, Nicholas.

Nicholas Granatino

Yeah, I mean, I think everybody has celebrated, including the Nobel Committee, the work of Demis Hassabis at DeepMind on protein folding. But the reality is that there have been scientists who for 50 years were putting their protein crystal structures into a database called the Protein Data Bank. I think it would have been nice of the Nobel Committee to also include the Protein Data Bank as a recipient of that Nobel Prize. And actually, the data has always been swept under the carpet. A lot of big tech is saying content is free; what people pay for is search, tech, AI, whatever it is: the link, the link graph, the social graph, and now the AI model.

And so, the question I want to pose is, do we as a society want AI to have the best data? And the answer is probably yes, on the condition that you are open. But if you are going to make money at the gateway of the chat, or whatever your application is, and you’re going to assume that everybody works for you, I don’t think society wants that. President Macron yesterday said it’s not about regulation, it’s about civilization. And as a civilization, what do we want as a future? How much do we want to reward the work of these protein crystallographers, of these creatives, and so on? That’s the real question.

Speaker

Please, Ken.

Kenichiro Natsume

Just one quick note. Anna’s comment was very touching to me because, as you mentioned, we human beings refer to or learn from other people’s creative work, which is true. I’m Japanese, and the Japanese term for learn originally meant mimic or copy. So learning starts from looking at another person’s work and trying to imitate it, and based on that we develop our own flavor or texture. And this has been done by human beings for ages. The big difference is that if it’s done by a human, it takes time. But if it’s done by a computer, it takes very, very limited time. So what it’s doing is essentially more or less the same, but the speed is completely different.

And maybe we have to draw a line. This is exactly what Anna was saying. And where exactly do we have to draw that line? That is a difficult question.

Anna Tumadote

You know, it’s so funny you say that, because even in the open movement and in the open knowledge communities we’ve had problems for years, but they’ve been problems on the margins, right? They’ve been the sort of 5% problem, and we’ve thought to ourselves, this is 95% good, and so that’s good enough. But AI comes along, and the scale is so massive that now you have to grapple with this. So, for instance, what if you’ve shared your work freely, and then it’s used for nefarious purposes? Nobody wants that. That’s another ethical conundrum you face. But copyright is not built to handle that. Society, ideally, would find ways around that.

Maybe there are normative frameworks we need to introduce. Maybe we need to think about different legal or technical solutions, because the scale is just so extreme.

Speaker

So, Anna, is the open movement something that can withstand this, the AI onslaught or the spread of AI? Is it structured to do that, or would it face the same challenges as, let me call it, the proprietary copyright model, the traditional copyright model?

Anna Tumadote

I think there’s a transformation that’s going to have to happen, because what we’re actively seeing is people pulling back from sharing their works. If they have no consent mechanism or no agency over how their work is used, they’re going to do the only thing they can in that situation: take it back, put it behind a wall, try to make you pay for it, or pull any of the other levers available to them. And we are actively seeing the shrinking of the commons already. This is a really bad outcome. And here’s the real kicker when we’re talking about how humans have been doing this for a long time.

Human-to-human collaboration relies on copyright clarity, on the CC licenses, on the ability that, if I write something, you know what you can do with it because it’s under this license, and so on and so forth. But now we’re seeing creators say, no, no, I’m going to go more restrictive. And that breaks the human collaboration element. So there are all these downstream negative consequences. I think we can withstand it, but collectively we have to reckon with the fact that there is a problem, and the scale is so magnificent that we can’t just stick our fingers in our ears and say, ultimately this is in the public interest.

It’s not going to be that way because if nobody shares, then there’s nothing left for us.

Speaker

You know, Nicholas, to your point on the opportunity for India in its itihas and public domain, the opportunity for India to create that IP layer: isn’t there also the danger that we might slip into cultural gatekeeping by a handful of AI platforms? Is that an overwhelming danger as well?

Nicholas Granatino

Yeah, absolutely. I mean, Europe and Mistral are pushing a lot for open source, and China is pushing a lot in the open source community too. But that’s just a layer above what I think we’re talking about here today. There’s something I call the Captain America hegemony, which is basically that we all grew up, in France, in India, anywhere, with Captain America as this kind of powerful figure with all the weapons, all the defense mechanisms, et cetera. And if you have open source, you just have a gateway that is free to steal a corpus of creativity that sits within this Captain America hegemony. I think we have to remember that AI just sits on top of something that has already been made, and so the creative process is not really part of that.

It’s above language, it’s above image, et cetera. And everybody in the AI community agrees there are two or three things that need to happen to reach AGI. Those two or three things probably overlap with a lot of the creativity that makes us unique and allows us to make this corpus that AI will continue to be trained on. So I’m quite optimistic, because there is something that is pre-art, whether it’s text, image, video, or games, that is still a creative process, sometimes involving hundreds of people. It’s not that you’re going to have agents speaking to each other and creating a game.

We’re very far from that.

Speaker

Ken, to your point on needing to find a middle ground: realistically, as a copyright lawyer, the Berne Convention, Rome, the whole push toward a harmonized, largely coexistent copyright model across the world has generally worked. The one exception being the Broadcast Treaty, which I started working on as a young associate and which is still being discussed; I love those documents because I can see how the conversation has changed. But in the context of IP and AI, is global harmonization realistic, or is fragmentation something we’ll all have to live with?

Kenichiro Natsume

I wish I could immediately say yes. However, the reality is, as I mentioned briefly before, that reaching consensus among 194 member states, including this country and other big countries, is not always easy. That’s why at this stage we are opting for a somewhat softer approach, so that a technological solution, a technological platform or infrastructure, could actually solve the issues: creators can be rewarded or remunerated appropriately, and tech companies can easily recognize what is opted in and what is opted out, which artwork was generated by artificial intelligence and which was made by a human being, so that they understand what can be done and what cannot be done.

So that’s the approach we are taking. And just for your information, we will launch the first meeting of that next month, on March 17th, which will be available online. So please stay tuned. Thank you.

Anna Tumadote

Oh, it’s in our calendar. Great, thank you very much. We’ll be there, because I was thinking about this global standard and framework question. One of the things we’ve tried to do with the Creative Commons community is think about, all right, what are the things that everybody wants in this moment? What choices do they want? And what are the conditions under which they would share? You can imagine it going everywhere from, to your point, the opt-out in the EU, where it’s, no, I’m not interested in this (very important that we maintain limitations and exceptions there for research), all the way to the full yes, because maybe there’s a world where people say, put me in, put me in and tell people who I am.

But somewhere in between, there’s a “yes, if”: yes if you reward me, yes if you attribute me, yes if you contribute to this project, yes if you support the open infrastructure, et cetera. And I think we just have to get a lot more creative and nuanced in that spectrum.

Speaker

I sense that, at least for the short term, countries are going to try to find their own policy solutions, and I would hope that the intention is to harmonize as much as possible, because the implications of the AI business across the world require harmonization, scream for harmonization, and so will businesses that use those models to create more IP or IP-like content. Nicholas, you sit on the boards of frontier lab companies like Kivita and H Company. If there’s one mandate you would give to Indian investors and the creators in this room to ensure that India is not just a consumer of AI by 2030, what would it be?

Nicholas Granatino

No, I think as an investor, it’s a tremendous time. It feels like the internet all over again. There’s lots of opportunity. It’s moving very fast, much faster than the internet, so it’s a bit difficult to pick the right opportunity. But I think it’s going to be a collaboration between these wonderful tools that we have and human creativity. And that is going to stay. Some people say storytelling is going to be the main skill in business, and that is very much a human quality. So the future is bright.

Speaker

I’m seeing a big flashing red sign which says time’s up. I don’t know if it’s mine or the panel’s; I’m hoping it’s only the panel’s. But I’m going to do a little Don Quixote thing and ask one last question: what single principle should guide international AI governance in the creative industries over the next decade? Ken?

Kenichiro Natsume

That’s a big question. It says time’s up, so let me answer very briefly. I think we should take a human-centered approach, because creativity still comes from human beings’ activity, not from artificial intelligence. That is one fundamental principle we should hold to. Thank you.

Anna Tumadote

I’ll just say plus one. Just keep the humans. Keep the humans at the center.

Speaker

Insightful answer as always. Thank you. Thank you to the panel. Thank you to this very engaging audience. Thank you for listening to us.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Anna Tumadote
6 arguments · 198 words per minute · 1312 words · 395 seconds
Argument 1
AI’s impact depends on whether models are open source with good governance or closed and opaque; currently at risk of weakening diversity
EXPLANATION
Anna argues that AI’s effect on cultural diversity depends on the design principles and governance frameworks of AI models. She emphasizes that open source models with transparent governance can strengthen diversity, while closed, opaque systems pose risks to cultural diversity in creative output.
EVIDENCE
She mentions the importance of being able to interrogate what’s in AI models, build upon them, and improve them, contrasting this with opaque systems.
MAJOR DISCUSSION POINT
AI’s Impact on Cultural Diversity and Creative Output
Argument 2
There is ethical inconsistency when artists use AI trained on global content while objecting to their own work being used without consent
EXPLANATION
Anna identifies a contradiction where creators use AI models trained on worldwide content while simultaneously complaining about lack of consent in AI training. She notes this creates an ethical dilemma about the relative weight of individual works in massive datasets versus concerns about replacement and labor issues.
EVIDENCE
She explains that individual works have ‘infinitesimal’ weight in training datasets and that the bigger concern is what AI does to creative industries as a labor issue rather than a copyright issue.
MAJOR DISCUSSION POINT
Ethical Inconsistencies in AI Use by Creators
Argument 3
AI’s massive scale transforms marginal problems into major issues; people are pulling back from sharing, shrinking the commons
EXPLANATION
Anna explains that while the open movement previously dealt with problems affecting only 5% of cases, AI’s massive scale has amplified these issues significantly. This has led to creators restricting their sharing practices, which negatively impacts the commons and human collaboration.
EVIDENCE
She mentions that copyright is not built to handle issues like works being used for nefarious purposes at this scale, and that people are actively taking content back or putting it behind paywalls.
MAJOR DISCUSSION POINT
Scale and Speed Differences Between Human and AI Learning
AGREED WITH
Kenichiro Natsume
DISAGREED WITH
Kenichiro Natsume
Argument 4
The open movement faces transformation as creators restrict sharing due to lack of consent mechanisms, breaking human collaboration
EXPLANATION
Anna warns that the open movement must adapt because creators are becoming more restrictive with their works when they lack agency over how AI uses their content. This shift toward restrictive licensing breaks the human-to-human collaboration that relies on copyright clarity and Creative Commons licenses.
EVIDENCE
She explains that human collaboration depends on copyright clarity and CC licenses, but creators are now going more restrictive, which has downstream negative consequences for collaboration.
MAJOR DISCUSSION POINT
Future of Open Movement and Creative Commons
DISAGREED WITH
Nicholas Granatino
Argument 5
Need for more nuanced spectrum of sharing conditions from opt-out to conditional sharing with attribution or rewards
EXPLANATION
Anna advocates for a more sophisticated approach to content sharing that goes beyond simple opt-in/opt-out mechanisms. She suggests various conditional sharing arrangements that could include attribution, rewards, or contributions to projects as middle-ground solutions.
EVIDENCE
She provides examples ranging from EU’s opt-out approach to ‘yes if’ conditions like ‘yes if you reward me, yes if you attribute me, yes if you contribute to this project, yes if you support the open infrastructure.’
MAJOR DISCUSSION POINT
Global Harmonization vs Fragmentation in AI Governance
AGREED WITH
Kenichiro Natsume
DISAGREED WITH
Kenichiro Natsume
Argument 6
Keeping humans at the center of AI development and governance
EXPLANATION
Anna emphasizes the fundamental principle that humans should remain central to AI governance and development decisions. This represents her core belief about how AI systems should be designed and managed.
EVIDENCE
She simply states ‘plus one’ to Ken’s human-centered approach and adds ‘Just keep the humans. Keep the humans at the center.’
MAJOR DISCUSSION POINT
Guiding Principles for AI Governance
AGREED WITH
Kenichiro Natsume
Kenichiro Natsume
6 arguments · 133 words per minute · 952 words · 428 seconds
Argument 1
AI can enhance copyrighted artworks but also poses threats; copyright system can still cope with AI as another disruptive technology
EXPLANATION
Kenichiro argues that from an intellectual property perspective, AI presents both opportunities and challenges for copyright. He believes the existing copyright system is capable of handling AI as it has handled other disruptive technologies throughout history.
EVIDENCE
He mentions that AI is ‘one of the cutting edge technologies which is magnificent but one of the cutting technologies or disruptive technologies we have been experienced in the history of our human life.’
MAJOR DISCUSSION POINT
AI’s Impact on Cultural Diversity and Creative Output
Argument 2
Different stakeholders have vastly different views; finding consensus among 194 WIPO member states for international treaties would take too long
EXPLANATION
Kenichiro explains that stakeholders across different segments have fundamentally different perspectives on AI and IP issues. He acknowledges that achieving consensus among all WIPO member states for international AI treaties would be a lengthy process that isn’t practical given the current maturity level of international discussions.
EVIDENCE
He mentions meeting with different stakeholders in India including publishers, music industry, and tech industries, noting that ‘even among the same segment for example publishers there are different views’ and that reaching consensus with 194 member states based on consensus principle would be ‘a long journey.’
MAJOR DISCUSSION POINT
Global IP Framework Readiness for AI-Generated Content
AGREED WITH
Speaker
Argument 3
WIPO is taking a pragmatic approach focusing on technological solutions rather than legal frameworks to help creators and tech industries coexist
EXPLANATION
Kenichiro outlines WIPO’s strategy of pursuing practical technological infrastructure solutions instead of waiting for international legal frameworks. The goal is to create systems where both creators and tech industries can coexist, even if neither side is completely satisfied.
EVIDENCE
He explains they’re looking for ‘technological infrastructure where people can live with us so that creators can be benefited or remunerated as well as tech industries can utilize those products’ and mentions launching the first meeting on March 17th.
MAJOR DISCUSSION POINT
Global IP Framework Readiness for AI-Generated Content
AGREED WITH
Anna Tumadote
DISAGREED WITH
Anna Tumadote
Argument 4
Humans have always learned by mimicking others’ work, but AI does this at unprecedented speed compared to human learning timelines
EXPLANATION
Kenichiro draws a parallel between human learning processes and AI, noting that both involve learning from existing works. However, he emphasizes that the critical difference lies in the speed of processing, with AI accomplishing in limited time what takes humans much longer.
EVIDENCE
He explains that in Japanese, ‘the term of learn originally meant mimic or copy’ and that ‘if it’s done by human, it takes time. But if it’s done by computer, it takes very, very limited time.’
MAJOR DISCUSSION POINT
Scale and Speed Differences Between Human and AI Learning
AGREED WITH
Anna Tumadote
DISAGREED WITH
Anna Tumadote
Argument 5
Global consensus among 194 countries is unrealistic; WIPO pursuing technological infrastructure solutions, with a first meeting launching March 17th
EXPLANATION
Kenichiro reiterates that achieving global harmonization through traditional treaty-making processes is not realistic given the complexity of getting 194 member states to agree. Instead, WIPO is focusing on creating technological platforms and infrastructure solutions.
EVIDENCE
He mentions that WIPO operates on consensus principle with 194 member states and announces ‘we will launch the first meeting of that next month, March 17th, which is available online.’
MAJOR DISCUSSION POINT
Global Harmonization vs Fragmentation in AI Governance
Argument 6
Human-centered approach should guide AI governance since creativity fundamentally comes from human activity
EXPLANATION
Kenichiro advocates for maintaining a human-centered approach in AI governance, emphasizing that despite technological advances, creativity remains fundamentally a human activity. This should be the guiding principle for international AI governance in creative industries.
EVIDENCE
He states ‘I think we should put human-centered approach because still creativity comes from human beings’ activity, not from the artificial intelligence.’
MAJOR DISCUSSION POINT
Guiding Principles for AI Governance
AGREED WITH
Anna Tumadote
Nicholas Granatino
7 arguments · 163 words per minute · 1122 words · 411 seconds
Argument 1
AI needs creativity and diverse cultural content in training datasets; India’s oral traditions are underrepresented compared to Hollywood content
EXPLANATION
Nicholas argues that AI models require diverse creative input and that India’s rich oral traditions are significantly underrepresented in training datasets compared to Western content like Hollywood movies or AAA games. He emphasizes that AI needs creativity rather than replacing it, and the underrepresentation has major implications for AI model responses.
EVIDENCE
He points out that India ‘has been mainly an oral tradition’ and questions ‘how much of their content have been digitized to the level of Hollywood in movies or AAA game in gaming, it is not represented’ and notes India has ‘20% of the world population.’
MAJOR DISCUSSION POINT
AI’s Impact on Cultural Diversity and Creative Output
Argument 2
India’s rich public domain heritage (itihas/epics like Mahabharata, Ramayana) represents a unique opportunity as these are living traditions still told today
EXPLANATION
Nicholas highlights that India’s ancient epics like the Mahabharata and Ramayana, while thousands of years old and in the public domain, remain living traditions actively shared by parents, grandparents, and political leaders. This creates a unique opportunity for cultural preservation and AI training.
EVIDENCE
He explains that these epics ‘are thousands of years old’ and ‘are in the public domain, but they are still living tradition. Parents and grandparents still tell their children about Lord Ram, about Ravan, about Hanuman. Prime Minister Modi speaks about it all the time as well.’
MAJOR DISCUSSION POINT
India’s Strategic Opportunity in AI Development
Argument 3
While US and Europe face copyright paralysis, India can leverage public domain content to create new IP and training datasets with 20% of world population
EXPLANATION
Nicholas sees the copyright uncertainties in US and Europe as creating a strategic advantage for India. India can use its public domain heritage to create new intellectual property and rich datasets for AI training, supported by its large population base.
EVIDENCE
He mentions that ‘nobody has actually done any work with them and created any IP on top of that’ and that Sarvam AI will help ‘digitize with their OCR model, with their voice model, all this culture which is in the public domain’ with ‘20% of the world population here in India to talk about it and create rich data sets.’
MAJOR DISCUSSION POINT
India’s Strategic Opportunity in AI Development
Argument 4
Society should recognize and reward data contributors; the question is whether we want open AI with best data or closed systems profiting from others’ work
EXPLANATION
Nicholas argues that society needs to decide whether it wants AI systems to have access to the best data openly, or whether it’s acceptable for companies to profit from others’ work without recognition. He uses the example of protein crystallographers whose decades of work contributed to Nobel Prize-winning AI research without acknowledgment.
EVIDENCE
He cites the example of Demis Hassabis winning the Nobel Prize for protein folding work that relied on 50 years of scientists putting crystal structures into the Protein Database, suggesting ‘it would have been nice of the Nobel Committee to also include the protein data bank as a recipient of that Nobel Prize.’
MAJOR DISCUSSION POINT
Ethical Inconsistencies in AI Use by Creators
Argument 5
Danger of cultural gatekeeping by handful of AI platforms, even with open source models built on culturally hegemonic content
EXPLANATION
Nicholas warns about the risk of a few AI platforms controlling cultural representation, describing a ‘Captain America hegemony’ where global culture is dominated by Western narratives. He argues that even open source solutions don’t solve this fundamental problem of cultural bias in training data.
EVIDENCE
He describes ‘the Captain America hegemony, which is basically we all grew up in France, in India, anywhere with Captain America as being this kind of powerful with all the weapons’ and notes that ‘if you actually have open source, you just have a gateway which is free to steal a corpus of creativity, which is in this Captain America hegemony.’
MAJOR DISCUSSION POINT
Risks of Cultural Gatekeeping and Platform Dominance
Argument 6
Creative processes involving hundreds of people remain essential and cannot be replaced by AI agents
EXPLANATION
Nicholas maintains optimism about human creativity’s continued importance, arguing that the complex creative processes that involve large teams of people cannot be replicated by AI agents. He sees AI as a tool that will collaborate with rather than replace human creativity.
EVIDENCE
He states ‘there is things that is pre-art, whether it’s text, whether it’s image, whether it’s video, whether it’s game, you know, that is still a creative process. Sometimes that involves hundreds of people’ and ‘it’s not that you’re going to have agents speaking to each other and creating a game. We’re very far from that.’
MAJOR DISCUSSION POINT
Risks of Cultural Gatekeeping and Platform Dominance
DISAGREED WITH
Anna Tumadote
Argument 7
Tremendous investment opportunities similar to early internet, requiring collaboration between AI tools and human creativity
EXPLANATION
Nicholas expresses optimism about investment opportunities in the AI space, comparing the current moment to the early internet era. He emphasizes that the future will involve collaboration between AI tools and human creativity, with storytelling remaining a distinctly human skill.
EVIDENCE
He states ‘it feels like the internet all over again’ and ‘it’s going to be a collaboration with these wonderful, you know, tools that we have and human creativity’, noting that ‘some people say storytelling is going to be the main skill in business. That is very much a human quality.’
MAJOR DISCUSSION POINT
Investment and Business Opportunities in AI
Speaker
7 arguments · 131 words per minute · 860 words · 390 seconds
Argument 1
The discussion should focus on practical questions rather than extensive context setting due to time constraints
EXPLANATION
The speaker emphasizes the need to efficiently use the limited 30-minute timeframe by jumping directly into substantive questions for panelists rather than spending time on background information. This reflects a preference for action-oriented discussion over theoretical framing.
EVIDENCE
The speaker states ‘Since we have a half an hour, I don’t want to waste time. I have this urge to do a lot of context setting, but I will refrain.’
MAJOR DISCUSSION POINT
Discussion Format and Time Management
Argument 2
US and Europe are facing massive uncertainty regarding AI training and copyright permissions, creating potential paralysis in development
EXPLANATION
The speaker identifies a significant challenge in Western jurisdictions where unclear legal frameworks around AI training on copyrighted content are creating development obstacles. This uncertainty is characterized as either stalling progress or creating opportunities depending on perspective.
EVIDENCE
The speaker mentions that ‘the US and Europe are currently facing massive uncertainty in terms of AI training and copyright assets and permission’ and that this is ‘either stalling, you know, development, etc.’
MAJOR DISCUSSION POINT
India’s Strategic Opportunity in AI Development
Argument 3
There is potential for creating new IP on public domain content, presenting unique opportunities for countries with rich heritage
EXPLANATION
The speaker highlights the strategic advantage that comes from having access to rich public domain cultural content that can be used to create new intellectual property. This represents a competitive advantage in the AI development landscape where training data is crucial.
EVIDENCE
The speaker notes ‘on this public domain content and body of work, you have this opportunity to create IP’
MAJOR DISCUSSION POINT
India’s Strategic Opportunity in AI Development
Argument 4
The music and film industries extensively use AI for production while creators simultaneously object to AI training on their works
EXPLANATION
The speaker points out a contradiction in creative industries where professionals use AI tools for music composition and production work, often without declaring it, while the same community protests against AI systems being trained on creative works without consent. This highlights the complex relationship between AI adoption and creator rights.
EVIDENCE
The speaker mentions ‘if I look at the music industry film industry you have composers who use AI to generate production music and a lot of musical works maybe not declared or otherwise but it’s a huge industry but at the same time the creator voice often complains about the lack of consent in AI training’
MAJOR DISCUSSION POINT
Ethical Inconsistencies in AI Use by Creators
Argument 5
Global harmonization in AI governance is essential for business operations and should be the goal despite current fragmentation trends
EXPLANATION
The speaker advocates for coordinated international approaches to AI governance, arguing that the global nature of AI business requires harmonized policies. While acknowledging that countries may pursue individual solutions in the short term, the speaker emphasizes that harmonization is necessary for businesses using AI models across jurisdictions.
EVIDENCE
The speaker states ‘I would hope that the intention is to harmonize as much as possible because I think the implications of just the AI business across the world require harmonization and scream for harmonization and so will businesses that use those models to create more IP or IP-like content’
MAJOR DISCUSSION POINT
Global Harmonization vs Fragmentation in AI Governance
AGREED WITH
Kenichiro Natsume
Argument 6
India should focus on becoming a creator rather than just consumer of AI technology by 2030
EXPLANATION
The speaker emphasizes the importance of India positioning itself as an active participant in AI development rather than merely consuming AI technologies developed elsewhere. This represents a strategic vision for India’s role in the global AI ecosystem over the next decade.
EVIDENCE
The speaker asks about ensuring ‘that we aren’t India is just not consumer of AI 2030’ and seeks advice for Indian investors and creators
MAJOR DISCUSSION POINT
India’s Strategic Opportunity in AI Development
Argument 7
A single guiding principle should govern international AI governance in creative industries for the next decade
EXPLANATION
The speaker seeks to identify a fundamental principle that could serve as the foundation for international AI governance specifically in creative sectors. This reflects the need for clear, unifying guidance amid complex and rapidly evolving AI governance challenges.
EVIDENCE
The speaker asks ‘what single principle should guide international AI governance in the creative industries over the next decade?’
MAJOR DISCUSSION POINT
Guiding Principles for AI Governance
Agreements
Agreement Points
Human-centered approach should guide AI governance
Speakers: Anna Tumadote, Kenichiro Natsume
Keeping humans at the center of AI development and governance
Human-centered approach should guide AI governance since creativity fundamentally comes from human activity
Both speakers strongly agree that humans must remain central to AI governance and development, with creativity fundamentally originating from human activity rather than artificial intelligence
AI’s massive scale creates unprecedented challenges compared to human learning
Speakers: Anna Tumadote, Kenichiro Natsume
AI’s massive scale transforms marginal problems into major issues; people are pulling back from sharing, shrinking the commons
Humans have always learned by mimicking others’ work, but AI does this at unprecedented speed compared to human learning timelines
Both speakers acknowledge that while AI and human learning processes are similar in nature, the scale and speed of AI processing creates fundamentally different challenges that require new approaches
Global harmonization through traditional treaties is unrealistic in the short term
Speakers: Kenichiro Natsume, Speaker
Different stakeholders have vastly different views; finding consensus among 194 WIPO member states for international treaties would take too long
Global harmonization in AI governance is essential for business operations and should be the goal despite current fragmentation trends
Both acknowledge that while global harmonization is desirable and necessary for business, achieving it through traditional international treaty processes is not realistic given the complexity and time required for consensus among 194 countries
Need for practical technological solutions over legal frameworks
Speakers: Kenichiro Natsume, Anna Tumadote
WIPO is taking a pragmatic approach focusing on technological solutions rather than legal frameworks to help creators and tech industries coexist
Need for more nuanced spectrum of sharing conditions from opt-out to conditional sharing with attribution or rewards
Both speakers advocate for practical, technological approaches to AI governance challenges rather than waiting for comprehensive legal frameworks, focusing on creating systems that allow different stakeholders to coexist
Similar Viewpoints
Both speakers identify fundamental ethical contradictions in how creators and society approach AI training data, highlighting the inconsistency between using AI trained on others’ work while objecting to their own work being used similarly
Speakers: Anna Tumadote, Nicholas Granatino
There is ethical inconsistency when artists use AI trained on global content while objecting to their own work being used without consent
Society should recognize and reward data contributors; the question is whether we want open AI with best data or closed systems profiting from others’ work
Both speakers express concern about the concentration of power in AI platforms and the negative impact on cultural diversity and open collaboration, whether through cultural hegemony or restrictive sharing practices
Speakers: Nicholas Granatino, Anna Tumadote
Danger of cultural gatekeeping by handful of AI platforms, even with open source models built on culturally hegemonic content
The open movement faces transformation as creators restrict sharing due to lack of consent mechanisms, breaking human collaboration
Both recognize the strategic advantage that countries with rich public domain cultural heritage have in the AI era, particularly India’s opportunity to leverage its living traditions for AI development and IP creation
Speakers: Nicholas Granatino, Speaker
India’s rich public domain heritage (itihas/epics like Mahabharata, Ramayana) represents a unique opportunity as these are living traditions still told today
There is potential for creating new IP on public domain content, presenting unique opportunities for countries with rich heritage
Unexpected Consensus
Acknowledgment of ethical inconsistencies in creator behavior
Speakers: Anna Tumadote, Speaker
There is ethical inconsistency when artists use AI trained on global content while objecting to their own work being used without consent
The music and film industries extensively use AI for production while creators simultaneously object to AI training on their works
It’s unexpected that both a Creative Commons CEO and a moderator would openly acknowledge and criticize the contradictory behavior of creators who use AI while objecting to AI training on their work. This honest assessment of ethical inconsistencies within the creative community shows remarkable candor
Optimism about AI-human collaboration despite challenges
Speakers: Nicholas Granatino, Anna Tumadote, Kenichiro Natsume
Creative processes involving hundreds of people remain essential and cannot be replaced by AI agents
The open movement faces transformation as creators restrict sharing due to lack of consent mechanisms, breaking human collaboration
Human-centered approach should guide AI governance since creativity fundamentally comes from human activity
Despite discussing numerous challenges and risks, all speakers maintain optimism about the future of human creativity and AI collaboration. This consensus on maintaining human centrality while embracing technological advancement is unexpected given the severity of the challenges they identify
Overall Assessment

The speakers demonstrate strong consensus on fundamental principles: human-centered AI governance, the need for practical technological solutions over lengthy legal processes, recognition of ethical inconsistencies in current practices, and the importance of cultural diversity in AI development. They agree on both the challenges (scale, speed, cultural hegemony) and opportunities (public domain heritage, collaboration potential) presented by AI.

High level of consensus on core principles and problem identification, with complementary rather than conflicting perspectives. The agreement spans across different stakeholder types (business, policy, advocacy) and suggests a mature understanding of AI governance challenges. This consensus provides a strong foundation for collaborative approaches to AI governance in creative industries, though implementation details may still require negotiation.

Differences
Different Viewpoints
Speed and scale of AI versus human learning processes
Speakers: Kenichiro Natsume, Anna Tumadote
Humans have always learned by mimicking others’ work, but AI does this at unprecedented speed compared to human learning timelines
AI’s massive scale transforms marginal problems into major issues; people are pulling back from sharing, shrinking the commons
Kenichiro focuses on the speed difference as the key distinguishing factor between human and AI learning, suggesting this is where we need to draw lines. Anna emphasizes that the massive scale has transformed previously manageable 5% problems into major systemic issues affecting the entire commons ecosystem.
Approach to international AI governance frameworks
Speakers: Kenichiro Natsume, Anna Tumadote
WIPO is taking a pragmatic approach focusing on technological solutions rather than legal frameworks to help creators and tech industries coexist
Need for more nuanced spectrum of sharing conditions from opt-out to conditional sharing with attribution or rewards
Kenichiro advocates for technological infrastructure solutions that allow coexistence even if neither side is completely satisfied, while Anna pushes for more sophisticated legal and normative frameworks with nuanced conditional sharing arrangements.
Optimism versus concern about AI’s impact on creativity
Speakers: Nicholas Granatino, Anna Tumadote
Creative processes involving hundreds of people remain essential and cannot be replaced by AI agents
The open movement faces transformation as creators restrict sharing due to lack of consent mechanisms, breaking human collaboration
Nicholas maintains strong optimism about human creativity’s continued importance and collaboration with AI tools, while Anna expresses significant concern about the negative transformation of the open movement and breakdown of human collaboration due to AI’s impact.
Unexpected Differences
Role of open source in addressing cultural representation
Speakers: Nicholas Granatino, Anna Tumadote
Danger of cultural gatekeeping by handful of AI platforms, even with open source models built on culturally hegemonic content
AI’s impact depends on whether models are open source with good governance or closed and opaque; currently at risk of weakening diversity
This disagreement is unexpected because both speakers generally support open approaches, but Nicholas argues that open source doesn’t solve the fundamental problem of cultural bias in training data (Captain America hegemony), while Anna sees open source with good governance as a potential solution to strengthen diversity.
Effectiveness of current copyright systems for AI
Speakers: Kenichiro Natsume, Anna Tumadote
AI can enhance copyrighted artworks but also poses threats; copyright system can still cope with AI as another disruptive technology
AI’s massive scale transforms marginal problems into major issues; people are pulling back from sharing, shrinking the commons
Unexpected because both are IP/legal experts but have fundamentally different assessments. Kenichiro believes existing copyright systems can handle AI as they have other disruptive technologies, while Anna argues that AI’s scale has fundamentally changed the nature of the problems beyond what copyright can handle.
Overall Assessment

The speakers show significant disagreement on fundamental approaches to AI governance, the adequacy of existing legal frameworks, and the level of optimism about AI’s impact on creativity and cultural diversity. While they agree on keeping humans central, they diverge sharply on implementation strategies and urgency of concerns.

Moderate to high disagreement with significant implications for AI governance approaches. The disagreements suggest that stakeholders are still far from consensus on basic questions about whether existing systems can handle AI’s challenges, whether technological or legal solutions are preferable, and how urgent the threats to creative ecosystems really are. This level of disagreement could lead to fragmented approaches and delayed coordinated responses to AI governance challenges.

Partial Agreements
All speakers agree that humans should remain central to AI governance and that human creativity is fundamental. However, they disagree on implementation approaches – Kenichiro favors technological solutions, Anna advocates for nuanced legal frameworks, and Nicholas focuses on cultural representation and investment opportunities.
Speakers: Anna Tumadote, Kenichiro Natsume, Nicholas Granatino
Keeping humans at the center of AI development and governance
Human-centered approach should guide AI governance since creativity fundamentally comes from human activity
Creative processes involving hundreds of people remain essential and cannot be replaced by AI agents
Both speakers acknowledge ethical inconsistencies in how creators use AI while objecting to AI training on their work. However, Anna focuses on the labor and replacement concerns versus copyright issues, while Nicholas emphasizes the need for recognition and reward of data contributors.
Speakers: Anna Tumadote, Nicholas Granatino
There is ethical inconsistency when artists use AI trained on global content while objecting to their own work being used without consent
Society should recognize and reward data contributors; the question is whether we want open AI with best data or closed systems profiting from others’ work
Both acknowledge the need for coordinated approaches but disagree on feasibility and methods. The Speaker emphasizes harmonization as essential for business operations, while Kenichiro argues it’s unrealistic through traditional treaty-making and advocates for pragmatic technological solutions.
Speakers: Kenichiro Natsume, Speaker
Global consensus among 194 countries is unrealistic; WIPO pursuing technological infrastructure solutions for March 2024 launch
Global harmonization in AI governance is essential for business operations and should be the goal despite current fragmentation trends
Takeaways
Key takeaways
AI’s impact on cultural diversity depends on whether models are open source with good governance or closed and opaque, with current risk of weakening diversity
India has a strategic opportunity to leverage its rich public domain heritage (itihas/epics) while US and Europe face copyright paralysis
Global IP frameworks are not ready for AI-generated content due to vastly different stakeholder views and the impracticality of achieving consensus among 194 WIPO member states
There are ethical inconsistencies when creators use AI trained on global content while objecting to their own work being used without consent
The fundamental difference between human and AI learning is speed: AI processes at unprecedented scale compared to human learning timelines
The open movement faces transformation as creators restrict sharing due to lack of consent mechanisms, leading to a shrinking of the commons
A human-centered approach should guide AI governance since creativity fundamentally comes from human activity
AI represents tremendous investment opportunities similar to the early internet, requiring collaboration between AI tools and human creativity
Resolutions and action items
WIPO will launch the first meeting of its technological infrastructure solution approach on March 17th (available online)
Focus on developing technological platforms that help creators get rewarded while allowing tech companies to understand what content can be used
Creative Commons and other stakeholders to participate in WIPO’s March meeting
Unresolved issues
Where to draw the line between acceptable AI learning and human learning given the speed difference
How to achieve global harmonization vs accepting fragmentation in AI governance approaches
How to prevent cultural gatekeeping by a handful of AI platforms while maintaining open innovation
How to balance creator consent and attribution with the practical needs of AI development
How to maintain human collaboration when creators are becoming more restrictive with sharing
How to ensure diverse cultural content (like India’s traditions) gets properly represented in AI training datasets
How to reward data contributors and original creators in AI systems
Suggested compromises
WIPO’s pragmatic approach focusing on technological solutions rather than legal frameworks to help creators and tech industries ‘live with’ each other
Creating a spectrum of sharing conditions from opt-out to conditional sharing with attribution, rewards, or other requirements (the ‘yes if’ approach)
Technological infrastructure where creators can be appropriately rewarded while tech companies can easily recognize what is opted in/out
Accepting that stakeholders may need to be ‘unhappy to some extent equally’ and compromise with each other
Collaboration between AI tools and human creativity rather than replacement of human creative processes
Thought Provoking Comments
Nicholas’s observation about India’s oral tradition and underrepresentation in AI training data: ‘if we look at the case of India, which has been mainly an oral tradition, and we look also at how much of their content have been digitized to the level of Hollywood in movies or AAA game in gaming, it is not represented. And that has huge implication in terms of these models and whether or not they enhance creativity or whether or not they need creativity.’
This comment reframes the AI-creativity debate from a global perspective, highlighting how cultural representation in training data creates systemic biases. It introduces the crucial concept that AI models reflect the digital dominance of certain cultures while marginalizing others, despite their rich traditions.
This shifted the conversation from abstract discussions about AI’s impact on creativity to concrete examples of cultural inequality. It led the moderator to follow up with questions about India’s strategic opportunity and influenced subsequent discussions about public domain heritage as a competitive advantage.
Speaker: Nicholas Granatino
Ken’s insight about the speed differential: ‘The big difference is that if it’s done by human, it takes time. But if it’s done by computer, it takes very, very limited time. And that’s a big difference. So what it’s doing is essentially more or less the same, but the speed is completely different.’
This comment cuts to the heart of why AI feels different from traditional human learning and creativity. It’s not the process that’s fundamentally different, but the scale and speed, which creates entirely new ethical and practical challenges.
This observation helped crystallize why existing copyright frameworks struggle with AI. It provided a foundation for Anna’s subsequent comment about how AI amplifies problems that were previously marginal (5% problems becoming systemic), and influenced the discussion toward finding new frameworks rather than just applying old ones.
Speaker: Kenichiro Natsume
Anna’s warning about the shrinking commons: ‘we are actively seeing the shrinking of the commons already. Like, this is a really bad outcome… if I write something, you know what you can do with it because it’s under this license and so on and so forth. But now we’re seeing creators say, no, no, I’m going to go more restrictive. And that breaks the human collaboration element.’
This comment reveals an unintended consequence of AI development – that fear of AI exploitation is causing creators to retreat from open sharing, which ironically harms human-to-human collaboration. It shows how AI issues cascade into broader creative ecosystem problems.
This shifted the discussion from theoretical frameworks to immediate, observable consequences. It demonstrated that the AI debate isn’t just about future possibilities but about current damage to collaborative creative practices, adding urgency to finding solutions.
Speaker: Anna Tumadote
Nicholas’s ‘Captain America hegemony’ concept: ‘There’s something I call the Captain America hegemony, which is basically we all grew up in France, in India, anywhere with Captain America as being this kind of powerful… And if you actually have open source, you just have a gateway which is free to steal a corpus of creativity, which is in this Captain America hegemony.’
This metaphor powerfully illustrates how even open-source AI models can perpetuate cultural dominance if they’re built on datasets that already reflect Western/American cultural hegemony. It challenges the assumption that ‘open’ automatically means ‘fair’ or ‘diverse.’
This concept added a critical layer to the open vs. closed source debate, showing that openness alone doesn’t solve representation problems. It influenced the discussion toward considering not just access to AI tools, but the cultural foundations they’re built upon.
Speaker: Nicholas Granatino
Ken’s pragmatic approach to international governance: ‘our idea is let’s think about something more practical, more technological, to see if there is any technological solution possible so that both creators’ side and industry side, can live with. I would say can live with. They don’t have to be necessarily happy very much. They have to be unhappy to some extent equally.’
This comment is refreshingly honest about the realities of international consensus-building and offers a pragmatic alternative to waiting for perfect legal frameworks. The phrase ‘unhappy to some extent equally’ captures the essence of workable compromise.
This shifted the conversation away from idealistic solutions toward practical interim measures. It influenced Anna’s subsequent discussion of the ‘yes if’ framework and demonstrated that technological solutions might bridge gaps that legal frameworks cannot currently address.
Speaker: Kenichiro Natsume
Overall Assessment

These key comments transformed what could have been a theoretical discussion into a nuanced exploration of AI’s real-world cultural and economic implications. Nicholas’s insights about cultural representation and hegemony provided concrete examples of how AI perpetuates inequalities, while Ken’s observations about speed and pragmatic governance offered frameworks for understanding why traditional approaches fall short. Anna’s warning about the shrinking commons added urgency by showing immediate negative consequences. Together, these comments created a multi-layered conversation that moved beyond simple pro/anti-AI positions to examine the complex interplay between technology, culture, economics, and human collaboration. The discussion evolved from asking whether AI helps or hurts creativity to exploring how we can shape AI development to preserve cultural diversity and human agency.

Follow-up Questions
How do we make sure that India, with its phenomenal history and culture and 20% of the world population, punches at its weight in the training data sets of these AI models?
This addresses the underrepresentation of Indian cultural content in AI training datasets and the need for strategic approaches to ensure cultural diversity in AI models
Speaker: Nicholas Granatino
What technological solutions can be developed so that both creators’ side and industry side can live with each other in the AI ecosystem?
WIPO is exploring practical technological infrastructure solutions as an alternative to lengthy international legal negotiations, focusing on ways creators can be remunerated while tech industries can utilize content
Speaker: Kenichiro Natsume
Where do we have to draw the line between human learning (which takes time) and AI learning (which is very fast)?
This addresses the fundamental difference in speed between human and AI learning processes and the need to establish boundaries for acceptable AI behavior
Speaker: Kenichiro Natsume
How can we develop consent mechanisms and agency for creators over how their works are used in AI training?
This addresses the current lack of consent mechanisms that is causing creators to pull back from sharing their works, leading to a shrinking of the commons
Speaker: Anna Tumadote
What normative frameworks, legal, or technical solutions are needed to handle the scale and potential nefarious uses of openly shared creative works in AI?
The massive scale of AI has amplified problems that were previously marginal, requiring new approaches beyond traditional copyright frameworks
Speaker: Anna Tumadote
How can we prevent cultural gatekeeping by a handful of AI platforms while leveraging public domain content for competitive advantage?
This addresses the tension between using public domain cultural heritage as a strategic advantage and avoiding concentration of cultural power in few AI platforms
Speaker: Speaker (moderator) to Nicholas Granatino
What spectrum of sharing conditions (‘yes if’ scenarios) can be developed to give creators more nuanced choices about AI training use?
This explores the need for more granular consent mechanisms beyond simple opt-in/opt-out, including conditional sharing arrangements
Speaker: Anna Tumadote
How can global harmonization of AI and IP frameworks be achieved given the realistic challenges of getting 194 countries to consensus?
This addresses the practical challenges of international coordination on AI governance while recognizing the business need for harmonized approaches
Speaker: Speaker (moderator) to Kenichiro Natsume
What specific actions should Indian investors and creators take to ensure India is not just a consumer of AI by 2030?
This seeks concrete recommendations for positioning India as an AI creator and innovator rather than merely a consumer market
Speaker: Speaker (moderator) to Nicholas Granatino

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

India’s AI Future Sovereign Infrastructure and Innovation at Scale


Session at a glance: summary, keypoints, and speakers overview

Summary

The session opened with the launch of Amrita Vishwa Vidyapeetham’s sovereign AI research report and a panel of industry and academic leaders, including representatives from NASCOM AI, IIT Bombay, Yotta, Tata Communications, HCL Software and GenSpark [1][3-5]. Ankit Bose, head of AI at NASCOM, introduced the discussion and asked each panelist to identify the single most important action for India to build sovereign AI capability [12][44-45].


Sunil Gupta highlighted that India’s main bottleneck is the lack of abundant compute, noting that the country had insufficient GPU resources until recent partnerships secured nearly 10,000 GPUs; a government-backed shared compute facility now aims to reach 38,000, with an additional 20,000 announced [54-57][66-68][235-237]. He argued that scaling this infrastructure and subsidising the first wave of model inference are essential for widespread adoption, because millions of GPUs will be needed for both training and serving AI services across sectors [239-242][247-250][254-257].


Kalyan Kumar stressed that sovereign AI also requires a robust data layer, describing HCL’s acquisition of vector-DB technology, the upcoming localized vector AI engine, and the need for data platforms, catalogs and contracts to ensure high-quality, distributed data for edge inference [96-104][106-108]. He warned that focusing only on hardware and models ignores the necessity of a data-centric approach, which he sees as the foundation for scaling AI applications [105-107].


Ganesh Ramakrishnan advocated interoperability at every stack layer, arguing that it enables participation, alternative solutions and scalable collaboration among academia, industry and government [151-156][162-168]. He linked interoperability to data ownership, proposing a data-product and data-catalog framework that respects creators’ rights and facilitates secure sharing [170-176][178-181]. He also emphasized co-design and collaboration across institutions, citing his consortium of nine academic bodies and recent international MOUs as examples of how joint effort can accelerate model development [191-199][203-209].


Brandon Mello identified three adoption barriers: “ROI invisibility” where CFOs cannot quantify returns, data-trust and compliance friction, and the lack of executive sponsorship, all of which stall AI pilots from reaching production [119-124][129-134][139-142]. He suggested that clear ROI metrics, streamlined governance and dedicated champions are needed to move projects beyond the pilot stage [125-128][135-138][140-142].


The panel collectively agreed that building sovereign AI requires coordinated compute provisioning, interoperable data infrastructure, skill development for engineers and researchers, and sustained government-industry collaboration [216-222][266-274][426-432]. The discussion concluded with a call for continued collaboration, the upcoming NASCOM “7 AI” initiative, and the intention to draft a national sovereign AI and AGI roadmap, underscoring the strategic importance of these efforts for India and the Global South [414-419][426-433].


Keypoints


Major discussion points


Compute scarcity is the bottleneck for sovereign AI in India.


Sunil Gupta stresses that while India has talent, data, and market size, it lacked sufficient GPU-based compute, which he and his company have been trying to supply (e.g., “the core problem … how do you make compute available in an abundant way” [46-60]; “we are running almost 10,000 chips” [66-68]; “India will need multiple million GPUs” [75-78]). He later describes the government-backed shared-compute facility that now aggregates ~38,000 GPUs and is being expanded (e.g., “the shared compute facility … combination of the compute capacity created by multiple providers” [224-236]; “India will be going to 50-60,000 GPUs … we need millions of GPUs” [239-258]).


A robust data stack and edge infrastructure are essential alongside hardware.


Kalyan Kumar outlines the need for a centralized-yet-distributed data platform, vector databases, and edge-ready inference, arguing that “the data platform is going to become very important” for scaling AI deployments ([96-108]).


Interoperability and ecosystem collaboration are critical for scaling and inclusivity.


Ganesh Ramakrishnan calls for “interoperability at every layer” to enable participation, alternative solutions, and data-product ecosystems ([151-168]). He later adds that co-design, academic-industry consortia, and open data contracts are the “biggest takeaway” for building India’s AI moat ([193-205]).


Adoption failures stem from organizational and ROI challenges, not just technology.


Brandon Mello points out that 95% of AI pilots never reach production because of “ROI invisibility,” “data and trust and compliance friction,” and the “champion problem” (lack of executive sponsorship) [119-143]. He later reinforces that solving real-world use cases, consolidating tools, and addressing language and data-security concerns are needed for mass adoption [335-351].


Skill development and a shift from services to building IP are required for long-term sovereignty.


Kalyan emphasizes moving from a service-oriented model to building proprietary products, hiring “engineers, not just coders,” and investing in deep research (including quantum compute) to create home-grown IP [266-306]. NASCOM’s parallel effort to up-skill 150,000 developers and revamp curricula is cited as a concrete step [312-327].


Overall purpose / goal of the discussion


The panel was convened to launch the Sovereign AI Research Report (Amrita Vishwa Vidyapeetham) and to chart a coordinated roadmap for India, and the broader Global South, to achieve AI sovereignty. Participants were asked to identify the single most impactful action their domain could take, with the aim of aligning industry, academia, and government around concrete priorities (e.g., compute availability, data infrastructure, interoperability, talent, and adoption pathways) that will enable India to develop, deploy, and control its own AI models and services.


Overall tone and its evolution


Opening (0-5 min): Formal and celebratory, with introductions and acknowledgments of the report launch.


Mid-session (5-30 min): Shifts to a problem-solving tone; speakers present urgent challenges (compute shortage, data stack gaps) and propose strategic solutions, often with a sense of urgency (“we need millions of GPUs,” [239-258]).


Later segment (30-45 min): Becomes collaborative and optimistic, emphasizing interoperability, consortium building, and skill development as enablers.


Closing (45-55 min): Returns to a call-to-action tone, urging participants to contribute to shared resources (QR code, MOU) and stressing the long-term, nation-building mission of sovereign AI.


Overall, the discussion moves from introductory formality to a focused, solution-oriented dialogue, ending with a unifying, forward-looking call for collective action.


Speakers

Speaker 1


Role / Title: Moderator / Event host (introduced the panel and announced the report launch)


Sunil Gupta


Role / Title: Co-founder, MD and CEO of Yotta (transcribed in places as “IOTA”)


Areas of Expertise: Data centre operations, sovereign cloud infrastructure, large-scale GPU compute for AI models


Affiliation: Yotta (transcribed in places as “IOTA”) – runs data-centre campuses and built the Sovereign Cloud in India [S1][S2]


Ganesh Ramakrishnan (also listed as Professor Ganesh Ramakrishnan)


Role / Title: Professor, IIT Bombay (distinguished panelist)


Areas of Expertise: Sovereign AI, foundation model development, interoperability, multilingual AI for India


Affiliation: IIT Bombay [S6][S7]


Ankit Bose


Role / Title: Head of AI, NASCOM


Areas of Expertise: AI strategy and implementation for NASCOM, developer enablement, AI education initiatives


Kalyan Kumar


Role / Title: Chief Product Officer (CPO), HCL Software


Areas of Expertise: Enterprise software products, sovereign-by-design software, data platforms, vector databases, AI infrastructure


Brandon Mello (referred to as Brenno Mello)


Role / Title: Founding GTM Executive, GenSpark (Genspark.ai)


Areas of Expertise: AI product commercialization, go-to-market strategy, enterprise AI adoption, agentic AI for knowledge workers [S14][S15]




Additional speakers:


Professor Suresh – Mentioned in the opening remarks as a professor invited to the stage; no further details provided.


Bharat Jain – Panelist; no title or affiliation specified in the transcript.


Bhaskar Gorti – EVP, Tata Communications (listed in the introductory panel lineup).


Full session reportComprehensive analysis and detailed insights

The session opened with a formal inauguration of the Sovereign AI Research Report produced by Amrita Vishwa Vidyapeetham. Speaker 1 thanked the audience, introduced the report’s release and invited senior representatives from Amrita – Pro-Vice-Chancellor Dr Manisha V Ramesh and the head of the AI-Safety Research Lab Dr Shiva Ramakrishnan – to the stage, followed by the panelists: Professor Ganesh Ramakrishnan (IIT Bombay), Bharat Jain, Sunil Gupta (Yotta), Bhaskar Gorti (Tata Communications), Kalyan Kumar (HCL Software) and Brenno Mello (GenSpark.ai) [1][3-5][12-18].


Ankit Bose, head of AI at NASCOM, opened the discussion by noting the successful launch and asking each participant to identify the single most important action India should take to build sovereign AI capability for the nation and the Global South [8-13][44-45].


Compute scarcity was identified as the primary bottleneck. Sunil Gupta explained that, although India possesses talent, data and a massive market, it lacks the specialised GPU-based compute required for modern AI. He framed the core problem as “how do you make compute available in an abundant way so that it becomes a hygiene factor” [46-60]. By the time of the panel his company was operating “almost 10,000 chips” and had trained the majority of the sovereign models now being released [66-68]. He warned that India will need multiple million GPUs to support both training and inferencing at scale [75-78]. To address this, the government has created a shared-compute facility that aggregates capacity from multiple providers, currently totalling about 38,000 GPUs, with an additional 20,000 announced [224-236][237]. Gupta stressed that this facility must be expanded to “50,000-60,000 GPUs” and ultimately to “millions of GPUs” to meet the demands of a billion-plus user base, especially as AI in India will be largely voice-first and accessed on low-end devices [239-258][244-247].


Turning to the data stack, Kalyan Kumar highlighted HCL’s acquisition of vector-DB technology (Actian’s Ingres engine and a Dutch CWI asset) and announced a forthcoming “localized vector AI engine” designed for edge deployment [96-104]. He argued that “the data platform is going to become very important” because AI applications will only scale if they are built on high-quality, well-catalogued data, with data products, contracts and metadata forming the foundation for trustworthy AI [105-108]. He also emphasized the need for a skill shift from coders to engineers and outlined the joint venture with Foxconn, India Chips Limited, to build a 16/32 nm fab, describing it as “patient capital” that will secure future compute capacity even though the fab will take five years to become operational [266-292][441-447].
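As an illustration of the vector-database idea KK describes, the sketch below shows a minimal brute-force similarity search. The store contents, embedding values, and function names are all invented for illustration; a production vector engine of the kind discussed here would use learned, high-dimensional embeddings and approximate-nearest-neighbour indexing (e.g. HNSW) rather than a linear scan.

```python
import math

# Toy in-memory vector store: id -> embedding. These 3-d vectors are invented;
# real embeddings come from a model and have hundreds of dimensions.
store = {
    "doc_hindi": [0.9, 0.1, 0.0],
    "doc_tamil": [0.1, 0.9, 0.0],
    "doc_mixed": [0.6, 0.6, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, k=2):
    """Brute-force top-k search -- viable on an edge device with a small store."""
    ranked = sorted(store, key=lambda doc: cosine(query, store[doc]), reverse=True)
    return ranked[:k]

print(nearest([1.0, 0.0, 0.0]))  # most similar documents first
```

The design point KK makes still applies at this scale: the quality of what `nearest` returns depends entirely on the quality and cataloguing of the data placed in the store.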


Professor Ganesh Ramakrishnan expanded the discussion to interoperability across the entire AI stack. He asserted that “interoperability at every layer encourages participation” and enables alternative solutions, scale-out architectures and the ability to balance fidelity, latency, sensitivity and specificity [151-160]. Ganesh linked interoperability to a data-product ecosystem, proposing “data catalogs and data contracts” that respect the creator’s rights (the principle “jiska data uska adhikar”, roughly “whoever’s data, theirs is the right”) and facilitate secure sharing [165-176][178-181]. He illustrated the concept with his own consortium of nine academic institutions, which co-designs models such as a 22-language speech-to-text system using a mixture-of-experts architecture, thereby creating a “voice-first, multilingual AI” that can run on feature phones [191-209][212-213][244-247].
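The mixture-of-experts arrangement Ganesh describes, where a router sends each input to a small subset of experts and related languages can share an expert, can be sketched roughly as below. The expert functions and gate scores are invented placeholders; in the actual model both the router and the experts are learned neural networks, not fixed formulas.

```python
# Minimal sketch of top-k mixture-of-experts routing (illustrative only).
EXPERTS = {
    "lang_indo_aryan": lambda x: [v * 1.0 for v in x],  # e.g. shared by Hindi/Marathi
    "lang_dravidian":  lambda x: [v * 2.0 for v in x],
    "domain":          lambda x: [v * 3.0 for v in x],
}

def route(x, gate_scores, top_k=2):
    """Combine the top-k experts, weighted by their normalised gate scores."""
    top = sorted(gate_scores, key=gate_scores.get, reverse=True)[:top_k]
    total = sum(gate_scores[e] for e in top)
    out = [0.0] * len(x)
    for name in top:
        w = gate_scores[name] / total          # normalise over selected experts
        expert_out = EXPERTS[name](x)
        out = [o + w * v for o, v in zip(out, expert_out)]
    return out

# Invented gate scores favouring the Indo-Aryan and domain experts.
print(route([1.0, 2.0], {"lang_indo_aryan": 0.6, "lang_dravidian": 0.1, "domain": 0.3}))
```

Only the selected experts run per input, which is why this architecture lets a single multilingual model stay cheap enough to serve at population scale.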


Brenno Mello shifted the focus to adoption barriers. Citing an MIT report, he noted that “95% of AI pilots never make it to real production” and identified three systemic obstacles: “ROI invisibility” – CFOs cannot quantify returns, leading to stalled pilots; “data-trust and compliance friction” caused by departmental silos; and the “champion problem” – a lack of executive sponsorship [119-124][129-134][139-142]. He suggested that clear ROI metrics, streamlined governance and dedicated champions are essential to move projects beyond proof-of-concept [125-128][135-138][140-142].
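The “ROI invisibility” obstacle comes down to having a measured baseline. A back-of-the-envelope calculation like the one below (all figures are invented for illustration) is the kind of artefact Mello suggests a CFO needs before a pilot can graduate to production.

```python
def pilot_roi(baseline_hours, ai_hours, hourly_cost, annual_tool_cost, tasks_per_year):
    """Annual ROI of an AI tool: (labour cost saved - tool cost) / tool cost."""
    saved = (baseline_hours - ai_hours) * hourly_cost * tasks_per_year
    return (saved - annual_tool_cost) / annual_tool_cost

# Invented example: a task drops from 3h to 1h at $40/h, 500 tasks/year,
# against a $20,000/year tool subscription.
roi = pilot_roi(3.0, 1.0, 40.0, 20_000.0, 500)
print(f"{roi:.0%}")  # saved = 2 * 40 * 500 = $40,000 -> (40,000 - 20,000) / 20,000 = 100%
```

The point is not the arithmetic but the baseline: without `baseline_hours` measured before the pilot, none of the other inputs can be defended in a budget review.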


Summarising these insights, Ankit emphasised that closely collaborating teams with a single point of view and executive sponsorship are required to overcome adoption challenges [144-146].


The conversation returned to the theme of compute as a shared national commodity. Ankit asked whether the country could treat compute like a utility, with the government coordinating providers to ensure low-price, abundant access [215-222]. Gupta affirmed that the current empanelment model already creates a “shared-compute facility” and argued that the government should also subsidise the first inferencing cycle of sovereign models to catalyse early revenue-generating use cases [236-242]. He reiterated that India creates and consumes “20% of the world’s data” yet only “3% is hosted in India”, underscoring the urgency of domestic infrastructure [254-257].


Skill development and the shift from services to indigenous IP were addressed by Kalyan. He traced HCL’s evolution from a service-oriented firm to a product-builder, noting the need for “engineers, not just coders”, and for research in fundamental science such as quantum computing [266-292][298-306]. The India Chips Limited joint-venture was presented as a long-term investment in domestic compute capacity [441-447][450-453].


Complementing this, Ankit described NASCOM’s ambition to up-skill 150,000 developers within six months, rewrite B.Tech/M.Tech curricula and introduce specialisations that produce “smarter engineers” capable of building sovereign AI [312-320][326-327].


Across the panel there was strong agreement on several pillars: (1) the necessity of massive, affordable GPU compute delivered through a shared-compute facility; (2) the importance of a modern, interoperable data stack with provenance, catalogs and contracts; (3) the need for collaborative, co-design ecosystems that span academia, industry and government; (4) the vision of a voice-first, multilingual AI serving billions; and (5) the imperative of human-in-the-loop, ethically aligned AI [46-60][96-108][151-168][191-209][244-247][371-376][385-386].


Rather than conflicting, the panelists offered complementary perspectives: Gupta emphasized expanding the shared-compute facility, while Ganesh stressed that interoperability and scale-out architectures are essential to reach India’s diverse population [224-236][151-160]. On funding, Gupta called for government support for the first inferencing phase, whereas Kalyan focused on talent development and the India Chips Limited venture rather than direct subsidies [236-242][266-292][441-447]. On talent strategy, Ankit’s mass-upskilling plan contrasted with Kalyan’s call for a smaller, highly skilled engineering cohort [312-320][287-292]. Regarding data, Ganesh emphasized ownership, data catalogs and contracts rather than monetisation [165-176].


The panel concluded with a call to action. NASCOM announced the forthcoming “7 AI” initiative, a draft national sovereign-AI and AGI roadmap (accessible via a QR code), and the signing of an MOU between Amrita Vishwa Vidyapeetham and NASCOM to deepen collaboration [414-424][426-433][454-455]. Participants were urged to provide feedback, stay for a group photo and continue the dialogue.


In summary, the consensus was clear: India must combine massive, affordable compute, interoperable data infrastructure, skilled talent, and coordinated public-private partnership to achieve sovereign AI for the nation and the Global South.


Session transcriptComplete transcript of the session
Speaker 1

Thank you. Thank you. Hello and good afternoon, everyone. Thank you for joining us for this session on sovereign AI for India. Before we begin the panel discussion, we are happy to announce that there will be a launch of the sovereign AI research report by Amrita Vishwa Vidyapeetham. May I invite the following representatives to kindly join us on stage for the release of the report. From Amrita, we would like to invite Pro-Vice-Chancellor Dr. Manisha V. Ramesh and, if available, head of the AI safety research lab Dr. Shiva Ramakrishnan, and any other representatives from Amrita Vishwa Vidyapeetham that you would like to invite on stage, sir. Alright. Alright, Professor Suresh, if we could please have you on stage. I would like to invite Mr.

Ankit Bose, Head NASCOM AI, on stage as well. Thank you so much. Yeah, yeah, absolutely. You can take a seat, sir, if you want. Thank you. Thank you, everyone. We now move into the panel discussion. To guide this conversation, we are joined by Mr. Ankit Bose, Head, NASCOM AI. Joining him today are our distinguished panelists: Professor Ganesh Ramakrishnan from IIT Bombay, Bharat Jain, Mr. Sunil Gupta, co-founder, MD, and CEO of Yotta, Mr. Bhaskar Gorti, EVP, Tata Communications, Mr. Kalyan Kumar, CPO, HCL Software, and Mr. Brenno Mello, founding GTM executive, GenSpark. Ankit, over to you. Professor Ganesh will be shortly joining us in two minutes. Thank you.

Ankit Bose

So hi everyone, I think we had a good launch and we have a very strong panel. Ganesh was on the way and he is still stuck in the traffic; he is walking in. So meanwhile we start the discussion. I think, you know, happy to have a very strong panel. So why don’t we do this, we start with the introductions, right? I think Kalyan, we can start with your quick introduction. Then Sunil and then Brenno.

Kalyan Kumar

Yeah, hi, Kalyan Kumar, call me KK. I run the software product business for HCL, HCL Software. We are the largest India-headquartered enterprise B2B software company with about 10,000 customers and about 1.5 billion dollars of revenue. And very intricately involved in building software products which are sovereign by design.

Sunil Gupta

Hello, good afternoon. Good afternoon. Good afternoon. My name is Sunil Gupta. I am co-founder and CEO of Yotta. We run data center campuses. We have built the Sovereign Cloud in India, which is running a whole lot of mission-critical Government of India applications. Recently, we migrated Bhashini from a hyperscale cloud to our Sovereign Cloud. Our claim to fame in the last two years is that we have got thousands of NVIDIA GPU chips in India. And all the models which you are hearing getting launched in this summit, MITS, the Sarvam model, IIT Bombay’s BharatGen model or the Socket model, they all have been trained on our GPU clusters, and now they are being made available for public use.

Thank you.

Brandon Mello

Hello. Good afternoon. My name is Brandon Mello. I work for Genspark.ai, a Palo Alto-based company. We have been around for about 10 months. We are the fastest-growing AI company right now in the world. We just broke $200 million in ARR. Our solution has been incredibly well received and adopted in the market. India is our third largest market, and our solution is to drive adoption from the bottom up by bringing agentic AI to the knowledge worker. Thanks for letting me be here.

Ankit Bose

Great, great, great. And hi, folks. I’m Ankit Bose. I head AI for NASCOM. So, whatever NASCOM does in AI, I support that, I lead that, right? And we will be joined by Ganesh, who is from BharatGen. He’s leading the, you know, sovereign AI model building effort in the country, right? So, meanwhile, let’s start. I think, Sunil, let me start with you, right? The first question I think I would want to ask after five days of immense brainstorming around, you know, AI for the country, AI for the world, right? You know, what is the top thing you say which, you know, India has to do, right, to build its sovereign capability, not only for the country, plus for the Global South?

Sunil Gupta

Yeah. Ankit, if I take everybody just two years or maybe two and a half years down the line, when ChatGPT got on the world scene, basically AI capability came into consumer hands. A big debate happened in India, obviously in government circles, industry circles, telecom circles, technology circles, everywhere: that while India has got everything which is needed to succeed in AI, like we have been software and services leaders for the last three decades. We have a startup ecosystem. On skill-set indexes of mathematics, science, engineering, we are always the best. As a market, we are literally close to 1 billion people carrying smartphones, creating and consuming content. AI ultimately resulted, in most of the cases, you know, in some apps which will be giving some productivity to us.

So both on the demand side and the supply side, including data sets, like India will have the best data sets available. So everything India has, but what India was not having at that time was compute. Because AI does not run on regular data centers or regular CPU compute; it requires specialized GPU compute. So I would say that the biggest problem, and of course you have to take care of the entire stack, models, data sets, applications, everything, but the core problem to solve for taking AI to the masses was: how do you make compute available in an abundant way so that we don’t think of it. That should become just a hygiene factor which is always available.

And that’s the problem we tried to solve. You know, way back at that time Jensen was in India. I happened to get to meet him and he said, we as NVIDIA are truly committed to India. We can extend your priority allocation. We can give you engineering support, everything. But somebody has to take a step forward of not only putting in your data centers and power and everything, but you also need to put in chips, and we will give you everything. And from there to now, today we are running almost 10,000 chips. You know, as I said, the majority of the models which you are hearing, sovereign models getting launched in India, you know, they have been trained on our GPUs.

But the real thing, I would say, is starting now. Many of these models are great; you must have heard of Sarvam models beating Gemini and ChatGPT on many of the benchmarks. And they are making them absolutely for India use cases, like OCR, you know, the handwritten notes and all that, how do you convert them and all that stuff. So these are real India purpose-built use cases and models. When they start scaling, when they start getting adopted by the masses... we have seen how one UPI changed our lives. Imagine we have a UPI in 50 different sectors in the country; a 50-UPI movement will come into India. At that time, the number of GPUs required will be in the millions. Today we are happy that as a country we have X thousand GPUs.

But if a single company like SpaceX or like Meta can have 1 million GPUs, India as a country requires multiple million GPUs. So while we are working on all the upper layers of the stack, and Indians are very good at that, models, data sets, applications, we need to solve this issue. We are taking care of infrastructure problems. We are taking care of railways and roadways and airports. We also need to create this digital infrastructure, take care of that, make it available abundantly to every startup, every, I would say, academic community. We make it available at a very low price. The Government of India's IndiaAI Mission is doing a yeoman's role. On one side, they have asked people like us, incentivized us, to invest into the GPUs.

But they are taking GPUs from us, putting in their own money, putting in their own subsidy, and then giving it to the Sarvams and IITs and Sockets of the world. And they say: now you don’t have to bother about money, just go and make India’s best model. And the result is there to see: in two years, India has come a long way, and we have a long way to go. The compute problem has to be solved.

Ankit Bose

Great. Thank you. Thank you, Sunil. Same question to you, KK. You know, what is the one thing you feel can add the edge, right, to the whole thing?

Kalyan Kumar

When you look at sovereign, I think the Minister of Electronics and IT, Vaishnaw ji, was mentioning the five-layer stack, right? And that’s where, for what Sunil mentioned, in an easier way, I use the word infrastructure, which can combine energy, power, cooling, the whole stack. So that’s providing that layer. And then there is the whole model piece. I think as you train, and when you start to deploy at scale, a couple of things become very interesting. You need to start to also build a data stack: data platforms, vector DBs, edge vectors. I personally think you can do as much centralization as you want; the way the data consumption model is going, it is going to get highly distributed, going to go down into the edge, correct? So you need a very different kind of inferencing and those capabilities. So you need a data layer. Something which we are doing is very interesting: outside of Oracle and IBM, the only other company which has all the patents for databases is us, because we acquired Actian.

So Actian owns the original patent of Ingres. And every derivative today, whether it is Postgres or any of them, is basically an Ingres query-processor derivative, including SQL Server and others. Like that, we also acquired an asset from CWI in the Netherlands, so we have a VectorDB, the original vector engine. So we’ve been building a lot of that asset portfolio, HDB; now, in April, we’re going to release a localized vector AI engine, which again can run on... because as the AI PCs become more and more common, edge becomes more and more important. So, building that, and building the data disciplines. I think that’s a very important layer. A lot of times what happens is we worry about infrastructure, and then we think about the model, and then the app.

The data platform is going to become very important, because as we’re building the data platform, the enterprise will only scale if you get your data-centric approach: data products, data contracts, data catalogs and those kinds of things. Because finally the AI use case is going to be built on how good the quality of your data is. Yeah.

Ankit Bose

Great point. I think compute, data, a data stack for the country, I think very important. Let me come to Brenno. Again, the same question, right? If India has to build sovereign AI for the country and the Global South, what’s the top one thing you will say which will help the whole cause?

Brandon Mello

Yeah, so it’s interesting. MIT last year ran a big report and they said 95% of AI pilots actually never made it to real production, right? So in my point of view, this is never really a tech problem; it’s really a production problem, right? So when I look at our solution, right, we are able to deploy over thousands of companies in only eight weeks, right? So when I look at that, it really comes down to three reasons why this is happening in the industry, right? And the first one is what I call ROI invisibility, right? So when you look at companies right now, it’s really easy to get a budget for a pilot, right?

But when it comes to reality, can they get a budget to get the project done, right? So the data that I have to share with you guys, which is astonishing, is that a third of CFOs nowadays cannot quantify ROI inside of their organizations, right? And only one out of ten actually have tools that can measure ROI, right? So what ends up happening is, whenever you talk to those organizations, those companies, and you ask, like, how are you actually going to measure productivity gains, they don’t have the answer, right? What’s the baseline? They don’t have the answer, right? So whenever you bring the CFO in to get that project approval, it ends up on the project never getting approved, and it ends up in that cycle where it gets stuck as a pilot, right?

So when you look at number two, I think it’s data and trust and compliance friction, right? I think there’s a huge red tape in terms of what happens inside of organizations, right? I think it’s very departmentalized, where each part of the organization is trying to solve for each part of the department, right? So when I look at IT, it’s trying to solve for IT. Procurement is trying to solve for procurement. Because no one’s really trying to solve that as an organization, the project ends up stalling. So something that can essentially take a few months to resolve ends up taking six months to a year.

And like I say in sales, time kills every deal. Last but not least, I think my third point is the champion problem. I think there’s a severe issue within organizations nowadays is there’s really no executive sponsorship. And whenever you don’t have executive sponsorship, especially for AI opportunities, deals never get approved. And people, especially at the bottom tier, they don’t understand what’s going on. And when there’s no clear alignment within the middle tier management, deals never get approved.

Ankit Bose

Great. I think let me summarize probably the three points: that you need closely collaborating teams, right, with a single point of view, with executive sponsorship. I think that will solve the adoption piece at last, right? Let me come to you, Professor Ganesh, right? Ganesh, I think what we are discussing is, we have discussed a lot on AI for the last five days, for India, for the globe, you know, and then we had three points of view. I asked them, give me your one top thing. You heard probably from Brenno and KK and then from Sunil. What is that top one take of yours, which India should do so that we can lead the sovereign AI race for the country and the globe?

Ganesh Ramakrishnan

I would suggest interoperability at every layer. I think it is also alluded to by earlier panelists. Interoperability encourages participation, and in the words of the PSA, if you are there in our Bharat, the vision is meaningful participation, right? Interoperability also helps you present alternatives, because there is no one size fits all, and you need to also ensure that in the trade-off between fidelity and latency, or between sensitivity and specificity, you are able to find the right sweet spot which is suitable for you; you can pick something that is appropriate. Just on a lighter note: I was driving from the PSA office and there was such a traffic jam, which most of you experienced, so I exercised my sovereignty and I started walking. So you find alternatives when you think sovereign. Three kilometers; that’s why I was late. So there are alternatives, and also provisions for human participation, much better. There could be places where AI could be substitutional, but many other places where you may want it to be just supplementary or complementary.

So alternatives is another thing that interoperability provides for. And I think the very key is scale-out. I mean, if just by scaling up we could cater to everyone, great; I would say that at least checks one checkbox, which is people being catered to. But even we are not there: scaling up is not going to cater, the capabilities are not there. But even if it were, hypothetically, I think participation would also ensure that people are part of the process; it’s informed. I mean, in BharatGen, I take pride in one of our consortium members, IIM Indore. We are a consortium of nine academic institutions. And in the Institute of Management, what are they doing? They do a fabulous job of going to many of the second-tier cities, going to people who have data, and engaging in conversations, education.

That data is an asset, and you could actually transform that asset into IP generation, and not just source data. So the dialogue, right, and informed decision-making is where participation is encouraged when you have interoperability. I just want to add to what he said. He made a very interesting point: how do you monetize data, correct? And this is something which needs a very different approach, because today what happens is you are sourcing data, and I think the PM yesterday made a very amazing statement, correct? He’s saying, jiska data uska adhikar (roughly, “whoever’s data, theirs is the right”), correct? Very interesting. But if you look at what he’s saying, the creator of the data, the producer of the data, the consent provider for the use, all have a role to play, and that’s why I’ve been using this word called data product or a data catalog.

So you need a catalog first. You need to build a data product and then set up a data contract, which is fundamental, fundamental for interoperability. I just want to add: because if that gets solved, I can choose my own personal data and say, from my data catalog, you can have five things to access. I think India has proven that amazing way with identity and payments. So I think we can actually set up an environment where you can really build this. And the data benefactor is also the same person. So, great point, Professor. I think it probably means definitely removing or optimizing the various layers and taking it to the last person in the rank. And it will help scale to the 1.4 billion that we need.

I think, thank you for that. Let me ask you again a second question; I think this is a very, very direct question. As a country, we are building our foundation models; you are one of the people who is building foundation models for the country. And at large, we have built sub-500-billion-parameter models, and globally, they are going to 5 trillion or plus. The comparison is so huge, right? What do you think India’s moat can be when we are really, you know, in such a situation where we are at a disadvantage, though we have to aggressively, you know, handle that? Yeah, so the other important takeaway, which probably, you know, addresses some part of what you’re asking, is cooperation, right?

Collaboration. Collaboration, honestly, is not just a transactional process. It begins here, right? The will to understand the other side. I just published a book, Informatics and AI for Healthcare, with my colleague, Shetha Jadhav. And what we did in the entire book was, I mean, I empathized with the entire life cycle of a healthcare practitioner, and we tried to map every ML example, informatics example, parsing, to healthcare, right? And vice versa; there was reciprocation from the other side as well. It was a very interesting exercise. I think that’s how co-design also happens. So collaboration is actually to do innovation, and again, China has shown in many ways, right, in contrast to the US ecosystem, that co-design can lead to very innovative ideas. And co-design often is even lacking at the level of algorithms and infrastructure; right there, new algorithms can come up, all the way to application layers. So collaboration also comes by creating an ecosystem where people can participate. Since you alluded again to BharatGen: we have a consortium of nine academic institutions, and the whole collaboration is through a Section 8 company, a not-for-profit company, which engages with for-profit entities but also the academic institutions. Sixty full-time employees work with 100-plus researchers and master’s students. It’s been a very profound exercise in a very short span of time. I mean, we may say we are late. Since you brought up also the landscape outside, which is 1 trillion plus parameters: that’s also our North Star, at least from the IndiaAI vision; that is our goal, to get to at least 1 trillion parameters. But even the 17 million parameter model that we have released, there is a lot of research due diligence that has gone into the architecture choice, and actually we are very proud of whatever model we released. Because ensuring that, you know, if you have two shared experts, one of them is actually catering to languages and mixed code, the other is catering to domain; due diligence that was actually done based on the Indian context, right. The fact that we covered 22 languages in our speech model, the text-to-speech model; again, all of that is because we explicitly captured the common phonetic vocabulary of Indian languages. And that’s only possible through this process of empathy.

I mean, a linguist has to empathize with the computer scientist and vice versa. If we do that, we can actually create magic. Believe me, you can create magic. We just have to break our silos, and the biggest silos are sitting right here. In fact, an endorsement of this came when we built our LLM-enabled speech-to-text model. We had a projector layer which projected from speech to text, and we used a mixture of experts for the projection. It was very interesting: the experts for Hindi and Marathi performed very similarly; I mean, they were the same expert, the expert got shared. Whereas for Telugu, there was collaboration between the Hindi and Tamil experts. So data and domain knowledge are actually reinforcing each other.
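The expert-sharing behaviour described above can be illustrated with a minimal mixture-of-experts projection layer. This is only a sketch: the class, dimensions, and random weights are invented for illustration and are not the consortium's actual speech-to-text architecture.

```python
import math
import random

random.seed(0)

def softmax(logits):
    """Numerically stable softmax over a list of scores."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

class MoEProjector:
    """Toy mixture-of-experts projection (speech space -> text space).

    A gate scores each expert for a given input vector; the output is the
    gate-weighted sum of each expert's linear projection. With such a
    layer, related languages can end up routed to the same expert, as the
    panelist observed for Hindi and Marathi.
    """

    def __init__(self, d_in, d_out, n_experts):
        rand = lambda: random.uniform(-0.1, 0.1)
        # Gating weights: one score column per expert.
        self.gate = [[rand() for _ in range(n_experts)] for _ in range(d_in)]
        # One (d_in x d_out) projection matrix per expert.
        self.experts = [
            [[rand() for _ in range(d_out)] for _ in range(d_in)]
            for _ in range(n_experts)
        ]

    def __call__(self, x):
        # Gate logits: x . gate -> one score per expert, then softmax.
        logits = [sum(x[i] * self.gate[i][e] for i in range(len(x)))
                  for e in range(len(self.experts))]
        gates = softmax(logits)
        # Output: gate-weighted sum of each expert's projection of x.
        d_out = len(self.experts[0][0])
        out = [0.0] * d_out
        for g, w in zip(gates, self.experts):
            for o in range(d_out):
                out[o] += g * sum(x[i] * w[i][o] for i in range(len(x)))
        return out, gates

moe = MoEProjector(d_in=8, d_out=4, n_experts=3)
frame = [random.uniform(-1, 1) for _ in range(8)]  # dummy speech frame
out, gates = moe(frame)
print(len(out))                       # 4
print(abs(sum(gates) - 1.0) < 1e-9)   # True
```

In a real system the gate is trained jointly with the experts, so routing (which languages share an expert) is learned from data rather than fixed by hand.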

So this is actually a time when we can break the language barrier. In my interaction on 8th January, I gifted him a book from our consortium called Samanway; Samanway stands for bringing all languages together. And he said: we need to use AI also to show the strength of India. It is not just AI for India, but AI by India.

Great, great. On the point of collaboration: the story we have all heard, that a single stick breaks but a bundle of sticks does not, is very true, and that collaboration is the moat for India. Building that collaborative effort between different universities, bringing nine different universities together to work, is gigantic work, and what you have created is amazing. Also, we are very happy that three days back we announced an MOU with our heritage foundation sitting in the US; we got a lot of support from people in the Bay Area. Once you open up for collaboration, you will find there is support from around the world, and that is the most important thing. Great, great, great.

Thank you, thank you, Professor Ganesh.

Ankit Bose

So let me come to you, Sunil. I think we all agree that compute is one of the biggest pillars, and the government is doing its bit. But in terms of compute for the country, for the community, can it be a shared commodity, something that different sectors of the country, or the whole ecosystem, come together and build? How do we solve that problem? Because, as you rightly said, a few thousand GPUs versus a few lakhs: that gap is very high.

Sunil Gupta

Number one, they said: you all come and empanel with us at the right price point and right quality, and you declare how many GPUs you can give. They were not forcing us; they said, you decide how much you want to give. We all got empaneled and contributed GPUs, which were made available to startups. Then the government said: every quarter we will come back, encourage new providers to join the facility, and let existing players top up their capacities. And each time, because of market forces, as quantities and supply increase, pricing starts reducing. The government says: if a new player comes, they can reduce the price.

Existing players will have to match, and the government keeps on empaneling more and more capacity. That is what has resulted in the 38,000 GPUs the government talks about: the shared compute facility, which is nothing but a combination of the compute capacity created by multiple providers like us. And yesterday the Prime Minister announced that 20,000 more are being added to this facility. So I would say that, as a concept, the last 18 months have proven this is doable, and the technology is there. Technically it is possible to train the same model across multiple clusters, as Ganeshji can speak on very authoritatively, and inferencing you can certainly do in multiple places. But even without that, what the government did was very democratic: okay, IIT, we will put you with this service provider; Sarvam, we will put you with this provider; GAN, we will put you with this provider. So the government is democratically making sure it encourages industry to invest in creating this capability, and because we are getting business, we are scaling up and investing more and more; and they are making it available to people, because India needs its own models. We may use frontier models for certain purposes, but as the minister was saying, 95 percent of the country's use cases can very well be served by a 20-billion to 100-billion-parameter model. Of course, Ganeshji is also carrying a mandate to create a trillion-parameter model, for what the country requires. Their success, BharatGen's success and Sarvam's success, has proven that India can do it. So I would say the shared compute framework is proven; we just need to scale it up.

My request to the government, which I think they are acting on, is: don't limit it only to the training of models. Training is one step, now done; these models now need to go to market for adoption, and that will require millions of GPUs. I know I am repeating myself, but this is where the government needs to fund the first cycle of inferencing on these models. When users start adopting, say, an agriculture use case, a healthcare use case, an education use case, or any of the multiple UPI-like use cases that will come up, it will take time for users to adopt them, accept them, and make them a part of their lives. Only then will users be happy to pay 10 paise per transaction, or maybe a 50-rupee-per-month subscription, and at that point these models and use cases will become self-sufficient in generating revenue. Until then, they will need government support. So at least for the first cycle of inferencing, maybe one or two years, the government should support not only the funding of model training but also the first phase of inferencing, so that adoption happens and revenue models emerge. After that the government can say, let the private sector invest, and go back to its original role of regulator.
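The empanelment dynamic Sunil describes (new entrants may quote a lower price, incumbents must match it, and aggregate capacity grows each round) can be sketched as a toy simulation. The prices and GPU counts below are purely illustrative, not the actual tender figures.

```python
def run_rounds(rounds):
    """Simulate quarterly empanelment rounds.

    Each round is a (quoted_price, gpus_offered) pair. The prevailing
    price is the lowest quote seen so far (existing players must match),
    and capacity simply accumulates across providers.
    """
    price, capacity = None, 0
    for quote, gpus in rounds:
        price = quote if price is None else min(price, quote)
        capacity += gpus
    return price, capacity

# Three hypothetical rounds: an incumbent, then two cheaper entrants.
price, capacity = run_rounds([(150, 8000), (140, 10000), (120, 20000)])
print(price, capacity)  # 120 38000
```

The point the mechanism illustrates: the buyer never sets the price directly; competition between rounds ratchets it down while the shared pool keeps growing.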

Ankit Bose

Great. So let me augment that with a few thoughts. The IndiaAI Mission has really lit a single fire, and this fire is going to every state in the country: all 28 states and all eight union territories are building AI CoEs, and the mandate for each CoE is to provide compute. Like a small wildfire it will spread all across the country; it will be phenomenal. But at the same time, we have to keep up the pace; the one thing is pace.

Sunil Gupta

Absolutely, Ankit. This is something I know well: two years back, when we said we were putting in 8,000 GPUs, everybody started laughing, because we were starting from a base where India had hardly any GPUs. Today we comfortably say India will go to 50,000-60,000 GPUs, but even today I can tell you India requires millions of GPUs. In the US, just three or four deep-tech companies collectively own millions of GPUs. India has 1.4 billion people, of whom 1 billion carry smartphones, creating and consuming content every single minute. And, as Ganeshji will talk about, everyone is creating voice-based AI, because India's AI will be voice-based: people talking in their own native language, or a mixture of Hindi, English, everything.

And they will be comfortable doing that, rather than writing in their native language on a screen, which is not so easy. Innovations are being done so that even from a feature phone or a regular telephone line, without a smartphone, you will be able to talk to an AI model at the back end. When you are talking about 1.4 billion people coming into the AI fold for multiple use cases, just imagine how many GPUs will be needed for inferencing, and how many for training multiple models across all these sectors. So you are right, Ankit: what we have done in the last two years is kudos to the whole ecosystem, to the government and everybody, all of us.

But we need to keep on building for the next 7, 8, 10 years. Just to give one or two more data points: India is creating and consuming 20% of the world's data; one-fifth of the world's data is created and consumed by India. Yet only 3% of that data is hosted in India. That shows the scale of the infrastructure India needs to build, both at the physical data centre level and in terms of compute and GPUs. Because we don't want any single country or any single company to start dictating our digital destiny. We need to be as sovereign as possible.

Ankit Bose

Thank you, Sunil. Kalyan, let me come to you. I think one big base for sovereignty is the skill set: to research, develop, deploy, and do all of that responsibly. HCL being one of the companies that has done that over the last two or three years, what would be your nuggets? How can other companies, other players in the country, and other countries do the same?

Kalyan Kumar

So, look: what is India known for? India is known, historically, for capability; NASSCOM, right? But that capability was historically, for the majority, capability for hire: you build things for others, and that has been the core business. If you really look at it, when some other country thinks about sovereignty, 50% of the global tech, engineering-services, and development-operations talent is sitting out of India; you see those GCC growth rates. But where is the pivot? The pivot, I think, is what the Professor was talking about: you have to pivot towards build. We have always leaned towards services. So: building, research, development, building your own IP, and making India for the world.

I think that is very important, and that is what our journey has been. What we did in 2015-16 (we have one advantage: we are a company run by a single majority shareholder) came from Mr. Nadar's very ambitious vision. He said: we are building products for others; we should start building for ourselves. This was 2015, a very conscious strategy, and he realised that if you want to play in the global market, you need market permission and market access, because people will only buy if you are a software product company. Hence the whole idea of acquiring intellectual property for India. Because if you really look underneath these pieces, you could build on open source and other stuff, but suddenly some of these open-source companies are getting acquired and becoming closed source.

This is becoming a very interesting pattern, and suddenly some of them are getting classified as dual use. Suddenly they will say: oh, this is dual-use tech, so I can only release this much. So from a skills standpoint, you need fewer, smarter people. I am making a very controversial statement: you need fewer people, smarter people. You need engineers more than coders. What is happening is that we are producing coders; you need engineers, people with systems thinking, people with a research bent. I meet students, and I ask MBA students: what did you do? I did engineering. I say: why on earth did you waste four years of your life if you wanted to go and do an MBA? Why are you not going deeper?

Why don't you specialise in a domain? Those are fundamental things, I would say. The big leap, I think, is something India can solve very interestingly, and, as he was referring to with the PSA, it is quantum. Given the kind of compute needs we have, and looking at the energy GPUs consume, you could completely change the computational paradigm. But that needs fundamental science: research, physics. No one wants to study physics; if you go back 20 years in this country, everyone wanted to go and do coding. Those are the fundamental skills. So what we are doing, in a very small way, is acquiring and building talent and research pools.

So 50% of HCL's software product engineering is in India. But my second-largest engineering centre is in Rome; the third is in Israel; then Perth, Austin, and Chelmsford outside Boston. Why? Because if global companies can come to India, acquire talent to build and research, build IP, and take it to the US, I am doing the reverse. For AppScan, which is a code-security product, the security heuristics are built in Israel and the SaaS UX is built in Boston, but the core engineering is in Bangalore and the IP is registered in India. That is the very different way we are moving: we are now tapping global talent to build for us. We are still a billion and a half; we are not big, but we are in 130 countries. So we are a step in the change. It is a long journey, and it needs to get away from short-term thinking, from hiring people just to get things built. You have to go to a very different model. That is what we are starting within the larger scheme of HCL, and I think we are walking the right path, continuously acquiring assets and building on them.

Ankit Bose

So let me add what I am seeing at the skill level. The persona NASSCOM is focused on, at least, is the developer, and the way we code is changing. NASSCOM has made a concentrated effort to help developers learn the new way of coding and redefine the whole SDLC. The target my team and I have taken is to enable 150,000 developers across the country in the next six months: make them AI-enabled, AI-ready; help them unlearn the old way and learn the new one. That is one thing. But finally, and I should make everyone aware, there will be announcements sometime soon.

But with the MIT and the education industry, we are rewriting the whole technical curriculum: B.Tech, M.Tech, MCA, BCA. We are adding more specialisation, as was rightly said, because we need specialists, not generalists. An engineer studies 48 subjects in four years; at the end, what is he specialised in? It comes down to luck: the group he gets, the project he takes, whatever job he somehow lands. That is what we are changing; announcements will be happening soon. But that is what is happening in the background. Coming back to you, Breno: you have a product so simple that anyone can use it and build agents with it.

And benefit from it. Let me ask you this. One big piece of AI really maturing and having impact is adoption, and you started with the figure that 95% of projects fail, or at least never reach production. So if we have to do adoption at scale, what are the top issues you see, and what pointers would you suggest so the companies and folks here can mitigate them?

Brandon Mello

Yeah. So I'll give you three; one is very specific to India, actually. These are relatable to our solution, but I think they are real use cases, because, like I said, the proof is in the pudding. One: you have to solve a real use case, something that actually changes people's lives. AI is complex and people are still trying to figure it out, so it needs to be something embedded in people's everyday lives. In our case, for example: if you look at Cursor or Lovable, they changed the lives of software engineers through vibe coding. At Genspark, we looked at people producing office work.

People producing Excel, PowerPoint, essentially any mechanical everyday office work. Because if you think about it, every time you do an office task, much of that work is very mechanical, and that is why we saw such massive growth in our solution. So to your point, adoption comes from something that can change people's lives in a very simple way. The second thing is consolidation of tools. From the time we wake up in the morning, most of us pick up our phones and are inundated with messages and apps; then we go to our office work, where we probably have a hundred tools to touch. In our research on work, we found people waste on average two and a half hours a day just flipping between different solutions, and that causes loss of context. So if there is a way to consolidate tools, that also drives adoption. The third one, especially in India, is language: there are a lot of different languages in this country, which you brought up.

So in this country especially, I think LLMs really struggle to produce the right language, particularly with all the dialects this country has. Being able to really naturalise that and bring sovereignty here is very important. And last but not least, people are very scared about data: once they bring data into AI, how is that data going to be treated? So the solution needs to bring that sense of security about how the data will be managed.

Ankit Bose

Great. Thank you, Breno. Now the last segment, one last question, 30 seconds each, starting again with Breno since you have the mic. AI is not a short game; it is a game for the next five years, ten years, decades, probably centuries. What is the challenge that we as humanity have to mitigate, so that we don't align ourselves with something hazardous to us?

Brandon Mello

Yeah. Actually, I was having breakfast the other day, and a person I was with asked me the exact same question. I think it is how human beings interact with AI. We are still trying to figure out how to properly interact with AI, and with the speed at which AI is evolving, we are still uncertain how to manage that. The line in the sand moves so fast that we can't really catch up, and no one really knows yet how the interaction between AI and us should work.

Ankit Bose

So let me map the earlier part onto this: a very specific use of AI for yourself, to make your life simpler; we will adopt AI as a skill; and we have to build the processes for interacting with AI in the long run, because AI is changing and things are changing. Thank you, Breno. Coming back to you, Professor Ganesh: same question, 30 seconds. What is the challenge you see if we build something not aligned?

Professor Ganesh Ramakrishnan

I think the biggest challenge of not making AI aligned is that we will become products, not even consumers. We want to be at the steering wheel. I remember, very fondly, my first machine translation paper; I called it machine-assisted human translation. Obviously that would sound too regressive now, but the key is provenance. How can you leave provenance at every step in the stack? Whether it is data aggregation (which again is aligned with the ecosystem point: you need an ecosystem to leave provenance on the data part), metadata refinement, or data curation; provenance at the level of training and tokenization; and provenance in observability, the other keyword: at the level of how the model performs.

Models should be glass boxes, because that gives you enough breathing space: where should you actually yield your practices versus keep existing practices? If you don't have that view, if the recipes are not made available, if the education isn't there (and as a professor I always focus on the education part), I think we will become products.
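The idea of leaving provenance at every step of the stack can be sketched as a hash-chained log, where each pipeline stage commits to the record before it. This is a minimal illustration of the principle, not any particular production lineage system; the stage names and payloads are hypothetical.

```python
import hashlib
import json

def record_step(log, stage, payload):
    """Append a provenance record chained to the previous one by hash."""
    prev = log[-1]["hash"] if log else ""
    # Canonical serialisation so the hash is reproducible.
    body = json.dumps({"stage": stage, "payload": payload, "prev": prev},
                      sort_keys=True)
    log.append({
        "stage": stage,
        "prev": prev,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return log

log = []
for stage in ["aggregation", "curation", "tokenization", "training"]:
    record_step(log, stage, {"note": f"{stage} done"})

# Each record commits to its predecessor, so any tampering upstream
# breaks the chain and is detectable downstream.
print([r["stage"] for r in log])
print(all(log[i]["prev"] == log[i - 1]["hash"] for i in range(1, len(log))))
```

The same chaining idea applies whether the "stages" are data aggregation, metadata refinement, tokenization, or training runs: each artifact carries a verifiable pointer to what produced it.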

Ankit Bose

Thank you, thank you. Sunil, you and then Kalyan.

Sunil Gupta

No, I concur with those views: at the end of the day, we should not do AI for the sake of doing AI. It is a means to an end, and the end purpose is benefit for the masses. I remember seeing, in a YouTube video, when the Prime Minister met all the startups, and Professor Ganesh was there, the Prime Minister said to everybody: don't make AI to make toys; use AI to benefit the masses with the real problems they face in their real lives. That is also where the name of this event, the Impact Summit, comes from. And yesterday he also made the point that, unlike previous summits where we were too concerned with security and governance (which are things that must be done), at the same time: don't only fear AI; with AI you can make your own fortune, you can build your own future.

So the question is how, with AI, we create an impact and benefit the masses, and also ensure machines do not end up dictating our lives; again, I would say we should not end up becoming the product ourselves. However much AI improves, it will possibly never reach a stage where it acquires human emotions, our sense of gut, our sense of culture, what we convey through body language and not just words. So human-in-the-loop, and humans remaining the masters of AI, is something we will have to safeguard all the time.

Ankit Bose

Interaction, don't become the product, keep development human-centric. Kalyan?

Kalyan Kumar

I would break this into four key areas. The Professor mentioned consumer AI, so I will break it into consumer, enterprise, government, and critical national infrastructure and defence, because all four are going to play out; ten seconds on each. Consumer AI: you are the product, unfortunately. You now have to use data control to decide how much of what you give in order to get; it is a give-to-get model. The day you click "I agree" on an Android phone or on Apple Intelligence, suddenly you are the product. You are getting something back, but there is that give-to-get balance, and that is where the regulator, in my opinion, has a far bigger role to play than in enterprise regulation. Enterprise: God made the world in seven days because He had no installed base. Go and talk to enterprise CIOs on the ground: their reality is a big architectural problem; their data landscape is broken. So they have to pivot from process-and-workflow-first to data-first, a big shift. They need to start with lineage and metadata (most of these companies don't even have correct metadata), do metadata discovery, use techniques like knowledge graphs to understand the metadata, and then organise their data so AI can benefit from it. The third is govtech: government-to-citizen (G2C) engagement, which is massive, and that is where the sovereign AI play comes in, the work Sarvam is doing, or the whole BharatGen effort, because that is where you can host citizen-service platforms. And the last is critical national infrastructure: air-gapped networks, private AI, and defence.
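The metadata-discovery step mentioned above, using knowledge-graph techniques to understand how enterprise data hangs together, can be sketched very simply: treat tables as nodes and shared column names as candidate join-key edges. The schemas below are hypothetical, and real lineage tools go far beyond name matching.

```python
from collections import defaultdict

# Hypothetical table schemas an enterprise might find during a scan.
schemas = {
    "patients":   ["patient_id", "name", "dob"],
    "encounters": ["encounter_id", "patient_id", "facility_id"],
    "facilities": ["facility_id", "district"],
}

# Build a tiny metadata graph: tables are nodes; a shared column name
# becomes an edge (a crude stand-in for join-key / lineage discovery).
edges = defaultdict(set)
tables = list(schemas)
for i, a in enumerate(tables):
    for b in tables[i + 1:]:
        shared = set(schemas[a]) & set(schemas[b])
        if shared:
            edges[a].add(b)
            edges[b].add(a)

print(sorted(edges["encounters"]))  # ['facilities', 'patients']
```

Even this toy graph shows why the pivot to data-first matters: once relationships are explicit, an AI system can traverse them instead of guessing at a broken landscape.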

So I think we need a broken-up view of this whole thing, rather than one brush to paint them all. But the last point: sovereignty is all about choice. Making a choice. As he talked about, it is a great thing to have that choice: I can run on hyperscaler A or B; I can run on Yotta; I can run on Sify; or I can run on my own infrastructure. I need to have that choice; it is all about choice. And second: AI exists for human good, so put people back at the centre. We have suddenly pushed humans to the side and made everything about AI.

It is about people using AI around them. That is my thought.

Ankit Bose

Great, thank you. We have had a lot of good nuggets from everyone, and we will continue this conversation afterwards. As part of NASSCOM, sovereign AI is a big initiative for us; we have been driving it for the last three, three and a half years. Ganesh knows that, Sunil knows that; we have worked enough with the services companies. To keep it going: this is not an end point. We have to think about sovereignty, and about how India builds AGI capability, quantum-AGI capability. That is the journey we are on as NASSCOM, and we are writing a policy document for the government on a sovereign AI and AGI roadmap.

And the QR code is there; it will be here, and I want all of you to have a look. It is a draft; please work on it. That's that. Yeah, Ganesh?

Professor Ganesh Ramakrishnan

I mean, the potential is so immense; we have not even scratched the surface, not even touched the tip of the iceberg. Sovereignty is critical because the amount of inefficiency in that entire stack needs to be done away with. GPUs were never designed for building these models. Can we use even the large workloads we are running to do better ASIC design? Can we use them to build better model-serving engines? There is so much to do. I think everyone should get inquisitive about the entire stack; that is where sovereignty comes from.

Ankit Bose

Absolutely. We are trying to do that in a collaborative way with all of our contributors. Please be a collaborator: we will have a QR code, so please respond to it and give your inputs. And with that, thank you to my panelists. I loved it, and I hope you did too. Thank you again.

Kalyan Kumar

Just one thing I want to add. Watch on the 21st: the PM is inaugurating a new JV that HCL is announcing with Foxconn, called India Chips Limited. I would call it patient capital. It is a 16- and 32-nanometre fab they are creating; basically, it is like an OSAT unit. It will come on line after five years; you have to build the whole thing. But it is also about building the skills, which is a big, important thing, and we have to start now. We cannot wait five years down the line.

Speaker 1

Thank you so much to our panelists. I request the panelists to please stay back for a group photo right now. You can also access the report Ankit has been talking about via the QR code displayed on the digital background, and leave feedback. I am also happy to announce an MOU being signed between Amrita Vishwa Vidyapeetham and NASSCOM right now. Thank you.

Related Resources: knowledge base sources related to the discussion topics (34)
Factual Notes: claims verified against the Diplo knowledge base (4)
Confirmed (high)

“The session opened with a formal inauguration of the Sovereign AI Research Report produced by Amrita Vishwa Vidyapeetham, with senior representatives from Amrita – Pro-Vice-Chancellor Dr Manisha V Ramesh and Dr Shiva Ramakrishnan – attending.”

The knowledge base records that Amrita Vishwa Vidyapeetham participated in the report launch ceremony, confirming the involvement of Amrita in the inauguration [S2].

Confirmed (high)

“Compute scarcity was identified as the primary bottleneck for building sovereign AI capability in India.”

Multiple sources describe infrastructure and compute limitations as the critical bottleneck for AI development in India, confirming the report’s emphasis on compute scarcity [S105] and the national goal to deploy tens of thousands of GPUs [S58].

Confirmed (medium)

“The government has created a shared‑compute facility that aggregates capacity from multiple providers, currently totalling about 38,000 GPUs, with an additional 20,000 announced.”

The knowledge base notes India’s mission to deploy over 38,000 GPUs as public infrastructure, confirming the reported 38,000-GPU figure; the additional 20,000 announcement is not covered in the sources, so the 38,000 part is confirmed [S58].

Additional Context (medium)

“The shared‑compute facility is part of a collaborative framework called “Maitri” that provides shared access to compute, data, and AI models as digital public goods.”

S106 describes the Maitri platform as a collaborative framework offering shared compute, data, and model access, adding detail to the report’s description of the shared‑compute facility.

External Sources (111)
S1
India’s AI Future Sovereign Infrastructure and Innovation at Scale — 2225 words | 200 words per minute | Duration: 665 secondss Hello, good afternoon. Good afternoon. Good afternoon. My na…
S2
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — -Sunil Gupta: Co-founder, MD, and CEO of Yotta – operates data center campuses and built Sovereign Cloud in India, manag…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
Announcement of New Delhi Frontier AI Commitments — -Ganesh: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S7
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Raised by:Kalyan Kumar and Professor Ganesh Ramakrishnan Raised by:Professor Ganesh Ramakrishnan
S8
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — – Ganesh Ramakrishnan- Kalyan Kumar – Sunil Gupta- Ganesh Ramakrishnan
S9
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S10
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S12
India’s AI Future Sovereign Infrastructure and Innovation at Scale — I would say, break this into four key areas. Professor mentioned, I think the consumer AI, so I’m going to break it into…
S13
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — – Ganesh Ramakrishnan- Kalyan Kumar- Sunil Gupta – Kalyan Kumar- Ankit Bose – Sunil Gupta- Ganesh Ramakrishnan- Kalyan…
S14
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — And like I say in sales, time kills every deal. Last but not least, I think my third point is the champion problem. I th…
S15
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Hello. Good afternoon. My name is Brandon Mello. I work for Genspark.ai, a follow-up-based company. We have been arou…
S16
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Hello. Good afternoon. My name is Brandon Mello. I work for Genspark.ai, a follow-up-based company. We have been arou…
S17
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Raised by: Kalyan Kumar and Professor Ganesh Ramakrishnan; Raised by: Professor Ganesh Ramakrishnan
S19
Enhancing rather than replacing humanity with AI — The technology that worries us might also help us, but only if we stay engaged rather than retreat into pure resistance….
S20
UNSC meeting: Artificial intelligence, peace and security — Yi Zeng:My name is Yi Zeng and I would like to take this opportunity to share with distinguished representatives my pers…
S21
Ethical AI_ Keeping Humanity in the Loop While Innovating — Thank you. ensure that they don’t suffer from climate changes and shocks. I mean, the problems are so inspiring. So I th…
S22
Science AI & Innovation_ India–Japan Collaboration Showcase — yeah i think uh two perspectives uh One is in our solutioning, when we, and I’m going to take a live example, when we ac…
S23
Indias Roadmap to an AGI-Enabled Future — Evidence:Examples include agricultural loan assessment in Tamil and legal aid reasoning in Hindi – problems affecting hu…
S24
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Voice technology and multilingual capabilities were highlighted as crucial horizontal solutions for healthcare AI in Ind…
S25
Driving Indias AI Future Growth Innovation and Impact — Minister Jayant Chaudhary outlined the government’s approach to AI democratization, highlighting the India AI mission’s …
S26
How to build trust in user-centric digital public services | IGF 2023 Day 0 Event #193 — Audience:So I have, in a way, a related question to cybersecurity. You asked previously how to deal with trust in the ag…
S27
Panel Discussion Data Sovereignty India AI Impact Summit — The discussion began by challenging conventional notions of sovereignty, with moderator Arghya Sengupta framing the cent…
S28
The future of Digital Public Infrastructure for environmental sustainability — These can promote compliance and foster enhanced stakeholder engagement. Furthermore, data analysis is underscored as an…
S29
Empowering Women Entrepreneurs through Digital Trade and Training ( Global Innovation Forum) — Policymakers are beginning to realise the significant influence of the digital world and its potential impact on various…
S30
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Data Foundations: Proper data infrastructure is essential, with most companies still needing to complete foundational wo…
S31
Designing Indias Digital Future AI at the Core 6G at the Edge — Summary:Roy emphasizes that infrastructure challenges, particularly power consumption and site requirements, are the mai…
S32
Regulating Open Data_ Principles Challenges and Opportunities — Digital ecosystems simply do not function in silos. However, enabling data to move across borders should not mean that c…
S33
Main Session on Sustainability & Environment | IGF 2023 — In conclusion, the analysis presents various arguments and stances on the significance of standards and sustainability. …
S34
The Right to Data for Development (Bluenumber) — The importance of interoperability in agriculture data systems is also highlighted. Interoperability refers to the abili…
S35
Importance of Professional standards for AI development and testing — – Moira De Roche- Liz Eastwood Havey believes that failures like the Post Office scandal result from poor implementatio…
S36
AI as critical infrastructure for continuity in public services — He observes that despite rapid technological advancement and availability of platforms and GPUs, organizations struggle …
S37
Keynote-Rishad Premji — Explanation:Rather than focusing on technological capabilities, there is recognition that the main challenges lie in org…
S38
Supply Chain Fortification: Safeguarding the Cyber Resilience of the Global Supply Chain — Injecting sovereignty in policy making and industry is needed. With the amount of material being brought into the decis…
S39
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Shikoh Gitau: and I’m really glad to be here. Thank you so much for having me. And apologies for joining in late. So, th…
S40
Driving Indias AI Future Growth Innovation and Impact — Evidence:By combining infrastructure and open source, costs can be made palatable for Indian citizens. The goal is servi…
S41
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — High level of consensus with strong alignment between industry experts, academics, and policymakers. This suggests a mat…
S42
Driving Enterprise Impact Through Scalable AI Adoption — Summary:The main disagreements centered on educational priorities (fundamental vs. applied skills), assessment methods (…
S43
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S44
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — 2.Infrastructure capacity- having sovereign compute for advanced models If AI is to become electable in our democracies…
S45
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — 1.Infrastructure Scaling: Continue accelerating from thousands to millions of GPUs required for population-scale deploym…
S46
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Infrastructure and Compute Requirements for Sovereign AI: The panel extensively discussed India’s need for massive GPU i…
S47
Driving Indias AI Future Growth Innovation and Impact — Industry representatives highlighted significant challenges and opportunities in India’s AI landscape. A.S. Rajgopal fro…
S48
From India to the Global South_ Advancing Social Impact with AI — Consensus level:High level of consensus with significant implications for coordinated AI development strategy. The align…
S49
The Global Power Shift India’s Rise in AI & Semiconductors — Consensus level:High level of consensus with complementary perspectives rather than conflicting views. The speakers come…
S50
Building Trustworthy AI Foundations and Practical Pathways — Consensus level:High level of consensus with complementary expertise – Thakkar provides the broad technological and econ…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Unexpectedly, there was strong consensus across industry, government, and academic perspectives on the need for collabor…
S52
Upskilling for the AI era: Education’s next revolution — This comment is insightful because it addresses a common criticism of large-scale initiatives – that they focus on quant…
S53
How AI Is Transforming Indias Workforce for Global Competitivene — Disagreement level:Moderate disagreement with significant implications – while speakers share common goals of inclusive …
S54
Critical battle for high-quality data in AI industry — According tothe Economist, Adobe has defied predictions of its demise in the face of AI by leveraging its vast database …
S55
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “An interesting fact is that most of the AI models in the world work in English”[41]. “But your AI model works in Indian…
S56
Building the Workforce_ AI for Viksit Bharat 2047 — From the community health worker delivering nutrition to an expecting mother to the balancing worker strategizing access…
S57
The Future of Public Safety AI-Powered Citizen-Centric Policing in India — The expansion of language support remains an ongoing challenge and opportunity. Currently, Bhashini is being enhanced to…
S58
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Thank you and colleagues panelists great to be here and great to see a large kind of attendance that we have seen over t…
S59
Open Forum #30 High Level Review of AI Governance Including the Discussion — – Access to high-end compute resources Abhishek Singh, Under-Secretary from the Indian Ministry of Electronics and Info…
S60
Keynote-Rishad Premji — This comment transforms the discussion by repositioning India’s challenges as strengths. It provides the logical foundat…
S61
Open Internet Inclusive AI Unlocking Innovation for All — Anandan provided an optimistic assessment of India’s position in consumer AI applications, revealing that “India today h…
S62
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S63
India’s AI Future Sovereign Infrastructure and Innovation at Scale — This comment reframed the entire sovereignty discussion by identifying compute infrastructure as the critical bottleneck…
S64
Keynote-Mukesh Dhirubhai Ambani — The third commitment centres on building India’s sovereign compute infrastructure through three interconnected initiativ…
S65
Keynote-Mukesh Dhirubhai Ambani — Distinguished guests, my fellow Indians, namaste. The Global AI Impact Summit is a defining moment in India’s tech histo…
S66
High Level Youth IGF : Building a Resilient, Inclusive and Safe Digital Future for West Africa — Building resilience through robust infrastructure and cybersecurity is essential
S67
Successes &amp; challenges: cyber capacity building coordination | IGF 2023 — Enhancing the skills and capabilities of public administrations as they transition into the digital realm requires a str…
S68
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Data Foundations: Proper data infrastructure is essential, with most companies still needing to complete foundational wo…
S69
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-shared-prosperity — And it’s that kind of computing power that is essential. It’s essential for training large AI models. It’s essential for…
S70
Opening of the session/OEWG 2025 — Malawi: Thank you so much, Chair. Allow me to first thank the GFCE and UK government, through the Women in Internation…
S71
WSIS Action Line C6: Digital Ecosystem Builders in action: Redefining the role of ICT regulators — Petros Galides: Thank you. Thank you very much, Moderator, dear Ahmed. Just a few words about eMERGE, as my colleague sa…
S72
S73
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Existing initiatives like the Global Digital Compact and Open Government Partnership provide an opportunity to create co…
S74
The Right to Data for Development (Bluenumber) — The importance of interoperability in agriculture data systems is also highlighted. Interoperability refers to the abili…
S75
Importance of Professional standards for AI development and testing — – Moira De Roche- Liz Eastwood Havey believes that failures like the Post Office scandal result from poor implementatio…
S76
Keynote-Rishad Premji — Rather than focusing on technological capabilities, there is recognition that the main challenges lie in organizational …
S77
Keynote-Rishad Premji — Explanation:Rather than focusing on technological capabilities, there is recognition that the main challenges lie in org…
S78
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S79
Open Mic & Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S80
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S81
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S82
WSIS Prizes 2025 Winner’s Ceremony — The tone throughout the ceremony was consistently celebratory, formal, and appreciative. It maintained a positive and co…
S83
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Thank you very much for having me. It’s always fun to listen to everyone here on this. I was hoping somebody was going t…
S84
[Brussels e-briefings] The Eurozone ‘time bomb’: Can the single currency be rescued for good? — The main positive news is that the Eurozone has understood the need for extraordinary measures to be taken. The sense of…
S85
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — This comment shifted the tone from technical solutions to strategic urgency, emphasizing the need for speed and coordina…
S86
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued) – session 6 — The speaker commenced by acknowledging the Chair’s dedication in revising the Annual Progress Report, particularly the s…
S87
Closure of the session — Thailand: Thank you, Chair, for giving me the floor. Thailand supports the establishment of a Permanent Mechanism that…
S88
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S89
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S90
Dynamic Coalition Collaborative Session — The discussion began with an optimistic, collaborative tone as panelists shared their expertise and perspectives. Howeve…
S91
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — The discussion maintained a consistently collaborative and solution-oriented tone throughout. Speakers were optimistic a…
S92
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S93
Building Trusted AI at Scale – Keynote Anne Bouverot — Overall Tone:The tone is diplomatic, optimistic, and collaborative throughout. It begins with ceremonial courtesy and ap…
S94
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S95
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — Raghavan argues that while the world focuses on immediate metrics like largest models or fastest chips, these are transi…
S96
Opening remarks — Hartmut Glaser:of Science and Technology. was focused on artificial intelligence. President Lula da Silva asked us to di…
S97
State of play of major global AI Governance processes — Dohyun Kang:Thank you very much for introducing me, and thank you again, the Secretary General of the ITU, and under the…
S98
Panel Discussion Data Sovereignty India AI Impact Summit — So first of all, thank you. I’ll just keep it. I’ve answered this in two parts, and real quickly. So. So. critical quest…
S99
Opening of the session — Singapore: Thank you Mr. Chair on behalf of my delegation I’d like to express our thanks to you and your team for the p…
S100
Advancing Scientific AI with Safety Ethics and Responsibility — Thanks Shyam. I think first, yeah first thing that we need to understand is how that ecosystem is and then see if certai…
S101
What Proliferation of Artificial Intelligence Means for Information Integrity? — Ivars Pundurs: THE remainder of the episode is about the collapse and recession of the IMF by IWM. It’s basically about …
S102
National Disaster Management Authority — An unexpected disagreement emerged on the primary bottleneck – Mohapatra identifies data quality as the main issue (only…
S103
National Disaster Management Authority — Explanation:An unexpected disagreement emerged on the primary bottleneck – Mohapatra identifies data quality as the main…
S104
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Despite technical and economic opportunities, significant policy challenges remain. Chandra identified lack of coordinat…
S105
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — The first constraint involves infrastructure limitations, which Patel described as “oxygen for AI.” The global shortage …
S106
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Agarwal explained that while India has strong talent and skills, they faced challenges with compute infrastructure and d…
S107
https://app.faicon.ai/ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — so I would say that the focus is not on rationing but on intelligent prioritization I think that’s going to be the focus…
S108
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Saurabh Garg outlined India’s approach through the proposed “Maitri” platform, a collaborative framework designed to…
S109
AI Infrastructure and Future Development: A Panel Discussion — And we think that if we do buy all of those chips, we really help create a lot of market cap for Lisa and team. And they…
S110
Shaping the Future AI Strategies for Jobs and Economic Development — kilometers away from Earth. We partner with Agni Cool, which is a space tech company, and the space ecosystem has evolve…
S111
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — ask I don’t think there is any country in the world whose government has given its citizens… In India’s context. Yes, …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sunil Gupta
10 arguments · 200 words per minute · 2225 words · 665 seconds
Argument 1
GPU scarcity and need for massive scale (Sunil Gupta)
EXPLANATION
Sunil emphasizes that India’s AI progress is hampered by a shortage of GPU compute, which is essential for training and deploying models at scale. He argues that treating abundant GPU capacity as basic hygiene, available by default, is critical for mass AI adoption.
EVIDENCE
He notes that while India has strong demand and data, it lacked compute, stating that AI cannot run on regular CPUs and requires specialized GPU compute, which was missing at the time ([54-60]). He later quantifies the gap, saying millions of GPUs will be needed for future use cases, whereas currently only a few thousand are available ([70-78]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The critical need for massive GPU infrastructure in India is highlighted in the panel discussion, noting the gap between demand and available GPUs and the target of scaling to 50-60,000 GPUs [S1][S2].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
AGREED WITH
Ankit Bose, Kalyan Kumar
DISAGREED WITH
Ganesh Ramakrishnan, Kalyan Kumar
Argument 2
Shared compute facility as a commodity, government empaneling and scaling (Sunil Gupta)
EXPLANATION
Sunil describes a government‑led shared compute facility where multiple providers contribute GPU capacity, which is then made available to startups and other users at competitive prices. He highlights the empaneling process and recent expansions as evidence of scalability.
EVIDENCE
He explains that providers voluntarily declare how many GPUs they can supply, are empaneled, and the government encourages new entrants, leading to a pool of 38,000 GPUs and an additional 20,000 announced by the Prime Minister ([224-236]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sunil describes a government-led shared compute model where providers are empaneled and contribute GPUs, creating a pool of 38,000 GPUs that can be accessed competitively by startups [S1][S2].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
AGREED WITH
Ganesh Ramakrishnan, Kalyan Kumar
DISAGREED WITH
Ganesh Ramakrishnan, Kalyan Kumar
Argument 3
India’s rich datasets as an advantage, but must be hosted locally (Sunil Gupta)
EXPLANATION
Sunil points out that India generates a large share of global data, which is a strategic asset for AI, but stresses that most of this data is stored abroad, creating a vulnerability. He calls for domestic hosting to ensure sovereignty.
EVIDENCE
He states that India creates and consumes 20% of the world’s data yet only 3% of it is hosted within the country, underscoring the need for local infrastructure ([254-257]).
MAJOR DISCUSSION POINT
Data Infrastructure, Interoperability & Provenance
AGREED WITH
Ganesh Ramakrishnan, Kalyan Kumar
Argument 4
AI must serve the masses, not be “toys” or replace humans (Sunil Gupta)
EXPLANATION
Sunil argues that AI should be a means to solve real problems for the population rather than being developed as a novelty or for entertainment. He urges focus on impactful applications that improve everyday life.
EVIDENCE
He recalls the Prime Minister’s admonition to startups not to create “toys” but to build AI that benefits the masses, emphasizing purpose-driven development ([380-384]).
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Argument 5
Preserve human‑in‑the‑loop, prevent AI from becoming the product itself (Sunil Gupta)
EXPLANATION
Sunil stresses the importance of keeping humans central to AI systems, ensuring that AI augments rather than replaces human decision‑making, and guarding against AI becoming a product that dictates lives.
EVIDENCE
He notes that AI should not acquire human emotions or culture and that a human-in-the-loop approach must be maintained ([385-386]).
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
AGREED WITH
Ganesh Ramakrishnan, Ankit Bose
Argument 6
Government should fund the first cycle of inferencing on sovereign AI models to accelerate adoption and create sustainable revenue streams.
EXPLANATION
Sunil argues that beyond training, the government must support the initial inferencing phase of domestically built models so that early adopters can use them, generate value, and later transition to private sector funding.
EVIDENCE
He explains that the shared compute framework is proven but requests the government not to limit support only to model training; instead, it should fund the first inferencing cycle for use cases such as agriculture, healthcare, and education, enabling users to start paying for services and allowing revenue models to emerge before private investment takes over ([236-241]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He advocates extending the shared-compute framework beyond model training to fund the initial inferencing cycle, enabling early adopters to generate value before private funding takes over [S1].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
AGREED WITH
Ankit Bose
DISAGREED WITH
Kalyan Kumar
Argument 7
India’s AI future will be voice‑first, requiring multilingual voice models to reach the mass population.
EXPLANATION
Sunil highlights that the majority of Indian users will interact with AI through voice in their native languages, so building robust speech‑to‑text and voice assistants is essential for widespread adoption.
EVIDENCE
He notes that “India’s AI will be voice-based” and that people will be comfortable using AI via feature phones or regular telephone lines, speaking in native or mixed languages rather than typing, underscoring the need for voice-centric solutions ([244-247]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion emphasizes that India’s AI will be voice-based, with multilingual speech-to-text models needed for billions of users [S23][S24].
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
Argument 8
Make compute available at low, affordable prices for startups to accelerate AI adoption.
EXPLANATION
Sunil stresses that beyond merely providing GPU capacity, the compute must be offered at a very low price point so that emerging startups can access it without prohibitive costs, thereby fostering rapid innovation and scaling of AI solutions.
EVIDENCE
He notes that the shared compute facility is made available to startups at a very low price, emphasizing the importance of affordable access to GPUs for the ecosystem’s growth ([84]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shared compute facility is offered to startups at very low prices to foster rapid innovation and scaling of AI solutions [S1].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
Argument 9
Sovereign Cloud as a domestic platform for mission‑critical government applications
EXPLANATION
Sunil explains that Yotta has built a Sovereign Cloud in India that currently hosts a large number of mission‑critical applications for the Indian government, providing a secure and locally controlled environment for public services.
EVIDENCE
He states that “We have built Sovereign Cloud in India, which is running a whole lot of mission-critical government of India applications” and adds that they have migrated the Bhashini service from a hyperscale cloud to this sovereign infrastructure ([22-24]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sunil notes that a Sovereign Cloud has been built in India and now hosts many mission-critical government applications, including the migration of Bhashini from a hyperscale provider [S2].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
Argument 10
Migration of key national services to the sovereign cloud demonstrates a shift from reliance on foreign hyperscale providers
EXPLANATION
By moving Bhashini, an important language‑technology service, from an external hyperscale cloud to the domestic Sovereign Cloud, Sunil highlights a strategic move toward data sovereignty and reduced dependence on foreign providers.
EVIDENCE
He notes that “Recently, we migrated Bhashini from a hyperscale cloud to our Sovereign Cloud” ([23-24]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The migration of the Bhashini language-technology service to the domestic Sovereign Cloud illustrates a strategic move toward data sovereignty [S2].
MAJOR DISCUSSION POINT
Data Infrastructure, Interoperability & Provenance
Speaker 1
2 arguments · 55 words per minute · 330 words · 359 seconds
Argument 1
Launch of the Sovereign AI research report to guide national strategy (Speaker 1)
EXPLANATION
Speaker 1 announces the release of a new Sovereign AI research report produced by Amrita Vishwa Vidyapeetham, positioning it as a guiding document for India’s AI roadmap.
EVIDENCE
During the opening remarks, the speaker invites the vice-chancellor of Amrita to launch the report and mentions its availability via a QR code on the digital background ([1]).
MAJOR DISCUSSION POINT
Institutional Collaboration & Reporting
Argument 2
Signing of MOU between Amrita Vishwa Vidyapeetham and NASCOM to foster cooperation (Speaker 1)
EXPLANATION
Speaker 1 announces a formal memorandum of understanding between the academic institution and NASCOM, signalling a partnership to advance sovereign AI initiatives.
EVIDENCE
At the close of the session, the speaker states that an MOU is being signed with Amrita Vishwa Vidyapeetham and NASCOM ([454-455]).
MAJOR DISCUSSION POINT
Institutional Collaboration & Reporting
Ganesh Ramakrishnan
21 arguments · 157 words per minute · 1464 words · 558 seconds
Argument 1
Interoperability at every stack layer, data products and contracts to enable participation (Professor Ganesh Ramakrishnan)
EXPLANATION
Ganesh argues that ensuring interoperability across all layers of the AI stack encourages broader participation, allowing diverse stakeholders to contribute and choose appropriate trade‑offs between fidelity, latency, and other factors.
EVIDENCE
He describes interoperability as a means to provide alternatives, support human participation, and enable different fidelity-latency balances, citing examples from his own consortium work ([151-166]).
MAJOR DISCUSSION POINT
Data Infrastructure, Interoperability & Provenance
AGREED WITH
Sunil Gupta, Kalyan Kumar
Argument 2
Need for data provenance, metadata and cataloguing for trustworthy AI (Professor Ganesh Ramakrishnan)
EXPLANATION
Ganesh stresses that trustworthy AI requires provenance at every stage—data collection, curation, tokenisation, and model observability—supported by robust metadata and cataloguing systems.
EVIDENCE
He outlines the necessity of provenance for data aggregation, metadata refinement, tokenisation, and model observability, calling models “glass boxes” to provide transparency ([371-376]).
MAJOR DISCUSSION POINT
Data Infrastructure, Interoperability & Provenance
AGREED WITH
Sunil Gupta, Ankit Bose
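To make the provenance idea concrete, the sketch below shows one common way such stage-by-stage tracking can be implemented: a hash-chained log in which each record (collection, curation, tokenisation) commits to the one before it, so tampering with an earlier stage invalidates every later hash. This is an illustrative assumption on our part, not an implementation described by the panel; all names and fields are hypothetical.

```python
# Hypothetical sketch of a hash-chained provenance log; not the
# consortium's actual system. Each entry's hash covers its own record
# plus the previous entry's hash, so edits break the chain downstream.

import hashlib
import json

def _digest(record: dict, prev_hash: str) -> str:
    # Canonical JSON plus the previous hash makes each digest order-dependent.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_stage(log: list, stage: str, detail: str) -> None:
    prev_hash = log[-1]["hash"] if log else "genesis"
    record = {"stage": stage, "detail": detail}
    log.append({**record, "hash": _digest(record, prev_hash)})

def verify(log: list) -> bool:
    # Recompute every digest in order; any mismatch means tampering.
    prev_hash = "genesis"
    for entry in log:
        record = {"stage": entry["stage"], "detail": entry["detail"]}
        if entry["hash"] != _digest(record, prev_hash):
            return False
        prev_hash = entry["hash"]
    return True

log: list = []
append_stage(log, "collection", "field recordings, 22 languages")
append_stage(log, "curation", "deduplicated and quality-filtered")
append_stage(log, "tokenisation", "multilingual tokenizer v1")
print(verify(log))           # chain intact: True
log[0]["detail"] = "edited"  # tamper with the earliest record
print(verify(log))           # downstream hashes now invalid: False
```

The same chaining idea extends naturally to the “glass box” goal: model checkpoints can append their training-data log hash, tying model observability back to data provenance.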
Argument 3
Academic‑industry consortiums to co‑design models and foster collaboration (Professor Ganesh Ramakrishnan)
EXPLANATION
Ganesh highlights the creation of a nine‑institution academic consortium that co‑designs foundation models, emphasizing collaborative research and shared ownership of AI assets.
EVIDENCE
He mentions a consortium of nine academic institutions, coordinated through a Section 8 not-for-profit, involving over 100 researchers and students, and cites joint publications and model development efforts ([162-169], [191-200]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
AGREED WITH
Kalyan Kumar, Ankit Bose
Argument 4
Ensure provenance, transparency and alignment throughout the stack (Professor Ganesh Ramakrishnan)
EXPLANATION
Ganesh reiterates that AI systems must embed provenance and transparency at each layer to maintain alignment with human values and avoid becoming opaque products.
EVIDENCE
He discusses the importance of provenance, glass-box models, and education to keep AI aligned, warning that without these practices AI could become a product rather than a tool ([371-376]).
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Argument 5
Scale‑out, not just scale‑up, is required to deliver AI services to the entire population.
EXPLANATION
Ganesh argues that merely increasing the size of centralised compute resources (scaling up) will not meet the diverse needs of India’s billions of users. Instead, a distributed, scale‑out approach is needed to ensure that AI capabilities can be delivered at the required latency and fidelity across the country.
EVIDENCE
He notes that while scaling up would be helpful, “the capabilities are not there” and that “even if it were hypothetically, I think participation would also ensure that people are part of the process”; he then stresses that “scale out” is needed to cater to everyone, implying a distributed architecture ([151-160]).
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
DISAGREED WITH
Sunil Gupta, Kalyan Kumar
Argument 6
Voice‑first AI with extensive multilingual support is essential for Indian adoption.
EXPLANATION
Ganesh emphasizes that India’s AI future will be voice‑driven, requiring models that understand and generate content in many local languages. Building language‑specific experts and covering a broad set of Indian languages will make AI usable for the majority of the population.
EVIDENCE
He states that “India’s AI will be voice-based” and that their speech model covers “22 languages”; he also describes using a mixture-of-experts architecture where experts for Hindi, Marathi, and Telugu collaborate, highlighting the technical focus on multilingual capability ([245-247], [210-212]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ganesh stresses that India’s AI will be voice-driven, requiring models that cover many local languages, aligning with the panel’s emphasis on voice-first, multilingual AI [S23][S24].
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
Argument 7
Current GPU hardware is ill‑suited for AI model training and serving; specialised hardware designs are needed.
EXPLANATION
Ganesh points out that GPUs were originally built for graphics, not for the massive parallelism required by modern AI models. He suggests exploring new hardware concepts, such as a SIG (Specialized Integrated GPU) design, to improve model serving efficiency and reduce legacy constraints.
EVIDENCE
He remarks that “GPUs were never designed for building these models” and asks whether a “SIG design” could be used to achieve better model serving engines, indicating a need for purpose-built compute hardware ([429-432]).
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
Argument 8
Treat data as a monetisable asset through data products, catalogs and contracts to foster participation and generate economic value.
EXPLANATION
Ganesh argues that data should be managed as a strategic asset that can be packaged into data products, with clear catalogues and contracts governing its use. This approach enables creators to retain rights, monetize their contributions, and encourages broader ecosystem participation.
EVIDENCE
He explains that “data is an asset” and that “you could actually transform that asset into IP generation”; he then outlines the need for a “catalog first”, a “data product”, and a “data contract” as foundational for interoperability and value creation ([165-176]).
MAJOR DISCUSSION POINT
Data Governance & Economic Development
DISAGREED WITH
Sunil Gupta
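The catalog-first, product-and-contract idea can be made concrete with a small sketch. This is an illustrative model only; the field names (`owner_id`, `allowed_uses`, `royalty_per_use`) and the in-memory catalog are assumptions for illustration, not anything described by the panel:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A packaged, discoverable dataset entry for a catalog."""
    product_id: str
    owner_id: str    # the data benefactor retains ownership
    description: str
    schema: dict     # column name -> type, enabling interoperability

@dataclass
class DataContract:
    """Terms governing how a data product may be consumed."""
    product_id: str
    consumer_id: str
    allowed_uses: list = field(default_factory=list)  # e.g. ["model-training"]
    royalty_per_use: float = 0.0  # monetisation hook for the owner

# A catalog is simply an index of products; contracts reference it.
catalog = {}

def register(product: DataProduct) -> None:
    catalog[product.product_id] = product

def grant(contract: DataContract) -> bool:
    """A contract is only valid for a product already in the catalog."""
    return contract.product_id in catalog
```

The ordering mirrors the argument: the catalog comes first, products are registered into it, and contracts are granted only against catalogued products.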
Argument 9
Breaking organisational silos through cross‑sector consortiums accelerates AI innovation and ensures inclusive development.
EXPLANATION
Ganesh stresses that collaboration must go beyond transactional relationships, involving co‑design between academia, industry, and non‑profits. Such consortiums enable shared research, pooled expertise, and faster progress toward sovereign AI solutions.
EVIDENCE
He describes a consortium of nine academic institutions coordinated via a Section-8 not-for-profit, mentions a recent MOU with a heritage foundation in the US, and highlights support from the Bay Area, illustrating a broad, collaborative ecosystem ([194-204]).
MAJOR DISCUSSION POINT
Collaboration & Ecosystem Building
Argument 10
AI should be positioned as complementary or supplementary rather than purely substitutive, providing alternatives and preserving human participation.
EXPLANATION
Ganesh argues that AI systems need to offer options that support human users, allowing AI to augment tasks in some contexts while remaining optional in others, rather than replacing human roles entirely.
EVIDENCE
He notes that there can be situations where AI is substitutional, but many other scenarios require AI to be supplementary or complementary, and emphasizes the importance of providing alternatives and ensuring human participation in AI deployments [151-166].
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Argument 11
Adopt mixture‑of‑experts architectures to efficiently support India’s multilingual landscape in speech‑to‑text models.
EXPLANATION
Ganesh describes using a projector layer combined with a mixture‑of‑experts design, where language‑specific experts (e.g., Hindi‑Marathi shared expert, Telugu collaborating with Hindi and Tamil) enable high‑quality performance across many Indian languages, demonstrating a scalable technical strategy for multilingual AI.
EVIDENCE
He explains that their LLM-enabled speech-to-text model uses a projector layer and a mixture-of-experts approach, with shared experts for Hindi and Marathi and collaborative experts for Telugu, illustrating how this architecture handles linguistic diversity [210-212].
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
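The mixture-of-experts routing described above can be illustrated with a toy sketch. The gating scores, the scaling "experts", and the top-k combination below are invented stand-ins; the actual projector layer and per-language subnetworks of the speech-to-text model are not reproduced here:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "experts": each just scales its input; stand-ins for
# per-language subnetworks (Hindi, Marathi, Telugu, ...).
experts = {
    "hindi":   lambda x: [v * 1.0 for v in x],
    "marathi": lambda x: [v * 1.1 for v in x],
    "telugu":  lambda x: [v * 0.9 for v in x],
}

def moe_forward(x, gate_scores, top_k=2):
    """Route input x through the top_k highest-scoring experts and
    combine their outputs, weighted by renormalised gate probabilities."""
    names = list(experts)
    probs = softmax([gate_scores[n] for n in names])
    ranked = sorted(zip(names, probs), key=lambda p: -p[1])[:top_k]
    norm = sum(p for _, p in ranked)
    out = [0.0] * len(x)
    for name, p in ranked:
        y = experts[name](x)
        out = [o + (p / norm) * v for o, v in zip(out, y)]
    return out
```

The point of the architecture is visible even in this sketch: only the selected experts run for a given input, so adding a language adds an expert rather than growing one monolithic model.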
Argument 12
Stakeholders should cultivate curiosity across the entire AI stack to drive holistic innovation.
EXPLANATION
Ganesh calls for everyone involved—researchers, developers, policymakers, and industry—to be inquisitive about all layers of the AI ecosystem, from hardware to data to applications, so that integrated solutions can be created.
EVIDENCE
He states that “everyone should get inquisitive about the entire stack,” urging a broad, cross-layer engagement with AI technologies [426-433].
MAJOR DISCUSSION POINT
Capacity development & Holistic System Design
Argument 13
Leverage India’s existing digital identity and payment infrastructure to create a data ownership and consent framework for sovereign AI
EXPLANATION
Ganesh proposes that the proven Aadhaar and UPI systems can be repurposed to give individuals control over their data, enabling the creation of data catalogs, contracts, and monetisation mechanisms that keep data benefactors as the rightful owners. This approach would strengthen data sovereignty by embedding consent and provenance at the source.
EVIDENCE
He points out that India has demonstrated an effective identity-payment ecosystem and suggests that this can be used to build an environment where data owners retain rights and can create data products, emphasizing that “the data benefactor is also the same person” ([177-184]).
MAJOR DISCUSSION POINT
Data Governance & Sovereign AI
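One way to picture embedding consent and provenance at the source is a consent record keyed to an owner's digital ID, with a digest that lets downstream users verify the terms were not altered. The field names and the hash-as-provenance shortcut below are assumptions for illustration, not a description of how Aadhaar or UPI actually work:

```python
import hashlib
import json

def consent_record(owner_id: str, product_id: str, purpose: str) -> dict:
    """An owner-issued consent entry; the digest ties the terms together
    so a later reader can detect tampering."""
    body = {"owner": owner_id, "product": product_id, "purpose": purpose}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "provenance": digest}

def verify(record: dict) -> bool:
    """Recompute the digest over the terms and compare."""
    body = {k: record[k] for k in ("owner", "product", "purpose")}
    expected = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return record.get("provenance") == expected
```

In this sketch the data benefactor remains the issuer of the record, which is the property the argument emphasises: consent and provenance travel with the data from its origin.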
Argument 14
Interdisciplinary empathy between domain experts (e.g., linguists) and technologists is essential for building AI models that truly serve Indian contexts.
EXPLANATION
Ganesh stresses that effective AI requires close collaboration between specialists such as linguists and computer scientists, allowing each to understand the other’s constraints and contribute to model design.
EVIDENCE
He describes co-authoring a healthcare book where he had to empathise with both clinicians and ML practitioners, and later notes that “a linguist has to empathise with the computer scientist and vice versa” to create useful AI solutions [197-202].
MAJOR DISCUSSION POINT
Collaboration & Ecosystem Building
Argument 15
Co‑design of AI solutions across sectors (e.g., healthcare) drives innovative outcomes and ensures relevance to end‑users.
EXPLANATION
Ganesh argues that involving stakeholders from different domains in the design process leads to AI systems that are better aligned with real‑world needs and generate higher impact.
EVIDENCE
He cites the development of a healthcare-focused AI book and the broader consortium effort that brings together academia, industry, and non-profits to co-design models, demonstrating how cross-sector collaboration fuels innovation [197-200].
MAJOR DISCUSSION POINT
Collaboration & Ecosystem Building
Argument 16
India’s AI potential is immense and still largely untapped, requiring holistic research across the entire AI stack.
EXPLANATION
Ganesh points out that the current AI efforts have only scratched the surface of what is possible, calling for continued, comprehensive research that spans hardware, data, models, and applications to fully realise India’s sovereign AI capabilities.
EVIDENCE
He remarks that “the potential is so immense” and that “we have not even scratched the surface” of AI, indicating a need for broader, deeper investigation across all layers of the stack [426-428].
MAJOR DISCUSSION POINT
Artificial intelligence
Argument 17
Edge‑focused AI engines and vector databases are essential for scaling AI across India’s diverse and distributed user base.
EXPLANATION
Ganesh argues that as AI‑enabled devices proliferate, deploying AI capabilities at the edge becomes critical to reduce latency and meet local needs. He highlights the development of a localized vector AI engine that can run on edge hardware, emphasizing that a distributed, edge‑centric approach complements scale‑out strategies.
EVIDENCE
He explains that as AI PCs become more common, edge computing gains importance and HCL is preparing to release a localized vector AI engine designed to operate on edge devices, illustrating the push for edge-ready AI infrastructure [102-104]. He also stresses that merely scaling up centralised compute will not suffice and that a scale-out model is needed to serve the entire population, reinforcing the need for distributed edge deployment [151-160].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
DISAGREED WITH
Sunil Gupta, Kalyan Kumar
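At its core, a localized vector engine of the kind described reduces to nearest-neighbour search over embeddings. The brute-force sketch below is a minimal stand-in (not HCL's actual engine) showing the basic lookup an edge device would run; production engines layer quantisation and approximate search on top:

```python
import math

class TinyVectorIndex:
    """Minimal in-memory vector index using brute-force cosine similarity."""

    def __init__(self):
        self.items = []  # list of (doc_id, vector) pairs

    def add(self, doc_id, vector):
        self.items.append((doc_id, vector))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def search(self, query, k=1):
        """Return the ids of the k vectors most similar to the query."""
        scored = [(self._cosine(query, v), d) for d, v in self.items]
        scored.sort(reverse=True)
        return [d for _, d in scored[:k]]
```

Because the index and the query both live on the device, no round trip to a central data centre is needed, which is the latency argument for pushing such engines to the edge.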
Argument 18
AI systems must retain human steering control to avoid becoming autonomous products
EXPLANATION
Ganesh warns that if AI is not kept aligned, it could turn into a product that operates without human direction, emphasizing the need for continuous human oversight and control over AI decision‑making.
EVIDENCE
He says, “I think the biggest challenge in not making AI aligned is that we will become products, not even consumers… we want to be in the steering wheel” ([367-369]).
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Argument 19
A not‑for‑profit Section 8 consortium model enables equitable collaboration between academia, industry, and non‑profits, fostering shared ownership of sovereign AI assets.
EXPLANATION
Ganesh explains that the AI effort is organized as a consortium of nine academic institutions coordinated through a Section 8 not‑for‑profit company, which allows both for‑profit and non‑profit entities to work together on shared research and development, ensuring that the resulting AI assets are collectively owned rather than dominated by any single commercial player.
EVIDENCE
He describes the consortium of nine academic institutions, coordinated via a Section 8 not-for-profit entity, involving over 100 researchers and master’s students, and later mentions a recent MOU with a heritage foundation in the US that further expands collaborative support, illustrating the inclusive structure of the partnership [162-169][194-204].
MAJOR DISCUSSION POINT
Collaboration & Ecosystem Building
Argument 20
Protecting open‑source foundations from commercial capture is essential; sovereign AI should prioritize open‑source development to avoid dependence on proprietary, closed‑source technologies.
EXPLANATION
Ganesh warns that while open‑source components are a key building block for sovereign AI, many open‑source projects are being acquired and turned into closed‑source or dual‑use technologies, which threatens the openness and accessibility needed for a truly sovereign ecosystem.
EVIDENCE
He notes that several open-source companies are being acquired and becoming closed-source, and that some are being classified as dual-use, limiting their availability for sovereign AI development [284-286].
MAJOR DISCUSSION POINT
Artificial intelligence
Argument 21
End‑to‑end collaboration across the entire AI stack—from algorithm research to application deployment—is necessary for sovereign AI to ensure that innovations translate into real‑world impact.
EXPLANATION
Ganesh stresses that collaboration must go beyond transactional relationships and include co‑design that spans algorithmic development, model training, and practical application layers, so that new algorithms are integrated with use‑case specific solutions and deliver tangible benefits for India.
EVIDENCE
He describes that collaboration begins with a willingness to understand the other side, co-design of models, and that new algorithms can emerge but must be carried through to application layers, emphasizing holistic, stack-wide cooperation [193-200].
MAJOR DISCUSSION POINT
Collaboration & Ecosystem Building
Ankit Bose
6 arguments, 173 words per minute, 1450 words, 501 seconds
Argument 1
Call for coordinated policy to make compute widely available (Ankit Bose)
EXPLANATION
Ankit urges the creation of a coordinated policy framework that treats compute as a shared commodity, enabling various ecosystem players to collectively build and access GPU resources.
EVIDENCE
He asks Sunil and Kalyan what the single top action should be for sovereign capability and later frames the question about making compute a shared commodity for the country ([92-95], [215-222]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ankit calls for a coordinated policy treating compute as a shared commodity, echoing the panel’s discussion of a government-led shared compute model and its scaling [S1].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
AGREED WITH
Sunil Gupta
Argument 2
Upskilling 150,000 developers, curriculum overhaul (Ankit Bose)
EXPLANATION
Ankit outlines NASSCOM's initiative to train 150,000 developers within six months and to revamp technical curricula (B.Tech, M.Tech, MCA) with deeper specialisation to meet AI skill demands.
EVIDENCE
He describes the target of 150k developers, the partnership with MIT and education industry to rewrite curricula, and the focus on specialist training rather than generic engineering ([312-320]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
AGREED WITH
Kalyan Kumar, Ganesh Ramakrishnan
DISAGREED WITH
Kalyan Kumar
Argument 3
Requirement for cross‑functional collaboration and executive buy‑in to drive adoption (Ankit Bose)
EXPLANATION
Ankit summarises that successful AI adoption requires tightly coordinated teams with a single point of view and strong executive sponsorship to overcome organisational inertia.
EVIDENCE
He explicitly summarises three points (close collaboration, a single point of view, executive sponsorship) as the solution to adoption challenges ([144-146]).
MAJOR DISCUSSION POINT
Adoption Challenges & ROI
AGREED WITH
Brandon Mello
Argument 4
AI alignment and safety must be prioritized to prevent the development of hazardous or misaligned systems that could harm society.
EXPLANATION
Ankit warns that AI should not be pursued as a short‑term game; instead, long‑term alignment safeguards are needed to avoid creating technologies that could become dangerous.
EVIDENCE
He remarks that “AI is not a short game… we have to mitigate… we don’t align with something which is hazardous to us” indicating a call for safety-first development [352-357].
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Argument 5
Physical space constraints are a critical factor in scaling AI compute resources across the country.
EXPLANATION
Ankit highlights that beyond compute power, the availability of physical infrastructure (space) is essential for deploying large‑scale AI hardware nationwide.
EVIDENCE
He briefly notes “one thing is space” when discussing the need to keep up the pace of AI infrastructure deployment [238-239].
MAJOR DISCUSSION POINT
Infrastructure & Scaling
Argument 6
NASSCOM is drafting a policy document and roadmap for sovereign AI and AGI for the Indian government
EXPLANATION
Ankit states that NASSCOM is preparing a policy paper for the government that outlines a roadmap for sovereign AI and artificial general intelligence, indicating a proactive role in shaping national AI strategy.
EVIDENCE
He mentions “We are writing a current policy document for government on sovereign AI and AGI roadmap” and refers to a QR code that will provide access to this material ([419-424]).
MAJOR DISCUSSION POINT
The enabling environment for digital development
Kalyan Kumar
8 arguments, 175 words per minute, 1697 words, 579 seconds
Argument 1
Long‑term hardware fab (India Chips Limited) to secure future compute (Kalyan Kumar)
EXPLANATION
Kalyan announces a joint venture with Foxconn to build a 16/32 nm semiconductor fab (India Chips Limited), describing it as patient capital that will eventually supply domestic chips for AI compute.
EVIDENCE
He provides details of the JV, the fab’s technology node, its OSAT nature, and the five-year timeline, emphasizing the urgency to start now ([441-447]).
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
AGREED WITH
Sunil Gupta, Ankit Bose
Argument 2
Building a modern data stack (vector DB, edge inferencing) as a core layer (Kalyan Kumar)
EXPLANATION
Kalyan stresses that beyond hardware, a modern data infrastructure—including vector databases, edge inferencing, and centralized data platforms—is essential for scaling AI applications.
EVIDENCE
He details HDB, Actian's Ingres patents, acquisition of a vector engine from CWI, and plans to release a localized vector AI engine for edge devices ([96-108]).
MAJOR DISCUSSION POINT
Data Infrastructure, Interoperability & Provenance
AGREED WITH
Sunil Gupta, Ganesh Ramakrishnan
Argument 3
Shift from services to building own IP; need smarter engineers and research focus (Kalyan Kumar)
EXPLANATION
Kalyan argues that India must pivot from a service‑oriented model to building proprietary IP, requiring engineers with systems thinking, research orientation, and deeper domain expertise.
EVIDENCE
He recounts HCL’s 2015-16 strategic shift, the need for smarter engineers over coders, and examples of IP creation across global centres ([266-292]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
AGREED WITH
Ankit Bose, Ganesh Ramakrishnan
Argument 4
Investing in fundamental science (quantum, physics) for next‑gen compute (Kalyan Kumar)
EXPLANATION
Kalyan highlights the necessity of fundamental research in quantum computing and physics to drive future compute paradigms, noting that current GPU‑centric approaches may be insufficient.
EVIDENCE
He references the PSA’s quantum roadmap, the need for new compute paradigms, and the scarcity of physics talent ([298-304]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
Argument 5
Strong regulatory oversight is needed in consumer AI to protect users from becoming the product and to manage data‑give‑to‑get dynamics.
EXPLANATION
Kalyan points out that consumer‑facing AI applications often turn users into data sources, so a robust regulatory framework is required to safeguard privacy and ensure fair data exchange.
EVIDENCE
He explains that in consumer AI “you are the product” and emphasizes the “role of the regulator” in managing the give-to-get model, highlighting the need for policy intervention to protect users [390-398].
MAJOR DISCUSSION POINT
Human‑Centric AI & Data Governance
DISAGREED WITH
Sunil Gupta
Argument 6
Enterprise AI success depends on robust data lineage and metadata discovery capabilities to enable trustworthy data‑first approaches.
EXPLANATION
Kalyan argues that many enterprises lack proper metadata and data‑lineage tools, which hampers AI deployment; establishing these capabilities is critical for reliable AI outcomes.
EVIDENCE
He notes that “most companies don’t have metadata” and stresses the need for “metadata discovery, data lineage, and cataloguing” to build trustworthy data products for AI applications [395-401].
MAJOR DISCUSSION POINT
Data Infrastructure & Trust
Argument 7
Building sovereign software products by design to ensure national control over technology
EXPLANATION
Kalyan emphasizes that HCL Software not only delivers services but also creates software products that are engineered to be sovereign by design, ensuring that critical enterprise tools remain under Indian ownership and control.
EVIDENCE
He describes his role as “We are … building software products which are sovereign by design” while outlining HCL’s scale and revenue ([15-17]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
Argument 8
AI development should prioritize fewer, smarter engineers with systems thinking over large numbers of coders.
EXPLANATION
Kalyan argues that the future of sovereign AI in India depends on hiring a smaller pool of highly skilled engineers who can think system‑wide and conduct research, rather than mass hiring of generic coders. This shift is essential to move from a service‑oriented model to building proprietary IP.
EVIDENCE
He states, “You need lesser people, smarter people. You need engineers more than coders. See what’s happening is that we’re building coders. You need engineers, people who think systems thinking, you need people who are research-bent” ([287-292]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
DISAGREED WITH
Ankit Bose
Brandon Mello
4 arguments, 147 words per minute, 1171 words, 475 seconds
Argument 1
ROI invisibility: CFOs cannot quantify benefits, limiting pilot approvals (Brandon Mello)
EXPLANATION
Brandon points out that many CFOs lack tools or data to calculate ROI for AI projects, leading to difficulty in securing budgets and causing pilots to stall.
EVIDENCE
He cites that a third of CFOs cannot quantify ROI and only one in ten has the tools to measure it, which hampers project approval ([119-124]).
MAJOR DISCUSSION POINT
Adoption Challenges & ROI
Argument 2
Organizational friction and lack of executive sponsorship stall AI projects (Brandon Mello)
EXPLANATION
He describes how departmental silos, procurement bottlenecks, and the absence of executive champions cause AI initiatives to drag on for months or years, killing momentum.
EVIDENCE
He details friction across IT, procurement, and the need for executive sponsorship, noting that without it projects never get approved ([129-138], [139-142]).
MAJOR DISCUSSION POINT
Adoption Challenges & ROI
AGREED WITH
Ankit Bose
Argument 3
Real‑world use cases, tool consolidation, language localisation, and data security as adoption enablers (Brandon Mello)
EXPLANATION
Brandon argues that AI adoption improves when solutions address concrete everyday problems, consolidate fragmented tools, support India’s multilingual landscape, and assure data security.
EVIDENCE
He mentions GenSpark’s focus on office-work automation, the need to reduce tool-switching time, challenges of multiple Indian languages, and concerns about data handling ([336-351]).
MAJOR DISCUSSION POINT
Adoption Challenges & ROI
Argument 4
Uncertainty around human‑AI interaction; need to define safe engagement (Brandon Mello)
EXPLANATION
Brandon reflects on the broader societal uncertainty about how humans will interact with increasingly capable AI systems, calling for clearer frameworks to manage this relationship.
EVIDENCE
He shares a personal anecdote about being asked how humans should interact with AI, noting the rapid evolution of AI and the lack of established guidelines ([358-366]).
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Agreements
Agreement Points
India faces a critical shortage of GPU compute and must scale infrastructure dramatically to enable sovereign AI at national scale.
Speakers: Sunil Gupta, Ankit Bose, Kalyan Kumar
GPU scarcity and need for massive scale (Sunil Gupta) Call for coordinated policy to make compute widely available (Ankit Bose) Long‑term hardware fab (India Chips Limited) to secure future compute (Kalyan Kumar)
Sunil stresses that the lack of abundant GPU resources is the core bottleneck for AI adoption and projects a need for millions of GPUs ([54-60][70-78]). Ankit asks whether compute can be treated as a shared commodity and a national resource ([215-222]). Kalyan points to the upcoming India Chips Limited fab as a strategic move to secure future domestic chip supply ([441-447]).
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with multiple reports highlighting India’s need to expand from tens of thousands to hundreds of thousands of GPUs for population-scale AI, as noted in industry briefings and policy papers [S45][S46][S47].
A government‑led shared compute facility, where multiple providers contribute GPUs at low cost, is a viable model to democratise access to AI resources.
Speakers: Sunil Gupta, Ankit Bose
Shared compute facility as a commodity, government empaneling and scaling (Sunil Gupta) Call for coordinated policy to make compute widely available (Ankit Bose)
Sunil describes the empaneling process that has created a pool of 38,000 GPUs, with an additional 20,000 announced, offered to startups at very low prices ([224-236]). Ankit frames the need for a policy that treats compute as a shared commodity for the country ([215-222]).
POLICY CONTEXT (KNOWLEDGE BASE)
The consensus on heterogeneous, shared compute for democratizing AI is documented in the Heterogeneous Compute for Democratizing Access report and the AI Summit working group on democratizing resources [S41][S58][S59].
India must retain its data domestically and build robust data‑governance mechanisms (catalogues, contracts, provenance) to ensure sovereignty and trust.
Speakers: Sunil Gupta, Ganesh Ramakrishnan, Kalyan Kumar
India’s rich datasets as an advantage, but must be hosted locally (Sunil Gupta) Need for data provenance, metadata and cataloguing for trustworthy AI (Professor Ganesh Ramakrishnan) Building a modern data stack (vector DB, edge inferencing) as a core layer (Kalyan Kumar)
Sunil notes that India creates/consumes 20% of global data but only 3% is hosted locally ([254-257]). Ganesh argues for data products, catalogs and contracts to enable participation and provenance ([165-176]). Kalyan outlines HCL’s work on vector databases and edge-ready data platforms as essential infrastructure ([96-108]).
POLICY CONTEXT (KNOWLEDGE BASE)
Policy emphasis on data localisation and governance is reflected in discussions on digital sovereignty and regulatory capacity for AI testing [S44][S59].
Interoperability across all layers of the AI stack and a scale‑out architecture are essential to serve India’s diverse, billions‑strong user base.
Speakers: Ganesh Ramakrishnan, Sunil Gupta, Kalyan Kumar
Interoperability at every stack layer, data products and contracts to enable participation (Professor Ganesh Ramakrishnan) Shared compute facility as a commodity, government empaneling and scaling (Sunil Gupta) Edge‑focused AI engines and vector databases are essential for scaling AI across India’s diverse and distributed user base (Kalyan Kumar)
Ganesh stresses that interoperability encourages participation and supports scale-out rather than just scale-up ([151-160]). Sunil’s shared-compute model with multiple providers embodies a practical scale-out approach ([224-236]). Kalyan highlights edge-ready vector engines to bring AI to the periphery ([102-104]).
POLICY CONTEXT (KNOWLEDGE BASE)
Interoperability and scale-out architecture are highlighted as key for heterogeneous compute and scaling GPU numbers to millions [S41][S45].
India’s AI future will be voice‑first and multilingual, requiring models that support many local languages.
Speakers: Sunil Gupta, Ganesh Ramakrishnan
India’s AI future will be voice‑first, requiring multilingual voice models to reach the mass population (Sunil Gupta) Voice‑first AI with extensive multilingual support is essential for Indian adoption (Professor Ganesh Ramakrishnan)
Sunil points out that AI in India will be accessed via voice on feature phones, covering native languages ([244-247]). Ganesh reinforces this by noting their speech-to-text model covers 22 languages and uses mixture-of-experts for Indian languages ([245-247]).
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple panels stressed a voice-first, multilingual strategy for Indian languages, citing Bhashini expansion and the need for Indian-language models [S55][S57][S61].
Human‑in‑the‑loop oversight and AI alignment are non‑negotiable to prevent AI from becoming an autonomous product that harms society.
Speakers: Sunil Gupta, Ganesh Ramakrishnan, Ankit Bose
Preserve human‑in‑the‑loop, prevent AI from becoming the product itself (Sunil Gupta) Need for data provenance, metadata and cataloguing for trustworthy AI (Professor Ganesh Ramakrishnan) AI alignment and safety must be prioritized to prevent the development of hazardous or misaligned systems (Ankit Bose)
Sunil warns that AI must remain a tool with human steering and not acquire human emotions ([385-386]). Ganesh calls for provenance and glass-box models to keep AI aligned ([371-376]). Ankit stresses the need for long-term alignment safeguards ([352-357]).
POLICY CONTEXT (KNOWLEDGE BASE)
Trusted AI at scale and sovereign AI risk frameworks call for human-in-the-loop controls and alignment mechanisms [S43][S44][S50].
Upskilling developers, revising curricula and fostering specialist talent are essential to build indigenous AI IP.
Speakers: Kalyan Kumar, Ankit Bose, Ganesh Ramakrishnan
Shift from services to building own IP; need smarter engineers and research focus (Kalyan Kumar) Upskilling 150,000 developers, curriculum overhaul (Ankit Bose) Academic‑industry consortiums to co‑design models and foster collaboration (Professor Ganesh Ramakrishnan)
Kalyan argues for a pivot to building IP with fewer, smarter engineers and deeper domain expertise ([266-292]). Ankit outlines NASSCOM’s target to train 150,000 developers and rewrite technical curricula ([312-320]). Ganesh highlights a nine-institution consortium that co-designs models and builds IP ([162-169]).
POLICY CONTEXT (KNOWLEDGE BASE)
Policy recommendations for AI upskilling emphasize a needs-based, quality-focused approach rather than sheer quantity [S52][S53][S56].
Successful AI adoption requires close cross‑functional collaboration, a single point of view and strong executive sponsorship.
Speakers: Brandon Mello, Ankit Bose
Organizational friction and lack of executive sponsorship stall AI projects (Brandon Mello) Requirement for cross‑functional collaboration and executive buy‑in to drive adoption (Ankit Bose)
Brandon identifies ROI invisibility, departmental silos and missing executive champions as reasons pilots fail ([119-124][129-142]). Ankit summarises that close collaboration, a single point of view and executive sponsorship are needed to solve adoption challenges ([144-146]).
POLICY CONTEXT (KNOWLEDGE BASE)
High consensus on collaborative, cross-sector governance for AI is documented in AI policy roadmaps and summit discussions [S51][S58][S59].
Government should fund the first inferencing cycle of sovereign models to catalyse early adoption and create sustainable revenue streams.
Speakers: Sunil Gupta, Ankit Bose
Government should fund the first cycle of inferencing on sovereign AI models to accelerate adoption and create sustainable revenue streams (Sunil Gupta) Call for coordinated policy to make compute widely available (Ankit Bose)
Sunil urges that beyond training, the government should support the initial inferencing phase so users can start paying for services and generate revenue ([236-241]). Ankit frames the broader policy need for shared compute as a national resource ([215-222]).
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for catalytic government funding of early inferencing and subsidized GPU access appear in policy briefs on AI growth and democratizing resources [S40][S58][S45].
Similar Viewpoints
Both stress that without affordable, abundant compute resources AI projects cannot progress—Sunil highlights the hardware shortage while Brandon points out that lack of ROI measurement (and thus funding) stalls pilots ([84][119-124]).
Speakers: Sunil Gupta, Brandon Mello
GPU scarcity and need for massive scale (Sunil Gupta) ROI invisibility: CFOs cannot quantify benefits, limiting pilot approvals (Brandon Mello)
Both argue that a modern, interoperable data infrastructure—including vector databases and edge capabilities—is essential for scalable sovereign AI ([96-108][151-160]).
Speakers: Ganesh Ramakrishnan, Kalyan Kumar
Interoperability at every stack layer, data products and contracts to enable participation (Professor Ganesh Ramakrishnan) Building a modern data stack (vector DB, edge inferencing) as a core layer (Kalyan Kumar)
Both propose a policy‑driven, shared‑compute model where the government coordinates providers to make GPU resources widely accessible at low cost ([215-222][224-236]).
Speakers: Ankit Bose, Sunil Gupta
Call for coordinated policy to make compute widely available (Ankit Bose) Shared compute facility as a commodity, government empaneling and scaling (Sunil Gupta)
Unexpected Consensus
Both industry leaders (Sunil Gupta) and startup‑focused experts (Brandon Mello) agree that affordable compute is a prerequisite for AI adoption, despite their different market positions.
Speakers: Sunil Gupta, Brandon Mello
Make compute available at low, affordable prices for startups to accelerate AI adoption (Sunil Gupta) ROI invisibility: CFOs cannot quantify benefits, limiting pilot approvals (Brandon Mello)
Sunil explicitly mentions low-price access to GPUs for startups ([84]), while Brandon highlights that without clear ROI (often a cost issue) pilots fail ([119-124]). Their convergence on cost as a barrier was not anticipated given their differing perspectives.
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus on affordable compute is reflected in industry-government-academic alignment reports and Sunil Gupta’s statements on compute gaps [S41][S46][S47].
Consensus between a senior government‑linked AI provider (Sunil Gupta) and an academic researcher (Ganesh Ramakrishnan) on the necessity of a voice‑first, multilingual AI strategy for mass adoption.
Speakers: Sunil Gupta, Ganesh Ramakrishnan
India’s AI future will be voice‑first, requiring multilingual voice models to reach the mass population (Sunil Gupta)
Voice‑first AI with extensive multilingual support is essential for Indian adoption (Professor Ganesh Ramakrishnan)
Despite coming from different sectors, both stress that voice-based, multilingual models are the key to reaching billions of users ([244-247][245-247]). This alignment across industry and academia was not explicitly highlighted earlier.
POLICY CONTEXT (KNOWLEDGE BASE)
The alignment between government-linked providers and academic researchers on voice-first multilingual AI is documented in panels discussing Indian language models [S55][S46][S57].
Overall Assessment

The panel shows strong convergence on three pillars: (1) massive, affordable compute infrastructure (including shared‑compute models and future domestic chip fab); (2) robust data governance and interoperable data stacks; (3) human‑centric, multilingual, voice‑first AI with clear alignment and executive sponsorship. Skill development, collaborative consortia and government support for early inferencing are also widely endorsed.

High consensus across industry, academia and policy makers, indicating a unified national agenda for sovereign AI. The alignment suggests that forthcoming policies are likely to focus on shared compute facilities, data sovereignty frameworks, and large‑scale skill‑building programmes, which could accelerate India’s AI capabilities while ensuring ethical and inclusive outcomes.

Differences
Different Viewpoints
Approach to AI skill development and workforce upskilling
Speakers: Ankit Bose, Kalyan Kumar
Upskilling 150,000 developers and a curriculum overhaul (Ankit Bose)
AI development should prioritize fewer, smarter engineers with systems thinking over large numbers of coders (Kalyan Kumar)
Ankit proposes training 150,000 developers in six months and revamping curricula to create many specialists [312-320], while Kalyan argues that the focus should be on a smaller pool of highly skilled engineers with systems-thinking abilities rather than mass coder hiring [287-292].
POLICY CONTEXT (KNOWLEDGE BASE)
Moderate disagreement on skill development priorities (fundamental vs applied) was noted in the AI workforce transformation discussion [S53].
Strategy for scaling compute resources for sovereign AI
Speakers: Sunil Gupta, Ganesh Ramakrishnan, Kalyan Kumar
GPU scarcity and need for massive scale (Sunil Gupta)
Shared compute facility as a commodity, government empaneling and scaling (Sunil Gupta)
Scale‑out, not just scale‑up, is required to deliver AI services to the entire population (Ganesh Ramakrishnan)
Edge‑focused AI engines and vector databases are essential for scaling AI across India’s diverse and distributed user base (Kalyan Kumar)
Sunil stresses that India lacks enough GPUs and proposes a centralized shared-compute pool created through government empaneling, aiming to increase GPU numbers to tens of thousands [54-60][70-78][224-236], whereas Ganesh (and Kalyan) argue that merely scaling up central capacity is insufficient and that a distributed, scale-out architecture with edge-ready engines is needed to reach the whole population [151-160][102-104].
Who should fund the initial inferencing phase of sovereign models
Speakers: Sunil Gupta, Kalyan Kumar
Government should fund the first cycle of inferencing on sovereign AI models to accelerate adoption and create sustainable revenue streams (Sunil Gupta)
Strong regulatory oversight is needed in consumer AI to protect users from becoming the product and to manage data‑give‑to‑get dynamics (Kalyan Kumar)
Sunil calls for government financial support for the first inferencing cycle to enable early adoption and revenue generation [236-242], while Kalyan emphasizes regulatory oversight and suggests that private sector and market mechanisms should drive adoption, implying less direct government funding for inferencing [390-398].
Data strategy: localisation vs monetisation
Speakers: Ganesh Ramakrishnan, Sunil Gupta
Treat data as a monetisable asset through data products, catalogs and contracts to foster participation and generate economic value (Ganesh Ramakrishnan)
India’s rich datasets are an advantage, but they must be hosted locally (Sunil Gupta)
Ganesh proposes building data products, catalogs and contracts to monetize data and encourage ecosystem participation [165-176], whereas Sunil highlights that most Indian data is stored abroad and calls for domestic hosting to ensure sovereignty, focusing on localisation rather than monetisation [254-257].
Unexpected Differences
Long‑term hardware production vs reliance on imported GPUs
Speakers: Kalyan Kumar, Sunil Gupta
Long‑term hardware fab (India Chips Limited) to secure future compute (Kalyan Kumar)
GPU scarcity and need for massive scale (Sunil Gupta)
Kalyan announces a joint venture to build a domestic semiconductor fab for future AI chips, emphasizing patient capital and a five-year timeline [441-447], while Sunil focuses on acquiring large numbers of NVIDIA GPUs from abroad to meet current demand, indicating differing views on the primary source of compute hardware [54-60][70-78].
Quantity vs quality in AI workforce development
Speakers: Ankit Bose, Kalyan Kumar
Upskilling 150,000 developers and a curriculum overhaul (Ankit Bose)
AI development should prioritize fewer, smarter engineers with systems thinking over large numbers of coders (Kalyan Kumar)
Ankit’s plan targets mass training of 150,000 developers and curriculum changes to quickly expand the talent pool [312-320], whereas Kalyan argues for a strategic shift toward hiring fewer but more capable engineers with deep systems expertise, challenging the mass-training approach [287-292].
POLICY CONTEXT (KNOWLEDGE BASE)
Debate over focusing on sheer numbers of trainees versus depth of expertise is highlighted in upskilling commentary emphasizing quality over quantity [S52].
Overall Assessment

The panel converged on the need for a sovereign AI ecosystem but diverged on how to achieve it. Major friction points include the preferred method of scaling compute (centralised shared pool vs distributed edge‑centric scale‑out), the role of government funding versus regulatory or private‑sector mechanisms for early inferencing, and contrasting philosophies on workforce development (mass upskilling vs elite engineering). These disagreements reflect differing priorities between immediate infrastructure deployment and longer‑term strategic autonomy.

Moderate to high – while all participants share the overarching goal of sovereign AI, the contrasting approaches to compute provisioning, funding models, and talent strategy could impede coordinated policy implementation unless reconciled.

Partial Agreements
Both agree that India must build sovereign AI capability and increase compute capacity, but Sunil favours a centralized shared‑compute model while Ganesh stresses distributed scale‑out and edge deployment as essential for nationwide reach [54-60][70-78][151-160].
Speakers: Sunil Gupta, Ganesh Ramakrishnan
GPU scarcity and need for massive scale (Sunil Gupta)
Scale‑out, not just scale‑up, is required to deliver AI services to the entire population (Ganesh Ramakrishnan)
Takeaways
Key takeaways
Compute scarcity is the primary bottleneck for sovereign AI in India; massive scaling of GPU resources is required.
A shared, government‑empanelled compute facility is being built, but it must be expanded to millions of GPUs for training and inferencing.
Long‑term hardware self‑reliance (e.g., the India Chips Limited fab) is essential to secure future compute capacity.
A modern data stack, including vector databases, edge inferencing, metadata catalogues, and data provenance, is critical for trustworthy AI.
Interoperability at every layer of the AI stack enables participation, choice, and the ability to combine multiple models and services.
Adoption is hindered by ROI invisibility, lack of executive sponsorship, and organizational friction; real‑world, language‑localised use cases and tool consolidation are needed.
Skill development must shift from a services‑only model to building indigenous IP; upskilling 150,000 developers and revising curricula are planned.
Ethical alignment and human‑in‑the‑loop design are required so AI serves the masses rather than becoming a product or a risk.
Collaboration between academia, industry, and government (e.g., the nine‑institution consortium and the Amrita‑NASSCOM MOU) is seen as the engine for sovereign AI progress.
Resolutions and action items
Launch of the Sovereign AI research report by Amrita Vishwa Vidyapeetham.
Signing of an MOU between Amrita Vishwa Vidyapeetham and NASSCOM to foster cooperation.
Government to continue empaneling and subsidising GPU capacity, with 38,000 GPUs already empanelled and an additional 20,000 announced.
NASSCOM to create a policy document on sovereign AI and an AGI roadmap (QR code provided for feedback).
NASSCOM initiative to upskill 150,000 developers across India within six months.
HCL announced the upcoming joint venture ‘India Chips Limited’ to build a 16/32 nm fab for future compute needs.
Call for the first cycle of inferencing on trained models to be funded by the government to jump‑start adoption.
Panelists invited to stay for a group photo and to provide feedback on the report via the QR code.
Unresolved issues
Financing and logistics for scaling GPU infrastructure from tens of thousands to the millions needed for nationwide inferencing.
Concrete mechanisms to ensure that India’s massive data volumes are hosted locally and governed securely.
Specific business models that will make large‑scale inferencing financially sustainable after the initial government subsidy.
Standardised frameworks for data provenance, metadata, and contracts that can be adopted across diverse sectors.
Clear guidelines for safe human‑AI interaction and alignment to prevent misuse or loss of control.
How to effectively coordinate and govern the interoperability of heterogeneous models, tools, and platforms.
Strategies to overcome ROI invisibility and to institutionalise executive sponsorship for AI projects.
Suggested compromises
Treat compute as a shared commodity: multiple private providers contribute GPUs, with government‑driven price competition and empanelment to keep costs low.
Adopt interoperability as a design principle, allowing multiple vendors and open‑source alternatives to coexist rather than enforcing a single stack.
Combine government funding for both model training and the initial phase of inferencing, then transition to private‑sector financing once usage scales.
Balance scaling‑up (larger models) with scaling‑out (distributed edge inferencing) to meet both latency and capacity requirements.
Encourage co‑design between academia and industry to leverage domain expertise while sharing development costs.
Thought Provoking Comments
The biggest problem for taking AI to the masses in India is how to make compute available in an abundant way – we need a shared, low‑price GPU infrastructure that becomes a hygiene factor.
He pinpointed the core bottleneck (compute) that underlies all other AI capabilities, moving the conversation from abstract potential to a concrete, actionable infrastructure challenge.
This comment shifted the discussion toward concrete government‑industry collaboration on GPU provisioning, prompting follow‑up questions about shared commodity compute and leading Sunil and others to describe the empanelment model and scaling plans.
Speaker: Sunil Gupta
When you look at sovereign AI, the data layer is the most important – we need a centralized‑to‑edge data platform, vector DBs, and data contracts/catalogs to ensure quality and interoperability.
He introduced a less‑discussed but critical component – the data stack – and highlighted specific technical assets (Actian, Vector engine) that differentiate HCL’s approach.
Opened a new thread about data infrastructure, leading Ganesh to expand on data products and interoperability, and deepening the technical depth of the conversation beyond compute.
Speaker: Kalyan Kumar
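For readers unfamiliar with the vector databases referenced here, the following minimal Python sketch (illustrative only; the document IDs and toy embeddings are invented, and a production vector database would add approximate-nearest-neighbour indexing at scale) shows the core operation such systems optimise: ranking stored vectors by cosine similarity to a query.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query, index, k=2):
    # Rank stored (id, vector) pairs by similarity to the query, return top-k ids.
    scored = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [item[0] for item in scored[:k]]

# Toy "index" of three embedded documents (vectors are made up).
index = [
    ("doc-hindi",   [0.9, 0.1, 0.0]),
    ("doc-tamil",   [0.8, 0.2, 0.1]),
    ("doc-finance", [0.0, 0.1, 0.9]),
]
print(nearest([1.0, 0.0, 0.0], index))  # the two language docs rank first
```

The brute-force scan above is O(n) per query; dedicated vector engines exist precisely because that does not scale to billions of embeddings, which is the gap the panel's "modern data stack" remarks point at.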
95% of AI pilots never make it to production because of ROI invisibility, data‑trust/compliance friction, and the champion problem – lack of executive sponsorship.
He reframed the challenge from a purely technical issue to a business‑adoption problem, identifying three systemic barriers that explain why many AI projects stall.
Shifted the tone from infrastructure to adoption, prompting Ankit to summarise the need for collaborative teams and executive sponsorship, and influencing later remarks about user‑centric design.
Speaker: Brandon Mello
Interoperability at every layer encourages participation, offers alternatives, and enables scaling out – without it we risk a single‑vendor lock‑in and limit the ecosystem.
He introduced the concept of interoperability as a strategic principle for sovereign AI, linking technical design to ecosystem health and policy.
Guided the discussion toward open standards and collaborative models, influencing Sunil’s comments on shared compute and Kalyan’s points on data contracts.
Speaker: Ganesh Ramakrishnan
Collaboration is not just transactional; it requires empathy across domains – linguists, computer scientists, and policymakers must co‑design models, as shown by our multilingual mixture‑of‑experts architecture.
He provided a concrete example of co‑design leading to technical innovation, emphasizing interdisciplinary empathy as a moat for India’s AI development.
Reinforced the earlier interoperability theme, added depth to the discussion on building Indian‑specific models, and inspired Kalyan’s remarks on shifting from service to building IP.
Speaker: Ganesh Ramakrishnan
The skill shift needed is from hiring many coders to fewer, smarter engineers who can do systems thinking, research, and even quantum‑level compute – we must invest in fundamental science and not just short‑term coding talent.
He challenged the prevailing talent strategy, urging a long‑term, research‑oriented approach and linking it to future compute paradigms like quantum.
Prompted a broader view of talent development, influencing Ankit’s mention of upskilling 150k developers and setting the stage for discussions on education reform.
Speaker: Kalyan Kumar
AI should not become a product that consumes humans; we must keep humans in the loop, ensure provenance at every stack level, and avoid building ‘toys’ that don’t serve the masses.
He brought an ethical and purpose‑driven perspective, echoing the summit’s impact focus and warning against misaligned AI development.
Re‑centered the conversation on societal impact, leading to consensus among panelists about human‑centric AI and influencing the closing remarks about sovereignty and regulation.
Speaker: Sunil Gupta
Break AI into four domains – consumer, enterprise, government, and critical national infrastructure – each needs its own regulatory and choice framework; sovereignty is about giving users choice of platform.
He provided a structured taxonomy for sovereign AI, clarifying that a one‑size‑fits‑all approach won’t work and emphasizing choice as a core sovereign principle.
Synthesised earlier points into a clear framework, helping wrap up the discussion and guiding the final emphasis on policy, regulation, and multi‑vendor ecosystems.
Speaker: Kalyan Kumar
Overall Assessment

The discussion was driven forward by a series of pivotal insights that moved the conversation from high‑level enthusiasm to concrete challenges and solutions. Sunil Gupta’s focus on compute scarcity anchored the dialogue in infrastructure realities, while Kalyan Kumar’s emphasis on the data stack and skill transformation broadened the technical and talent dimensions. Brandon Mello shifted the lens to adoption barriers, prompting a consensus on the need for executive sponsorship. Ganesh Ramakrishnan’s calls for interoperability and interdisciplinary co‑design introduced a strategic, ecosystem‑wide perspective that tied together infrastructure, data, and talent. Together, these comments created a layered narrative: first identifying the foundational bottlenecks, then outlining the necessary technical and human infrastructure, and finally framing the ethical and policy imperatives for sovereign AI in India. This progression shaped a nuanced, actionable roadmap rather than a purely promotional dialogue.

Follow-up Questions
How can compute be treated as a shared commodity across the ecosystem to meet India’s massive GPU demand?
Addressing the shortage of GPUs is critical for scaling sovereign AI models and inference workloads for a billion‑plus population.
Speaker: Ankit Bose
What frameworks and standards are needed to ensure interoperability at every layer of the AI stack?
Interoperability enables participation, alternative solutions, and scaling across diverse models, data, and hardware, which is essential for a sovereign AI ecosystem.
Speaker: Ganesh Ramakrishnan
How should India develop data catalogs, data products, and data contracts to monetize data while respecting ownership rights?
Creating clear data ownership and monetization mechanisms is vital for building a sustainable data economy and supporting AI model training.
Speaker: Ganesh Ramakrishnan
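To make the idea of a data contract concrete, here is a hypothetical minimal sketch in Python. The field names, dataset, and licence terms are invented for illustration and come neither from the session nor from any standard; real contracts typically also cover freshness, quality SLAs, and pricing.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataContract:
    # A "data contract" pins down schema, ownership, and usage terms so a
    # dataset can be published to a catalog and consumed programmatically.
    dataset: str
    owner: str
    schema: dict   # column name -> expected Python type
    licence: str   # usage/monetisation terms

    def validate(self, record: dict) -> bool:
        # A record conforms if every contracted column is present
        # with the expected type.
        return all(
            col in record and isinstance(record[col], typ)
            for col, typ in self.schema.items()
        )

contract = DataContract(
    dataset="crop-prices-daily",          # invented example
    owner="state-agri-board",             # invented example
    schema={"district": str, "price_inr": float},
    licence="paid-commercial-use",
)
print(contract.validate({"district": "Pune", "price_inr": 2150.0}))  # True
print(contract.validate({"district": "Pune"}))                       # False
```

The point of such a contract is that consumers can check conformance mechanically before paying for or training on the data, which is the participation-and-monetisation mechanism the question above asks about.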
What research is needed to build a robust data platform (including vector databases, edge inference, and data contracts) for sovereign AI?
A strong data infrastructure underpins model quality, scalability, and distributed inference, especially as AI workloads move to the edge.
Speaker: Kalyan Kumar
How can India accelerate skill development to shift from service‑oriented talent to engineering and research talent for AI and emerging technologies like quantum computing?
Building a workforce of engineers and researchers, rather than just coders, is necessary for creating indigenous IP and long‑term AI leadership.
Speaker: Kalyan Kumar
What strategies can mitigate the three adoption barriers identified (ROI invisibility, data‑trust/compliance friction, and lack of executive sponsorship) in Indian enterprises?
Overcoming these barriers is essential to move AI pilots from proof‑of‑concept to production at scale.
Speaker: Brandon Mello
How can the government support the first cycle of AI model inferencing to enable revenue‑generating use cases?
Funding inference infrastructure is needed to bridge the gap between model training and real‑world adoption, especially for sector‑specific applications.
Speaker: Sunil Gupta
What approaches can ensure AI alignment and provenance throughout the data‑to‑model pipeline to prevent AI from becoming a mere product?
Maintaining alignment and traceability safeguards ethical use and keeps humans in control of AI outcomes.
Speaker: Ganesh Ramakrishnan
How can India increase domestic hosting of its own data (currently only ~3% is hosted locally) to strengthen sovereignty?
Local data residency reduces reliance on foreign infrastructure and supports secure, sovereign AI development.
Speaker: Sunil Gupta
What are the technical and policy steps required to build a national AI/AGI roadmap, including quantum‑AGI capabilities?
A comprehensive roadmap guides coordinated investment, research, and regulation needed for long‑term AI leadership.
Speaker: Ankit Bose
How can AI be designed for voice‑based interaction on low‑end devices (e.g., feature phones) to reach the broader Indian population?
Enabling AI access on basic devices expands inclusion and leverages the massive smartphone‑plus‑feature‑phone user base.
Speaker: Sunil Gupta
What governance models are needed to balance choice of compute providers (hyperscalers, sovereign clouds, private infra) while ensuring security and sovereignty?
Providing multiple compute options safeguards against vendor lock‑in and supports national security objectives.
Speaker: Kalyan Kumar
What mechanisms can be put in place to capture and preserve provenance at each step of the AI stack (data aggregation, curation, model performance) for transparency?
Provenance tracking enhances trust, auditability, and compliance with emerging AI regulations.
Speaker: Ganesh Ramakrishnan
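One common way to capture provenance of the kind asked about here is a tamper-evident hash chain over pipeline steps. The sketch below is an illustrative assumption, not a mechanism proposed in the session: each entry's hash covers its payload and the previous entry's hash, so altering any earlier step breaks verification.

```python
import hashlib
import json

def record_step(chain, step_name, payload):
    # Append a provenance entry whose hash covers the payload AND the
    # previous entry's hash, making later tampering detectable.
    prev = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps({"step": step_name, "payload": payload, "prev": prev},
                      sort_keys=True)
    chain.append({"step": step_name, "payload": payload, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    # Recompute every hash in order; any mismatch means the record was altered.
    prev = "genesis"
    for entry in chain:
        body = json.dumps({"step": entry["step"], "payload": entry["payload"],
                           "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

chain = []
record_step(chain, "aggregate", {"rows": 10_000})   # invented pipeline steps
record_step(chain, "curate", {"rows_kept": 9_200})
print(verify(chain))                 # True: chain is intact
chain[0]["payload"]["rows"] = 1      # simulate tampering with an early step
print(verify(chain))                 # False: tampering detected
```

Real deployments would anchor such chains in signed logs or a shared ledger, but the chaining idea is what gives auditability "at each step of the AI stack".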
How can the AI community develop and adopt interoperable data contracts that enable seamless data sharing across academia, industry, and government?
Standardized data contracts facilitate collaboration, data monetization, and compliance with data‑ownership principles.
Speaker: Ganesh Ramakrishnan
What research is required to explore alternative compute paradigms (e.g., quantum, specialized ASICs) for AI workloads beyond traditional GPUs?
Exploring new hardware could address the scaling limits of GPU‑based compute and provide a strategic advantage.
Speaker: Kalyan Kumar

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit


Session at a glance: summary, keypoints, and speakers overview

Summary

The session centered on AI sovereignty and Fujitsu’s strategy to deliver sovereign AI capabilities for nations such as India [1][2][4-8]. Sovereignty is defined as owning, controlling, and flexibly managing data and models without excessive third-party dependence [4-10]. Fujitsu highlighted its 90-year history and breakthroughs, including upcoming 2 nm ARM servers and a 20-exaflop AI supercomputer powered by Monaca [15-22][36-41]. Monaca, a Japan-made 2 nm chip, will be succeeded by a 1.4 nm version with 256-core and 128-core CPUs and an NPU for inference [42-48][55-58]. Fujitsu’s software stack is fully open source, avoiding lock-in and tuned for AI, HPC, and data-center workloads [44-53]. Its quantum roadmap aims for 250 logical qubits by 2030 and a 10,000-qubit machine within three years, placing it among the top three players [62-71]. Network offerings include a 1.6-terabit low-power switch for long-range transmission and open-RAN integration to efficiently orchestrate AI workloads [76]. The Takane LLM platform and Kozuchi AI agent, built on a proprietary security layer, enable domain-specific, tunable, secure models for defense, healthcare, and finance [77-84]. Fujitsu markets an end-to-end solution that combines compute, network, and software, allowing selective adoption and leveraging partners such as AMD, Lockheed Martin, and Supermicro [85-99]. These components aim to create a physical AI platform that runs on edge devices such as robots, drones, and medical equipment [86-95]. The session concluded with an announcement of a fireside chat featuring CDAT and Intel executives, moderated by Aman Khanna [100-104]. Overall, Fujitsu positions itself as a Japanese alternative to U.S. vendors, offering open, secure AI infrastructure across compute, quantum, networking, and software [31-33].


Keypoints


Major discussion points


Definition and importance of AI sovereignty – The speaker frames sovereignty as the ability to own, control, and flexibly manage data and AI models without heavy reliance on third-party providers, stressing its relevance for nations such as India that seek leadership in AI [1-9][10-11].


Fujitsu’s sovereign-by-design hardware portfolio – Fujitsu highlights its home-grown, cutting-edge compute assets, including 2 nm ARM-based “Monaca” servers, a planned 20-exaflop AI supercomputer, NPUs for inference, and an open-source software stack, all built in Japan to ensure security and avoid vendor lock-in [20-23][36-43][44-48][55-58].


Quantum and HPC integration as a strategic advantage – The company positions its quantum roadmap (250 logical qubits by 2030, 1,000-qubit machine launching soon) alongside high-performance computing to deliver mission-critical AI workloads, underscoring a unique combined capability [62-71][69-73].


Network and photonics solutions for low-latency, power-efficient AI delivery – Fujitsu describes a 1.6 Tbps (future 3.2 Tbps) optical switch, long-reach low-power transmission, and open-RAN orchestration that together enable secure, high-speed movement of AI workloads across data-center and edge environments [76].


End-to-end AI software platform and ecosystem partnerships – The “Takane” large-language-model platform and “Kozuchi” AI-agent stack provide domain-specific, fine-tunable, and secure models, while Fujitsu stresses a total-solution approach, partnering with firms such as AMD, Lockheed Martin, and Supermicro to deliver compute, network, and application layers as a unified offering [78-86][92-99].


Overall purpose / goal


The discussion is a strategic presentation aimed at positioning Fujitsu as a provider of a complete, sovereign AI ecosystem, encompassing hardware, quantum/HPC, networking, and software, that enables governments and enterprises (especially in markets like India and Europe) to retain full control over their data and AI workloads while avoiding dependence on foreign cloud or AI vendors.


Tone of the discussion


The tone is consistently promotional and confident, using technical detail to convey credibility and optimism about Fujitsu’s capabilities. It remains upbeat throughout, with no noticeable shift to a more neutral or critical stance.


Speakers

Speaker 1


Role/Title: Presenter (Fujitsu executive delivering a keynote on AI sovereignty)


Area of Expertise: Artificial intelligence, sovereign AI platforms, high-performance computing, quantum computing, networking, Fujitsu hardware and software solutions


Speaker 2


Role/Title: Moderator / Session host (introducing the upcoming fireside chat)


Area of Expertise: Event moderation / facilitation [S1][S2][S3]


Additional speakers:


– None identified beyond the two speakers listed above.


Full session report: comprehensive analysis and detailed insights

Speaker 1 opened the session by linking the theme of AI sovereignty – also raised in the previous plenary – to the strategic ambitions of nations such as India that wish to become leaders in artificial intelligence. He framed sovereignty not as a political slogan but as a technical requirement: organisations must own and control their data while retaining the flexibility to manage, tune and deploy AI models without excessive reliance on external providers [1-10][11].


He then highlighted Fujitsu’s 90-year heritage, from early DRAM and mainframe work alongside IBM to recent breakthroughs such as the world’s first two-nanometre ARM-based servers and a leading quantum-technology roadmap [14-23][20-22][21]. In 2021 Fujitsu launched a U.S. brand that aggregates all of its solutions for customers, reinforcing the company’s global reach [34-35].


The vision for a sovereign AI platform rests on three pillars – software, compute and networking [26-28].


Compute & Quantum – Fujitsu noted that it operated the fastest supercomputer in the world for five consecutive years, a record that stood until two years ago [36-38]. The upcoming hardware centrepiece is the Monaca two-nanometre ARM-based chip, which has confidential computing built at the hardware level to drive security [38-41]. Within two years the company plans to assemble a 20-exaflop AI supercomputer around Monaca, followed by a 1.4-nanometre processor family that will include a 256-core CPU, a 128-core CPU and an integrated NPU specialised for AI inference [42-48]. The stack is described as completely open with “nothing locked in,” allowing customers to fine-tune models without vendor lock-in [44-48][44-46].


On the quantum side, Fujitsu positions itself among the world’s top three quantum players. Its roadmap targets 250 logical qubits by 2030, with a 1,000-qubit machine scheduled to go live in Kawasaki next month and a 10,000-qubit system expected within three years. Integration of quantum processors with high-performance computing is intended to support mission-critical AI workloads that demand both speed and precision [62-66][67-71][62-71].


Networking – The company is developing a 1.6 Tbps photonic switch (expandable to 3.2 Tbps) that provides long-range, low-latency, power-efficient transmission for data-centre interconnects [70-71]. This hardware is paired with an open-RAN orchestration framework that can move AI workloads efficiently across optical and wireless links, extending sovereign AI capabilities to edge environments [72-73].


AI Software Stack – Fujitsu’s domain-specific platforms comprise the Takane large-language-model (LLM) platform and the Kozuchi AI-agent stack, both built on Fujitsu’s own security layer. They target verticals such as defence, nuclear energy, healthcare, finance, government and manufacturing [84-86]. Because the underlying software stack is fully open, customers can fine-tune models and use third-party tools without becoming dependent on Fujitsu-only solutions [44-48][44-46].


Fujitsu emphasises that it does not sell isolated components but an end-to-end solution that bundles compute, networking and application layers. It has already forged partnerships with major OEMs and system integrators-including AMD, Lockheed Martin, Supermicro and various robotics manufacturers-to deliver a cohesive physical-AI platform across a range of use-cases [85-99][96-98].


Looking ahead, Fujitsu envisions a “Kozuchi” physical operating system that embeds brain-inspired intelligence into robots and other edge devices. Research on memory-retention aims to let devices such as drones, medical equipment and smartphones retain state and operate autonomously, uniting compute, network and AI software stacks at the edge [86-89][90-94].


Speaker 2 concluded the segment by announcing the next agenda item – a fireside chat featuring Mr Vivek Kaneja (Executive Director, CDAT), Mr Nitin Bajaj (Director, Sales and Marketing, Intel) and moderated by Mr Aman Khanna (Vice President, Asia Group) – and asked both speakers and the audience to clear the stage [100-104].


Overall, the presentation positioned Fujitsu as a Japanese alternative to U.S. AI vendors, offering a fully sovereign, open and integrated AI infrastructure that spans cutting-edge compute, quantum acceleration, high-capacity networking and domain-specific software, all designed to meet policy-driven demand for data ownership, security and flexibility [S7][S9][S21].


Session transcript: complete transcript of the session
Speaker 1

AI commerce. What I’m going to talk about is something that was discussed in the plenary session yesterday as well about sovereignty. And I believe something like sovereignty is very, very important for countries like India, which are trying to eke out a path in leading AI and being dominant in AI. Now, what is sovereignty, first of all? For us, it is being flexible. And being secure, right? So you want ownership of your data. You want to control that data. But you also want to have flexibility to manage that data, create models that meet your needs, that doesn’t have to be reliant on third party overwhelmingly. And you can modify and tune that data, right? Modify and tune those models.

So Fujitsu is on a path to – we’ve always been an innovative company, and we have a long history, and I’ll talk briefly. But how do we make that sovereign? And that’s what I’m going to talk about today. So Fujitsu, some of you might not know it. I mean, we have a 90-year-old history, right? So we are a pretty old company. We have our roots all the way in technology. And if you look at some of the things that are demonstrated here, one megabit DRAM, for example, right? Of course, we were one of the pioneers of mainframe business along with IBM. In recent past, we’ve announced, which we will be shipping very shortly, the world’s first two nanometer servers, ARM-based servers.

We announced for quantum, if you are not aware, which I will be talking about shortly as well, the world’s leading quantum roadmap that we are going to deliver. Same on networks. And our Uvance brand that we created in 2021 effectively brings all of Fujitsu’s solutions together to be consumed by our customers. Now, how does this work in the context of AI? And why is this relevant in the context of AI? That’s what I’m going to talk about. So, to effectively drive artificial intelligence, you need three key components, right? You obviously need software, you need compute, and you need networks, right? If you don’t have those three, you can’t really build an AI platform that will suit your enterprise needs.

And our focus on sovereignty here is really being independent in all of these three areas and giving customers a choice. We are a Japanese company. Our technology is made in Japan, and that’s where we find ourselves at a very interesting point, because we are a choice, an alternative to a lot of American companies. So if you’re looking for leading-edge computing technology, leading-edge quantum technology, leading-edge network technology, leading-edge AI software technology, agentic technology, and end-user applications on which you can build an AI platform in areas such as defense, government, healthcare, manufacturing and finance, where you do care about privacy, this becomes very, very important. Now how do we actually drive that?

Some of the speakers talked about commerce, which is big, but at the end of the day, if you don’t have a platform that helps you deliver that, you’re never going to be sovereign, you’re never going to control the AI business. Fujitsu has a couple of areas that we are focused on, as I mentioned, computing. If you think about CPUs, you think about AMD, you think about Intel. Until two years ago, we had the fastest supercomputer in the world, for five years running. And we announced that we will be building a 20-exaflop AI supercomputer in about two years from now, which will be driving pretty much all AI applications, AI workloads. This will be powered by our Fujitsu Monaca chip, which is a two-nanometer chip.

It’s built in Japan, and it is completely ARM-based, highly power-efficient, focused on data centers to reduce power consumption. Okay, and it has confidential computing built in at the hardware level to drive security. Now, the servers come out in about two months from now, the test servers. It’s ARM-based. The follow-up to this is a 1.4-nanometer chip, which will also be the world’s first at 1.4 nanometers. It has two versions, a 256-core CPU and a 128-core CPU plus an NPU, to drive exactly what India needs: sovereign AI models focused on inferencing. And this is something that I believe will drive a lot of value in countries like India as well as Europe. I’m not going to go into this in detail, but this stack is a completely open software stack.

I just want you to remember, it’s a completely open software stack. There’s nothing locked in. You don’t get locked into a Fujitsu stack. All the software that you see here is completely open. It’s focused on AI, it’s focused on data centers, and it’s focused on HPC. This is what you need for AI, right? In all the key areas there is a lot of open-source software that we have fine-tuned to work on this processor. This can help you drive your AI workloads today on the Monaca servers. As I mentioned, what’s coming? There are two versions, a 256-core CPU and a 128-core CPU with the NPU on it. The NPU is focused on AI inferencing. You will see a lot of work going into inferencing moving forward.

And especially when you talk about sovereign AI, this will become extremely important, especially with small language models and medium language models. So you can contain those in a private or a semi-private environment that you can choose. Obviously, if you want large language models, you can choose what runs on GPUs. And you can obviously choose the Monaca-GPU hybrid architecture as well. Now, for those of you who might not be aware, we are very highly invested in quantum. We need quantum in Japan. I would say we are probably one of the top three players in quantum worldwide. We have announced a 250-logical-qubit roadmap by the end of 2030, which is ahead of any other company in the world that I know of.

We make our own control systems. We are going to focus on driving the cooling systems as well. And this is going to become extremely important as you go ahead: quantum plus HPC together, driving mission-critical AI workloads. The 10,000-qubit machine will go live in about three years from now. Next month, the 1,000-qubit machine goes live in Kawasaki, Japan. As I mentioned, you would have HPC and quantum working together to drive AI workloads. This is how computing will be consumed moving forward. And the software stack that we are working on will make it transparent for users to consume compute, and the workload can be optimized for whichever computer you want. Now, I’ll briefly talk about the networks, because that’s the other part of the puzzle.

And finally, I’ll talk about the software for AI. Photonics and wireless: we are probably one of only two companies that does both. And we are doing a 1.6-terabit switch that is highly power-efficient and drives about a thousand kilometers of long-range transmission, with low latency and low power consumption. That’s the beauty of the switch, right? And we will go on to 3.2 terabits. This has very strong implications for the data centers that are being built in India, as those will be highly power-hungry, and you would need to connect them through optical fibers, and the same with the wireless mobile systems. Okay, now what we do is we also connect with open RAN and the network orchestration stack to bring the AI workloads and move them in a highly efficient manner. This is the third part that really brings everything together, the AI software stack.

Fujitsu, as I mentioned at the very beginning, is focused on sovereignty. When we talk about sovereign AI, it’s got to be domain-specific, something for defense. If you’re making nuclear plants or submarines, or working in healthcare, this is not the data you want to put on a public cloud. You want to define and build these domain-specific models. Second, you need to have flexibility. You, as a company, should be able to fine-tune these models to your own benefit, to your own needs. Third, they need to be highly secure. These are the three key areas that we are focused on, using what we call Takane, our large language model platform, as well as our agentic AI model from Kozuchi, powered by the security platform that we have built within our own research teams.

It’s a complete platform that you can use to build your own applications. Again, I won’t go into details on this, but it is a platform that also uses third-party tools, and you see on the extreme right where we have government, manufacturing, healthcare and finance applications. So Fujitsu has a fairly large business in services, which brings all of this together. So we are not just selling you pieces of technology; we are offering you a total solution here, from compute all the way to the networks and the application stack together. And this is our vision, which we want to continue to build on and continue to bring to our end customers as well as users. Now, where are we headed? We see all of this converging in the physical AI platform space, and what we are building is the Kozuchi physical OS, which will have brain-like intelligence for robots. What that means is that robots tend to forget, and we are working on intelligence research so that robots can continue to remember.

But then this technology, the compute, the networks, as well as the AI platform stack, comes together in edge devices. Robots are one example, but even drones or medical devices or the healthcare apps on your iPhones. That’s where it will all come together. And that’s the world we are aiming for. That will bring the AI agentic platform together with the security platform into a complete platform that can be consumed by our end users, our companies. And you can choose to play in the part that is comfortable for you. And we are obviously going to partner with a lot of different companies on this. So, as I mentioned: the software, compute and networks, the three pillars.

And we are going to be able to do that. In October last year, our CEO Tokita and Jensen were on stage together announcing a huge partnership on physical AI, where we’re partnering with different robotics manufacturers. So it’s working with AMD, working in defense with Lockheed Martin, Supermicro. So this is something

Speaker 2

Thank you. Thank you so much. For the next session, we have a fireside chat between Mr. Vivek Kaneja, Executive Director, CDAT, Mr. Nitin Bajaj, Director, Sales and Marketing, Intel, and the session will be moderated by Mr. Aman Khanna, Vice President of the Asia Group. May I request all the speakers to join us on the stage, please? I also request everybody to please clear the pathway. May I request the audience to please clear the pathway?

Related Resources: knowledge base sources related to the discussion topics (6)
Factual Notes: claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Speaker 1 linked AI sovereignty to the strategic ambitions of nations such as India that wish to become leaders in artificial intelligence.”

Vivek Mahajan, CTO of Fujitsu India, explicitly highlighted the importance of AI sovereignty for India during the summit, confirming the report’s statement [S7].

Confirmed (high)

“Fujitsu’s 90‑year heritage and recent breakthroughs such as the world’s first two‑nanometre ARM‑based servers and a leading quantum‑technology roadmap.”

The keynote notes that Fujitsu positions itself as a Japanese alternative with 90 years of innovation, including world-first 2-nm ARM-based processors and leading quantum computing capabilities, supporting the claim [S9].

Correction (medium)

“Fujitsu operated the fastest supercomputer in the world for five consecutive years, a record that stood until two years ago.”

The TOP500 history shows that Japan’s Fugaku was the world’s fastest for the two years prior to Frontier’s takeover, but it does not confirm a five-year streak; the record lasted until two years ago, not necessarily five years [S36].

Additional Context (medium)

“Sovereign AI requires organisations to own and control their data while retaining flexibility to manage, tune and deploy AI models without excessive reliance on external providers.”

Discussion sources describe AI sovereignty as encompassing data control, legal frameworks, encryption-key ownership and governance, providing broader context to the report’s definition [S28] and [S29] and [S30].

External Sources (36)
S1
Keynote: 2030 – The Rise of an AI Storytelling Civilization | India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S2
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S3
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — Vivek Mahajan defines AI sovereignty as having ownership and control over data while maintaining flexibility to manage, …
S8
Panel #3: « Gouverner les données : entre souveraineté, éthique et sécurité à l’ère de l’interconnexion » [“Governing data: between sovereignty, ethics and security in the era of interconnection”] — Drudeisha Madhub: Thank you very much for inviting me to the OIF. It has really been a lovely workshop since yesterday, it’s a beautiful …
S9
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — This comment reframes AI sovereignty from a purely nationalistic concept to a practical business and security imperative…
S10
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Professor Ganesh argues that India has barely scratched the surface of AI potential and can achieve significant breakthr…
S11
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — India’s approach, according to the speaker, centers on three pillars of sovereignty: data sovereignty, infrastructure so…
S12
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — 8 year old prodigy: Sharing is learning with the rest of the world. One, an AI that is independent. From large global A…
S13
Multistakeholder Partnerships for Thriving AI Ecosystems — to contextualize it’s not just true for help it’s true for any domain you look at and and yes data is fragmented siloed …
S14
What policy levers can bridge the AI divide? — Ebtesam Almazrouei: Good afternoon, everyone. It’s our pleasure to have you here today with us again and discussing a ve…
S15
Nvidia builds ‘Israel-1’ AI supercomputer, claims to have ended digital divide — The world is at a ‘tipping point of a new computing era’, Nvidia Group has noted at a recent conference at the Computex …
S16
Researchers develop high-frequency, low-power switch to revolutionise 6G communications — Researchers at UAB, the University of Texas at Austin and the University of Lille developed a telecommunications switch that op…
S17
WS #155 Digital Leap- Enhancing Connectivity in the Offline World — 2. Open RAN technology for interoperability and cost reduction Maria Beebe provided a detailed list of critical skill g…
S18
DC-CIV &amp; DC-NN: From Internet Openness to AI Openness — Anita Gurumurthy: You can hear me, I hope. Yeah. All right. So, thank you very much. I just heard that from Renata, an…
S19
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Marco Zennaro: Sure, sure. Definitely. Thank you very much. So let me introduce TinyML first. So TinyML is about running…
S20
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S21
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — Vivek Mahajan defines AI sovereignty as having ownership and control over data while maintaining flexibility to manage, …
S22
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — This comment reframes AI sovereignty from a purely nationalistic concept to a practical business and security imperative…
S23
Panel Discussion Data Sovereignty India AI Impact Summit — This comment reframes the entire sovereignty debate by distinguishing between isolation and strategic control. It moves …
S24
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Takahito Tokita Fujitsu — Fujitsu’s advanced computing capabilities and quantum technology development K-Computer and Fugaku supercomputers, 1,00…
S25
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Georges Olivier Reymond emphasizes that quantum computing will not replace AI or CPUs, but rather complement them. He de…
S26
Multistakeholder Partnerships for Thriving AI Ecosystems — Academic institutions provide evaluation capabilities and technical expertise that build trust in AI systems. Internatio…
S27
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — [translated from Hindi] If a machine is given only the goal of making paperclips, it will use up all the world’s resources for that single task …
S28
Agents of Change AI for Government Services &amp; Climate Resilience — Governments can implement strategic sovereignty through data control and governance policies while pursuing longer-term …
S29
Discussion Report: Sovereign AI in Defence and National Security — Faisal outlines six critical dimensions of AI sovereignty that countries must consider: control over data (the fuel of A…
S30
AI as critical infrastructure for continuity in public services — The discussion revealed that data sovereignty encompasses more than simple data localization. As Pramod noted, true sove…
S31
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — This discussion features Takahito Tokita, President and CEO of Fujitsu, presenting the company’s vision for artificial i…
S32
Fujitsu launches AI scanner to assess tuna fat — Fujitsu hasdevelopeda new AI-powered inspection device that determines the fat content of frozen albacore tuna with unpr…
S33
Managing Change in Media Space: Social Media, Information Disorder, and Voting Dynamics  — Afia Asantewaa Asare-Kyei:You have three questions, so I’m going to take them all at once. So we have, for the gentleman…
S34
https://app.faicon.ai/ai-impact-summit-2026/how-the-eus-gpai-code-shapes-safe-and-trustworthy-ai-governance-india-ai-impact-summit-2026 — sure and first of all I I’m going to try not to repeat Aparna’s view because I basically agree with everything you just …
S35
https://app.faicon.ai/ai-impact-summit-2026/aligning-ai-governance-across-the-tech-stack-iti-c-suite-panel — We do. And we have to go all of the way to give them the possibility to figure this out for themselves, for their applic…
S36
US supercomputer ranks fastest in the world — The US Department of Energy’s Oak Ridge National Laboratory (ORNL)announcedthat its Frontier supercomputer ranked as the…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
11 arguments · 157 words per minute · 1953 words · 741 seconds
Argument 1
Sovereignty requires flexibility and security, i.e., ownership and control of data (Speaker 1)
EXPLANATION
The speaker defines AI sovereignty as the ability to flexibly manage data while keeping it secure. This means that organisations must own their data, control how it is used, and be able to modify models without dependence on external providers.
EVIDENCE
The speaker states that sovereignty means being flexible [4] and secure [5], emphasizing ownership of data [6] and control over that data [7]. He further explains the need for flexibility to manage data, create independent models, and modify or tune both data and models as required [8-10].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mahajan’s keynote defines AI sovereignty as ownership and control of data combined with flexibility and security, matching the argument [S7][S9].
MAJOR DISCUSSION POINT
Definition of AI sovereignty
Argument 2
Sovereignty is crucial for countries like India to lead and dominate in AI (Speaker 1)
EXPLANATION
The speaker argues that AI sovereignty is especially important for emerging economies such as India, which seek to become leaders in AI development. Without sovereign capabilities, these nations risk dependence on foreign technology.
EVIDENCE
He explicitly says that sovereignty is “very, very important for countries like India, which are trying to eke out a path in leading AI and being dominant in AI” [2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of AI sovereignty for India is highlighted in Mahajan’s remarks and further discussed in analyses of India’s AI strategy [S7][S9][S10][S11].
MAJOR DISCUSSION POINT
Strategic importance for India
Argument 3
Development of 2 nm ARM‑based servers with the Monaca chip, featuring hardware‑level confidential computing (Speaker 1)
EXPLANATION
Fujitsu is introducing two‑nanometre ARM‑based servers powered by its in‑house Monaca chip. The hardware incorporates confidential computing capabilities to ensure data security at the silicon level.
EVIDENCE
The company announced the world’s first two-nanometre ARM-based servers [20] and later described that the upcoming servers will be powered by the Fujitsu Monaca chip, a two-nanometre processor built in Japan, which includes hardware-level confidential computing for security [38-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mahajan announced Fujitsu’s 2-nm ARM-based servers using the Monaca chip with hardware-level confidential computing [S7][S9].
MAJOR DISCUSSION POINT
Secure next‑gen server hardware
Argument 4
Plan to build a 20 exaflop AI supercomputer and future 1.4 nm CPUs with integrated NPU for AI inferencing (Speaker 1)
EXPLANATION
Fujitsu intends to construct a 20‑exaflop AI supercomputer within two years and later release a 1.4‑nanometre processor family that combines high‑core CPUs with a dedicated NPU for inference workloads. These systems are positioned as sovereign AI platforms for markets such as India and Europe.
EVIDENCE
The speaker notes that Fujitsu will build a 20-exaflop AI supercomputer in about two years [37] and that a follow-up 1.4-nm processor will feature 256-core and 128-core CPU variants plus an NPU designed for AI inferencing, targeting sovereign AI models for India [42-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The roadmap includes a 20-exaflop AI supercomputer and a future 1.4-nm processor family with integrated NPU for inference [S7][S9].
MAJOR DISCUSSION POINT
Roadmap for sovereign AI compute
Argument 5
Fujitsu’s quantum roadmap (250 logical qubits by 2030, 10 k‑qubit machine in three years) to complement mission‑critical AI workloads (Speaker 1)
EXPLANATION
Fujitsu positions itself as a leading quantum player, outlining a roadmap to deliver 250 logical qubits by 2030 and a 10,000‑qubit machine within three years. The quantum systems are intended to work alongside high‑performance computing for critical AI tasks.
EVIDENCE
The speaker claims Fujitsu is among the top three global quantum vendors [64] and has announced a 250-logical-qubit roadmap by 2030 [65]. He also mentions a 10,000-qubit machine slated for live operation in three years [70] and a 1,000-qubit machine launching next month in Kawasaki [71], emphasizing the integration of quantum with HPC for mission-critical AI workloads [69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fujitsu’s quantum roadmap targeting 250 logical qubits by 2030 and a 10 k-qubit system is described in the keynote [S7][S9].
MAJOR DISCUSSION POINT
Quantum‑HPC integration for AI
Argument 6
Creation of high‑capacity photonic switches (1.6 Tbps, scaling to 3.2 Tbps) for low‑latency, power‑efficient data‑center connectivity (Speaker 1)
EXPLANATION
Fujitsu is developing a photonic switch capable of 1.6 Tbps throughput, with plans to double capacity to 3.2 Tbps. The switch is designed for long‑range, low‑latency, low‑power transmission, targeting power‑hungry data‑center deployments such as those in India.
EVIDENCE
In a detailed description, the speaker explains that Fujitsu is building a 1.6-terabit switch that is highly power-efficient, supports long-range (about a thousand-kilometre) transmission with low latency and low power consumption, and that a 3.2-terabit version will follow, noting its relevance for Indian data-centres [76].
MAJOR DISCUSSION POINT
Advanced photonic networking hardware
Argument 7
Use of open RAN and network orchestration to move AI workloads efficiently across optical and wireless links (Speaker 1)
EXPLANATION
Fujitsu leverages open RAN technology together with a network orchestration stack to transport AI workloads across both optical fiber and wireless networks in an efficient manner. This approach aims to reduce latency and improve flexibility for AI services.
EVIDENCE
The speaker states that Fujitsu connects with open RAN and a network orchestration stack to move AI workloads in a highly efficient manner across optical and wireless links [76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open RAN technology for interoperable, cost-effective AI workload transport is discussed in the Open RAN overview [S17].
MAJOR DISCUSSION POINT
Open RAN for AI workload transport
Argument 8
Fully open, lock‑in‑free software stack fine‑tuned for AI, HPC, and data‑center use (Speaker 1)
EXPLANATION
Fujitsu offers a completely open software stack with no vendor lock‑in, optimized for AI, high‑performance computing, and data‑center environments. The stack incorporates a large amount of open‑source software that has been specifically tuned for Fujitsu’s hardware.
EVIDENCE
The speaker emphasizes that the stack is “completely open” with nothing locked in [44-46], reiterates the openness [48], and notes that many open-source components have been fine-tuned for the platform [52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mahajan stresses that Fujitsu’s software stack is fully open with no vendor lock-in, tuned for AI and data-center workloads [S7][S9].
MAJOR DISCUSSION POINT
Open, vendor‑neutral software ecosystem
Argument 9
Takane large‑language‑model platform and Kozuchi AI‑agent model provide secure, customizable, domain‑specific AI solutions (Speaker 1)
EXPLANATION
Fujitsu’s Takane LLM platform and Kozuchi AI‑agent model are presented as tools for building domain‑specific AI applications (e.g., defense, healthcare) that can be fine‑tuned and kept secure within a sovereign environment. These solutions are powered by Fujitsu’s own security platform.
EVIDENCE
The speaker outlines three key requirements (domain specificity, flexibility, and security) [78-84] and describes the Takane LLM platform together with the Kozuchi AI-agent model, both built on Fujitsu’s internal security platform [84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for secure, domain-specific AI models built on Fujitsu’s internal security platform is outlined in the keynote and reinforced by discussions on sector-specific AI security [S7][S19].
MAJOR DISCUSSION POINT
Domain‑specific, secure AI platforms
Argument 10
Fujitsu offers a total solution covering compute, networking, and applications, partnering with OEMs such as AMD, Lockheed Martin, and Supermicro (Speaker 1)
EXPLANATION
Fujitsu positions itself as a one‑stop provider that integrates compute hardware, networking equipment, and AI software, and it collaborates with major OEMs like AMD, Lockheed Martin, and Supermicro to deliver end‑to‑end solutions. The company stresses that it sells complete solutions rather than isolated components.
EVIDENCE
The speaker references a partnership announced in October with robotics manufacturers and mentions collaborations with AMD, Lockheed Martin, and Supermicro for physical AI initiatives [86-99].
MAJOR DISCUSSION POINT
End‑to‑end ecosystem and strategic partnerships
Argument 11
Vision of a “Kozuchi” physical OS that unites compute, network, and AI stack for edge devices like robots, drones, and medical equipment (Speaker 1)
EXPLANATION
Fujitsu envisions a physical operating system called Kozuchi that integrates compute, networking, and AI capabilities to run on edge devices such as robots, drones, and medical hardware. This platform aims to provide persistent intelligence (e.g., memory for robots) across diverse edge applications.
EVIDENCE
The speaker describes Kozuchi as a physical OS with brain-like intelligence for robots, noting that robots tend to forget and that Fujitsu is researching ways to retain memory [86]. He then extends the vision to edge devices including drones, medical devices, and smartphones [87-90].
MAJOR DISCUSSION POINT
Edge‑focused physical AI operating system
Speaker 2
1 argument · 102 words per minute · 78 words · 45 seconds
Argument 1
Announcement of the upcoming fireside chat, with a request for the panellists to join the stage and the audience to clear the pathway (Speaker 2)
EXPLANATION
The moderator introduces the next session, a fireside chat featuring executives from CDAT and Intel, asks the panellists to join the stage, and requests the audience to clear the pathway. This serves as a logistical transition between program segments.
EVIDENCE
The speaker thanks the audience, announces the fireside chat with Mr. Vivek Kaneja, Mr. Nitin Bajaj, and moderator Mr. Aman Khanna, requests the speakers to join the stage, and asks the audience to clear the pathway [100-104].
MAJOR DISCUSSION POINT
Session transition logistics
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Overall Assessment

The transcript contains a substantive presentation by Speaker 1 on AI sovereignty, hardware, software, quantum and networking solutions, while Speaker 2 only performs a procedural hand‑over to the next session. There is no overlap in substantive arguments or viewpoints between the two speakers.

Very low substantive consensus; the only common ground is the procedural nature of the session transition, which has limited relevance to the thematic topics.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The two speakers addressed completely different aspects of the event. Speaker 1 delivered an extensive technical and strategic presentation on AI sovereignty, hardware, software, quantum and networking solutions ([1-99]), while Speaker 2 performed a brief logistical hand-over, announcing the next fireside chat and asking participants to clear the stage ([100-104]). No overlapping substantive claims were made, resulting in no observable disagreement or partial agreement between them.

Minimal – the speakers operated in separate domains (technical presentation vs. session moderation), so there is no conflict affecting the discussion of AI sovereignty or related topics.

Takeaways
Key takeaways
AI sovereignty, defined as flexibility and security (ownership and control of data), is essential for nations like India seeking leadership in AI.
Fujitsu is positioning itself as a sovereign AI provider through independent compute, networking, and software capabilities.
Hardware initiatives include 2 nm ARM-based Monaca servers with built-in confidential computing, a planned 20 exaflop AI supercomputer, and future 1.4 nm CPUs with integrated NPUs for AI inferencing.
Fujitsu’s quantum roadmap (250 logical qubits by 2030, a 1,000-qubit machine launching soon, and a 10 k-qubit system in three years) is intended to complement HPC for mission-critical AI workloads.
Networking solutions feature high-capacity photonic switches (1.6 Tbps scaling to 3.2 Tbps) and open-RAN orchestration to move AI workloads efficiently across optical and wireless links.
The software stack is fully open and lock-in-free, with domain-specific AI platforms such as the Takane LLM platform and the Kozuchi AI-agent model, emphasizing security and customizability.
Fujitsu offers an end-to-end solution covering compute, network, and applications, and is partnering with OEMs like AMD, Lockheed Martin, and Supermicro to deliver integrated physical AI platforms.
The future vision includes a Kozuchi physical OS that unites compute, networking, and AI stacks for edge devices (robots, drones, medical equipment).
The session concluded with a logistical announcement for an upcoming fireside chat featuring Intel and CDAT executives.
Resolutions and action items
None identified
Unresolved issues
Detailed roadmap for how customers can adopt and migrate to Fujitsu’s sovereign AI stack (e.g., migration steps, timelines, support models).
Specifics on partnership models, revenue sharing, and integration responsibilities with OEMs and ecosystem partners.
Clarification on the governance and certification processes for domain-specific, high-security AI models (defense, healthcare, nuclear, etc.).
Performance benchmarks and cost comparisons of Fujitsu’s 2 nm and upcoming 1.4 nm hardware versus competing solutions.
Implementation details for the open-RAN orchestration layer and how it will interoperate with existing carrier networks.
Suggested compromises
None identified
Thought Provoking Comments
For us, sovereignty is being flexible and being secure – you want ownership of your data, you want to control that data, but you also want the flexibility to manage that data, create models that meet your needs without being overly reliant on third‑party providers.
This reframes ‘sovereignty’ from a political buzz‑word to a concrete technical requirement (flexibility + security), setting the agenda for the whole talk and linking it directly to AI deployment challenges faced by nations like India.
It establishes the central theme of the discussion, prompting the rest of the presentation to be organized around how Fujitsu’s hardware, software and network offerings can deliver that flexibility and security. It also signals to the audience that the talk will move beyond marketing into concrete capabilities.
Speaker: Speaker 1
We announced that we will be building a 20‑exaflop AI supercomputer in about two years, powered by our Fujitsu Monaca chip – a two‑nanometer, ARM‑based processor with confidential computing built in at the hardware level.
Introducing a concrete, cutting‑edge hardware roadmap (2 nm ARM chip with built‑in confidential computing) provides a tangible illustration of how Fujitsu intends to deliver sovereign AI infrastructure, differentiating itself from traditional x86 vendors.
This shifts the conversation from abstract notions of sovereignty to a specific, high‑impact technology claim, leading the audience to consider the feasibility of national‑scale AI compute that remains under domestic control.
Speaker: Speaker 1
We have announced a 250 logical‑qubit quantum roadmap by the end of 2030, with a 1 000‑qubit machine going live next month in Kawasaki and a 10 000‑qubit machine in three years – positioning quantum together with HPC to drive mission‑critical AI workloads.
Bringing quantum computing into the sovereignty narrative adds a layer of future‑proofing and strategic depth, suggesting that true AI independence will eventually rely on quantum‑enhanced processing.
This introduces a new dimension (quantum) to the discussion, prompting listeners to think about long‑term technology trajectories and how Fujitsu’s integrated roadmap (quantum + HPC) could become a unique selling point for sovereign AI strategies.
Speaker: Speaker 1
Our network solution is a 1.6 Tbps switch that is highly power‑efficient, supports long‑range, low‑latency transmission, and integrates with open‑RAN to move AI workloads efficiently across data‑center and edge environments.
Highlighting a next‑generation, high‑capacity, low‑latency network underscores that sovereignty is not just about compute but also about the data‑movement fabric, especially for latency‑sensitive applications like defense and healthcare.
This expands the scope of the discussion to include networking, reinforcing the three‑pillar (compute, software, network) framework and showing how Fujitsu’s end‑to‑end stack can be deployed in sectors that demand strict privacy and performance.
Speaker: Speaker 1
We are delivering a completely open software stack – nothing is locked in – and we provide domain‑specific AI platforms (e.g., Takane LLM platform, Kozuchi AI agent tech) that can be fine‑tuned for defense, nuclear, healthcare, finance, etc., while remaining secure.
Emphasizing openness and domain‑specificity directly addresses concerns about vendor lock‑in and data leakage, positioning Fujitsu’s software as both flexible and secure, which is central to the sovereignty argument.
This comment transitions the talk from hardware to software, reinforcing the earlier claim of flexibility. It also invites the audience to envision concrete use‑cases where sovereign AI can be applied, deepening the practical relevance of the presentation.
Speaker: Speaker 1
We are building Kozuchi, a physical OS that embeds brain‑inspired intelligence into robots, enabling them to remember and operate autonomously at the edge – a convergence of compute, network, and AI software for devices like drones, medical equipment, and smartphones.
This forward‑looking vision ties together all three pillars into a tangible edge‑device scenario, illustrating how sovereign AI can extend beyond data‑centers into everyday devices while maintaining security and control.
It serves as a culminating turning point, moving the discussion from enterprise‑scale infrastructure to consumer‑level applications, thereby broadening the audience’s perception of the potential impact of sovereign AI.
Speaker: Speaker 1
Overall Assessment

The discussion was driven by a series of strategically placed, high‑impact statements from Speaker 1 that progressively broadened the concept of AI sovereignty. Starting with a clear definition of sovereignty (flexibility + security), the speaker introduced concrete hardware (2 nm ARM chips, exascale supercomputer), ambitious quantum roadmaps, advanced networking, and an open, domain‑specific software stack. Each comment acted as a turning point, shifting focus from abstract policy to tangible technology, then expanding the scope from compute to quantum, network, and finally edge devices. This layered approach not only reinforced Fujitsu’s positioning as a comprehensive, non‑US alternative for sovereign AI but also deepened the conversation by linking each technological pillar to real‑world use‑cases (defense, healthcare, finance, robotics). The cumulative effect was to transform a single‑speaker monologue into a compelling narrative that framed sovereignty as an achievable, end‑to‑end technical solution rather than a purely political aspiration.

Follow-up Questions
How can sovereign AI models be developed and deployed for domain‑specific use cases such as defense, nuclear plants, healthcare, and finance?
Speaker 1 emphasized the need for domain‑specific, secure, and fine‑tuned models, calling for further investigation into methodologies, tooling, and compliance requirements.
Speaker: Speaker 1
What mechanisms are needed to ensure flexibility for customers to fine‑tune AI models while maintaining security and data ownership?
Speaker 1 stressed flexibility as a pillar of sovereignty, suggesting research into model‑tuning workflows, access controls, and confidential computing techniques.
Speaker: Speaker 1
How can the open software stack be kept truly open and interoperable across different hardware platforms (Monaca CPUs, GPUs, NPUs, quantum systems)?
Speaker 1 mentioned a completely open software stack but did not detail standards or governance, implying a need for further study of open‑source licensing, compatibility layers, and integration frameworks.
Speaker: Speaker 1
What are the performance, power‑efficiency, and security implications of the upcoming 1.6 Tbps and 3.2 Tbps photonic switches for long‑range, low‑latency data‑center interconnects?
Speaker 1 introduced high‑capacity photonic switches but provided limited technical data, indicating a research gap on metrics, deployment scenarios, and cost‑benefit analysis.
Speaker: Speaker 1
How will open‑RAN orchestration be integrated with AI workloads to achieve efficient, secure, and low‑latency data movement?
Speaker 1 referenced extensive use of open‑RAN for AI workload transport without describing the orchestration architecture, pointing to a need for deeper investigation.
Speaker: Speaker 1
What are the practical pathways for combining quantum computing (e.g., 1 000‑qubit, 10 000‑qubit machines) with HPC to accelerate mission‑critical AI workloads?
Speaker 1 outlined a quantum roadmap but did not explain integration strategies, workload partitioning, or software toolchains, suggesting further research.
Speaker: Speaker 1
What are the requirements and challenges for deploying the sovereign AI platform on edge devices such as robots, drones, and medical equipment?
Speaker 1 described a vision of edge deployment but omitted details on latency, power, security, and model size constraints, indicating an area for additional study.
Speaker: Speaker 1
What partnership models and technical integration plans exist with companies like AMD, Lockheed Martin, and Supermicro for physical AI solutions?
Speaker 1 announced collaborations but did not elaborate on joint‑development roadmaps, standards alignment, or co‑marketing strategies, warranting further clarification.
Speaker: Speaker 1
How will confidential computing be implemented at the hardware level in the Monaca 2‑nm ARM‑based chips, and what verification methods will be used?
Speaker 1 claimed hardware‑level confidential computing but gave no specifics on architecture, attestation, or certification, highlighting a need for detailed research.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.