MedTech and AI Innovations in Public Health Systems

Session at a glance: Summary, key points, and speakers overview

Summary

The panel explored how MedTech and artificial intelligence (AI) can transform India’s public-health system, focusing on cost-effectiveness, care coordination and operational efficiency [3-5]. The Government of India has launched the SAHI strategy (Strategy for Artificial Intelligence in Public Health) to embed AI across the system [11-14]. AI is already being used to mitigate specialist shortages through automated imaging analysis and tele-consultations, thereby lowering out-of-pocket expenses for patients [15-20]. Digital initiatives such as the eSanjeevani platform and broader digitisation of records aim to streamline clinical workflows and supply-chain management [18-24][25-26].

Panelists stressed that successful innovation requires institutionalisation: defining problem statements, building a use-case library and establishing policy guardrails [31-56]. TataMD described AI tools that deliver longitudinal patient records, clinical decision support, task prioritisation for ASHA workers and analytics for health-department planning [62-71][76-84][95-102]. Sanjay Seth illustrated how AI can predict failures in programmes like tobacco control, providing real-time feedback to improve outcomes [108-120][182-191]. Private-sector partners such as the AIM Foundation are creating validation platforms to pilot solutions before handing them to government systems [150-155].

The main implementation barriers identified were poor workflow integration, resistance to change and insufficient incentives for frontline staff [239-259]. At the national level, the Ayushman Bharat Digital Mission and ABHA IDs are intended to supply representative, high-quality data for AI-driven disease surveillance, imaging triage and administrative automation [207-224][225-229]. Speakers warned that entrenched work culture and data-ownership issues could undermine AI adoption unless incentive structures and data cooperatives are introduced [286-297]. 
Mental-health screening was highlighted as an emerging AI application, with the state piloting the QPR methodology for student suicide-prevention [327-330]. Consensus emerged that coordinated public-private effort, a focus on preventive care and addressing workflow and cultural challenges are essential for AI to deliver measurable impact in India’s public-health system [300-314]. Finally, ICMR is developing a sandbox to test and scale startup AI solutions nationally [341-345].


Key points


Major discussion points


National AI strategy for public health (SAHI) and its expected impact on cost-effectiveness and access – The government has launched the “Strategy for Artificial Intelligence in Public Health” (SAHI) to address specialist shortages, enable AI-driven X-ray and diabetic-retinopathy screening, tele-consultations, and to lower out-of-pocket expenses while digitising records and supply-chain workflows for universal health coverage [13-14][15-18][19-20][21-27].


Institutionalising AI: problem-driven solutions, evidence generation, and policy frameworks – Panelists stressed that AI projects must start from clearly defined health problems, be rigorously tested on the ground, and be catalogued in a use-case library; robust data-sharing policies and guardrails are essential for scaling [31-38][42-48][55-56].


AI-enabled clinical and operational support across the care continuum – AI can provide longitudinal patient data to medical officers, real-time clinical decision prompts, evidence-based treatment guidelines, and task-prioritisation for frontline workers (e.g., ASHA); it also supplies analytics for health-department risk-prediction and wellness scoring [62-70][71-80][95-100].


AI for preventive health programmes and early failure detection – In tobacco control, adolescent health, and other NCD initiatives, AI is used to flag districts or schools where implementation is lagging, to analyse activity images, and to deliver personalised, language-specific nudges, thereby improving programme effectiveness and ROI [108-118][161-176][182-190].


Public-private partnership, scaling barriers, and the need for change-management – Successful adoption hinges on integrating AI into existing workflows, managing resistance through incentives and early adopters, ensuring reliable connectivity, and creating national sandboxes or data-cooperatives to replicate validated solutions across states [239-247][250-258][286-296][341-345].


Overall purpose / goal


The discussion was convened to examine how MedTech and AI innovations can be systematically introduced into India’s public-health system to improve cost-effectiveness, care coordination, and operational efficiency, and to identify pathways for institutionalising, scaling, and sustaining these technologies at both state and national levels.


Overall tone


The conversation began with an optimistic, forward-looking tone, highlighting government initiatives and technological possibilities. As the panel progressed, the tone shifted to a more pragmatic and cautionary one, acknowledging implementation challenges, cultural resistance, and the need for robust policy and change-management. The session closed on a collaborative, solution-oriented note, emphasizing partnership and collective action.


Speakers

Shri Saurabh Gaur


Role/Title: Moderator, senior government official (likely from Andhra Pradesh Health Department)


Area of Expertise: Public health policy, AI integration in health systems


Shri Saurabh Jain


Role/Title: Government of India official, Ministry of Health & Family Welfare (counterpart in the government of India)


Area of Expertise: National health strategy, AI policy, universal health coverage


Citation: [S5]


Mr. Shiv Kumar


Role/Title: Member, Committee on Advanced Technology (CAT) – Andhra Pradesh Government


Area of Expertise: Innovation ecosystem, AI institutionalization, public-sector AI policy


Citation: [S3]


Ms. Saraswathi Padmanabhan


Role/Title: Representative, TataMD (private-sector health-tech partner)


Area of Expertise: AI-enabled clinical decision support, care coordination, public-private health partnerships


Citation: [S1]


Mr. Sanjay Seth


Role/Title: Representative, social-impact organization focused on tobacco control and preventive health programs


Area of Expertise: Public-health program implementation, AI for monitoring & predictive analytics in preventive care


Citation: [S2]


Dr. Rakesh Kalapala


Role/Title: Gastroenterologist, AIG Hospital; involved with AIM Foundation


Area of Expertise: Clinical AI applications, diagnostic AI, workflow automation in private and public hospitals


Citation: [S4]


Audience


Role/Title: Various participants (including mental-health professionals, researchers, and practitioners)


Area of Expertise: Diverse – mental health AI, program evaluation, implementation challenges


Citation: [S8]


Additional speakers:


Dr. Akesh – Mentioned briefly in the closing round; no further details on role or expertise provided.


Full session report: Comprehensive analysis and detailed insights

The session opened with Shri Saurabh Gaur welcoming the audience and outlining the three “anchors” for public-health transformation that the panel would explore: the cost of delivery for governments and individuals, care-coordination through longitudinal health records, and operational efficiency that reduces waiting times while preserving quality [3-8].


The Government of India then introduced its national AI roadmap, the Strategy for Artificial Intelligence in Public Health (SAHI) [13-14]. According to Shri Saurabh Jain, SAHI already underpins a range of activities that address the chronic shortage of specialists in rural areas by deploying AI-driven X-ray and diabetic-retinopathy screening, and by linking primary-care doctors with tertiary experts through the eSanjeevani tele-consultation platform [15-19]. AI is also being explored for supply-chain management to ensure medicines and consumables reach remote facilities [21-26]. These interventions are intended to lower out-of-pocket expenditures and move the country toward universal health coverage [20-27].


A central theme that emerged was the need to institutionalise AI rather than treat it as a peripheral gadget. Mr Shiv Kumar argued that successful projects must start from a clearly defined public-health problem, be rigorously tested on the ground, and be catalogued in a use-case library that records evidence of cost-savings and health-outcome improvements [31-48]. He also called for explicit policy guardrails governing data sharing and monetisation [55-56].


Building on this, Ms Saraswathi Padmanabhan of Tata MD described a suite of AI tools aimed at the entire care continuum. For medical officers, AI aggregates longitudinal vitals and laboratory trends so that a single visit reflects a patient’s disease trajectory rather than an isolated episode [62-70]. Real-time clinical prompts remind clinicians of missed investigations (e.g., foot examinations for diabetics) and surface evidence-based treatment guidelines, while the final decision remains with the doctor [71-84]. On the operational side, AI-enabled bots help ASHA workers prioritise high-risk pregnant women, and a composite wellness score combines patient and environmental data to flag district-level risks for the health department [95-102]. Ms Padmanabhan noted that, although Andhra Pradesh has good connectivity, many states still face power-and-connectivity constraints that hinder AI adoption, and she stressed that incentives for frontline staff are essential to drive uptake [248-259].
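The composite wellness score is described only at a high level in the session. As an illustration of the kind of analytic involved, the sketch below combines a few normalised patient and environmental indicators into a per-district score; the indicator names, weights, threshold and district figures are all hypothetical, not TataMD's actual model:

```python
# Hypothetical indicator weights; a real model would calibrate these
# against observed health outcomes rather than fixing them by hand.
WEIGHTS = {
    "ncd_control": 0.4,   # share of NCD patients with values in range
    "anc_coverage": 0.3,  # share of pregnancies with complete ANC visits
    "air_quality": 0.2,   # normalised environmental index (1.0 = best)
    "water_safety": 0.1,  # share of tested water sources found safe
}

def wellness_score(indicators: dict) -> float:
    """Weighted average of indicators in [0, 1], scaled to 0-100."""
    return round(100 * sum(WEIGHTS[k] * indicators.get(k, 0.0) for k in WEIGHTS), 1)

def flag_districts(district_indicators: dict, threshold: float = 60.0) -> list:
    """Return districts whose composite score falls below the threshold."""
    return sorted(d for d, ind in district_indicators.items()
                  if wellness_score(ind) < threshold)

# Invented example figures for two districts.
districts = {
    "Guntur":  {"ncd_control": 0.8, "anc_coverage": 0.9,
                "air_quality": 0.7, "water_safety": 0.9},
    "Kurnool": {"ncd_control": 0.4, "anc_coverage": 0.5,
                "air_quality": 0.6, "water_safety": 0.7},
}
print(flag_districts(districts))  # → ['Kurnool']
```

A real deployment would learn the weights from outcome data and draw on far richer longitudinal inputs; the point here is only the shape of the computation: normalise, weight, aggregate, flag.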


The panel then turned to preventive health programmes, where AI can generate the highest return on investment. Mr Sanjay Seth explained that conventional dashboards only highlight what has not been done; AI must instead predict where implementation will fail, identify responsible actors, and trigger corrective actions [108-118][120-121]. In the state-wide tobacco-control initiative, AI analyses image uploads from 20,000 schools, flags districts with low compliance, and delivers personalised, language-specific nudges to teachers, achieving 98% accuracy in activity verification and markedly improving programme effectiveness [182-190][191]. He reiterated that such predictive, prescriptive AI should be embedded within the delivery system rather than sit as an overlay [120-121].
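The session reports outcomes (districts flagged, nudges sent) rather than mechanics. A toy version of the flagging step, with invented district names and compliance figures, might look like this:

```python
# Invented figures: verified activity uploads and total schools per district.
uploads_verified = {"East Godavari": 480, "Anantapur": 120, "Chittoor": 300}
schools = {"East Godavari": 500, "Anantapur": 400, "Chittoor": 350}

def lagging_districts(threshold: float = 0.6) -> list:
    """Return (district, compliance) pairs below the threshold, worst first."""
    rates = {d: uploads_verified[d] / schools[d] for d in schools}
    return sorted(((d, round(r, 2)) for d, r in rates.items() if r < threshold),
                  key=lambda pair: pair[1])

for district, rate in lagging_districts():
    # In the programme described, a language-specific nudge would go out
    # to teachers in these districts; here we only print the flag.
    print(f"{district}: compliance {rate:.0%} - send nudge")
```

The predictive element described by Mr Seth would replace the fixed threshold with a model of where compliance is likely to drop next; this sketch shows only the simpler detect-and-nudge loop.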


Private-sector innovators highlighted complementary contributions. Dr Rakesh Kalapala recounted a need-based AI diagnostic for fatty-liver detection that costs ₹500 per test, versus a ₹1.2 crore scanner charging ₹5,000 per scan, illustrating how software-only solutions can dramatically cut costs while expanding access [140-144]. He also described an AI-driven discharge-summary generator that reduces turnaround from 8-10 hours to half an hour, thereby improving bed management and patient flow [146-148]. The AIM Foundation, together with IIIT Hyderabad, the Indian School of Business (ISB) and IIT Delhi, has created a neutral validation platform that pilots solutions, such as the "Journey Mitra" AI-supported scheduling and priority-setting app for ASHA workers, before handing them over to state health systems, exemplifying a public-private integration model [150-155].
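Taken at face value, the quoted figures admit a quick break-even check. This arithmetic ignores the scanner's own operating costs and the AI tool's development cost, neither of which was quantified in the session:

```python
# Figures quoted in the session, in rupees.
ai_cost_per_test = 500        # software-only AI fatty-liver diagnostic
scan_fee = 5_000              # fee per scan on the dedicated machine
scanner_capital = 12_000_000  # 1.2 crore purchase price

# Each AI test saves the difference in per-test price.
saving_per_test = scan_fee - ai_cost_per_test

# Tests needed for cumulative savings to match the scanner's capital cost.
breakeven_tests = scanner_capital / saving_per_test
print(f"{breakeven_tests:,.0f} tests")  # ≈ 2,667 tests
```

On these numbers, under 3,000 AI tests offset the capital cost of a single scanner, which is why the panel framed software-only diagnostics as the cost lever for high-volume settings.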


An audience member later asked for guidance on AI tools that could analyse audio or video for suicidal ideation and depression; Dr Rakesh Kalapala acknowledged that robust EMR data and privacy-preserving AI frameworks are still needed to develop such mental-health applications [324-331][327-330].


The discussion also surfaced several implementation barriers. Ms Padmanabhan warned that AI will not be adopted unless it is seamlessly woven into existing workflows, delivers clear value to frontline staff, and is supported by change-management and incentive structures [248-259]. Shri Gaur added that health workers in Andhra Pradesh are already juggling ≈ 25 programmes and face digital-literacy gaps, which impede the uptake of new technologies [232-236]. Mr Kumar further stressed that entrenched work-culture attitudes and the absence of data-ownership incentives are the single biggest obstacles, proposing citizen-run data cooperatives that reward contributors with reverse tokens [286-296].


Data quality and governance were also highlighted as critical. Shri Jain stressed that AI outputs are only as good as the data on which they are trained; therefore, the Ayushman Bharat Digital Mission’s ABHA health ID must provide representative, high-quality data from every region to enable reliable disease surveillance, imaging triage, and automatic population of multiple administrative portals [207-215]. Mr Kumar echoed this, noting that many states still lack the basic data infrastructure needed for AI models to function effectively [197-200].


The panel highlighted three differing emphases. (a) Mr Kumar placed work-culture transformation and citizen-centric data cooperatives at the centre of the challenge [286-296]; (b) Shri Gaur and Shri Jain focused on digital-literacy, programme overload, and the need for robust national data platforms as primary hurdles [232-236][207-215]; and (c) Mr Kumar also advocated a bottom-up, problem-first approach with a curated use-case library, whereas Shri Jain described SAHI as a top-down national strategy already driving large-scale deployments [31-48][13-14].


In summary, the panel agreed that AI can substantially improve cost-effectiveness, care coordination, and preventive health outcomes in India’s public-health system, provided that (i) high-quality, representative data are secured; (ii) AI solutions are problem-driven, evidence-based, and integrated into everyday workflows; (iii) robust policy guardrails and a national use-case library or sandbox (as being built by ICMR) are established; and (iv) public-private partnerships are leveraged to validate, hand over, and scale innovations. Remaining challenges (work-culture inertia, data-ownership models, digital-literacy gaps, and the need for concrete incentive mechanisms) must be addressed through coordinated policy action, stakeholder engagement, and iterative pilots before AI can fulfil its promise across India’s diverse health landscape. The concluding remarks underscored that a holistic, collaborative approach, combining clinical expertise, engineering capacity, and supportive policy, represents the “need of the hour” for translating AI research into tangible health benefits [267-271].


Session transcript: Complete transcript of the session
Shri Saurabh Gaur

Thank you so much. Thank you, ma’am. Welcome to all the ladies and gentlemen who have found time to be present here today as we explore the topic of MedTech and AI innovations in public health systems. There’s a lot of AI being bandied about here. What we aim to explore during the session is AI in public health care, with the three pillars that have been traditionally associated with the public good, public health being a public good. The first is the cost of delivery: as public health care scales, the cost of delivery from the government side, and also the cost of public health care for the individual, and what AI can bring in terms of having more cost effectiveness.

The second one will be on care coordination: how do we ensure that the longitudinal health record gets built and the clinicians are better equipped to utilize emerging technologies and AI for better care. And the third one is on efficiency, operational efficiency, in terms of how do we ensure that the patient standing in line is treated in the best possible manner and in the lowest possible time, with quality being associated. So with these three anchors to public health care, I welcome the panel, and let me start with you, Saurabh, my counterpart in the government of India. When we talk about population-scale deployment of AI systems in health care, how do we ensure that the population is

Shri Saurabh Jain

Thank you, Saurabh. So I would like to inform all of you. Most of you must have also learned about the recent healthcare strategy that has been launched by the government of India. It is called SAHI, the Strategy for Artificial Intelligence in Public Health. So as part of that, lots of activities in the field of AI are already happening. So if we see in terms of, we know there is a lot of lack of specialists, especially in the rural areas. So through these AI techniques, the kind of services that are being provided, whether it is through scanning X-ray images or through diabetic retinopathy, also screening is possible through AI tools. So through that, in the resource-constraint settings, we are able to provide good quality healthcare services to the citizens.

We have the eSanjeevani platform, the teleconsultation, where… a person who is there in the PHC, a doctor who is there in the PHC, they can take expert opinions from the tertiary care hospitals. Also, I see artificial intelligence in terms of overall reduction of out-of-pocket expenditure, because that is also one of the main important goals of ensuring universal health coverage. So by building more and more such systems, by bringing up trust and safety considerations in the public health system, we are actually also creating public trust in the public health system, so that people actually come towards the public health system and they rely less on the private health care, thereby reducing the out-of-pocket expenditure.

Similarly, also lots of digitization is happening, lots of records. We have the digital data. And through this digital data also, we can improve upon the overall workflows that are there in the hospitals. We have the resource-constraint settings, so in terms of supply chain management also. So lots of innovations the health ministry is looking for, in terms of ensuring that we will be able to provide universal health coverage: even a person who is in the remotest of areas should get the best quality coverage, and also at the least cost. So that is how the strategy of the government of India is, as far as the adoption of artificial intelligence in healthcare. Thank you.

Shri Saurabh Gaur

So you have talked about innovation emerging as a centerpiece in public health, and with the strategy of AI adoption in healthcare, the SAHI strategy, we may look at a UHI movement just like there was a UPI movement, where we have the digital public infrastructure in health being set up, all the interface layers getting established. But that also means bringing a lot of the innovation ecosystem to healthcare. So with the work that you have done, Shivji, over a lot of time, how do you look at the innovation-to-institutionalization framework? In the sense that, while every other day there is a health tech startup coming in, how does it get integrated in a structured manner with the public health system?

Mr. Shiv Kumar

Thank you and good afternoon everyone. One of the important things which we need to recognize at least in AI is currently solutions are looking for problems not the other way around. Therefore it is important to marry what is the problem which is important for the state and then bring the solution together. So the first step of institutionalization is about how do you apply technologies and who sets the agenda and who is setting the priority. Like the way Andhra government has set up the Center for Applied Technology which has put out a call to say these are the problems we solve for our frontline workers. I think that’s the first step of institutionalization. Second is about the whole although I said solution looking for problem and problem looking for solution these are not either or.

I think both are important. We never knew we all needed a smartphone. But at the same time smartphone has become a problem now. Right in some sense. So I think continuous bridging and very critical element is to taking it to the ground and actually sharing the real evidence because every innovator will want to say their technology is fantastic. Have you come across any innovator who says their technology is not good? All of them will say it’s fantastic, it works the best, it is the best. That’s okay. That’s what an innovator is supposed to do. Whereas I think the state has the responsibility to test it on the ground, to look at the feasibility, to see does it actually change health outcomes.

Does it actually save cost, as sir was mentioning. And the third element of institutionalization, sir, is also the use case library that we need to build. I think there is a lot of discussion around this can do that, that can do this. Where’s the evidence? Where’s the use case library? Where has it worked? Has it worked with tribal communities? Does it work for the last poor woman, tribal leader or ordinary person? Right? That becomes very, very critical. And the last part is around the AI policy, policy and processes, where the guardrails are built so that the state is also able to have a very clear policy towards how do we share the data, how do we ensure that all the data that is shared by the community is actually monetized for them.

Shri Saurabh Gaur

Thank you. You started with a great point, that most of the time innovators come with solutions and they are looking for problem statements. But in Andhra Pradesh, we have articulated the problem statement clearly in terms of how do you, at population scale, drive an AI-enabled public healthcare system. And that’s where one of our partners is TataMD, which is represented by Saraswathi ma’am here. I believe you have set up a fantastic stall also on the digital system that you have set up for healthcare delivery. So would you want to talk about your experience, and how do you bring a private sector ecosystem into public health and enable a public healthcare system?

Ms. Saraswathi Padmanabhan

Thank you, sir. As sir mentioned, I represent TataMD. Please visit our stall in hall number 5, where we have showcased what we are doing, but I will just explain it in simple terms. For the public health system, we are looking at AI assisting the entire public health system, so I will just divide it into three or four aspects. One is for the medical officer: how can the medical officer gain from the assistance of AI? So normally, when you go to a PHC, the doctor would ask for the vitals to be taken, the basics; he would ask what is the complaint for which the citizen has come. But it tends to be episodic; it’s not longitudinal. So we are looking at how, with the help of AI, we can share with the medical officer, in a structured manner, the entire longitudinal data of the citizen, so that the doctor knows this is not just episodic care we are talking about; we are talking about a continuum. So how can we ensure that we understand the citizens?

So if a NCD patient comes, shows, mentions that he has HbA1c of 8, but has it been the same? Has it come down or is it increasing? So that trend will help the doctor to decide the medication. Or normally they would also ask what is the medicine that you are consuming and they would say either continue or stop. But with the longitudinal data, they will be able to say, okay, is this medicine actually working? Is it not working? How do I ensure that the patient is taken care of better? So I think it helps the data to be structured in a manner in which the doctor can use. Secondly, there are sometimes because of the busyness all of, I mean many of, I don’t know how many of you have actually visited a PHC and seen the workload that a medical officer faces.

Many times they are just rushing through the citizens, right? So they do not have the time. Sometimes they do not have the time. Sometimes they may miss an investigation which is required for a particular. So AI can do that prompt, saying that, okay, this is the history, this is the data; maybe we should get a foot examination done for this diabetic patient, he has not done it for the last few days. So basically we are looking at AI as assisting the medical officer with the clinical support system, so that nothing is lost, there is no oversight. Plus there is an evidence-based treatment guideline which can be shared with the doctor. And finally, the decision maker is the doctor.

We are not here to say that the AI will decide. The decision maker is the doctor; the AI is to assist. So this would be more on the clinical side. Similarly, on the operational side, right, all of us know the time that is spent in detailing out what the conversation with the patient is. So we are looking at how that can be made in a meaningful manner in a rural public health system. We all know that in a closed room, probably the listening can be better, the ambient listening can be easier managed. But in a rural setting, we are looking at different dialects,

different contexts, how can we make that better? So that’s the second part for the clinician. Looking at the frontline workers, I think some of the AI bots, we are looking at how we can help them with their tasks. So if an ASHA is looking at 50 pregnant mothers, how can she prioritize who is the one she needs to look at, who is the high-risk mother whom she needs to prioritize? Because all of them are loaded with work, but AI can help them in scheduling their tasks, do their tasks in a better manner. And lastly, if I had to look at it from a public health system, the government, the health department: we are looking at AI, how can it provide the analysis in a manner that makes better sense for the government to see the trends?

How can the data show to them that this is the… key problem in this particular area. We’ve been talking with the Andhra Pradesh government about creating wellness scores, composite wellness scores, which look at patients, look at the environment, and create a score which can tell them how to look at where the problems are and provide solutions. So basically this is going to strengthen the health department in identifying the risks, predicting the risks and looking at ways to do proactive preventive care. So this is the way we are looking at ensuring that AI is providing support to all the stakeholders by using the data that is being provided. And there’s a lot of deeper work: while what I say is on the surface, the deeper work is how do you understand the data, how do you capture data across different geographies.

So that it’s more meaningful and it’s

Shri Saurabh Gaur

I’ll come back to you in terms of the challenges you face while working with government, especially looking at the fact that government is probably implementing 25 odd public health care programs and the flavor is from preventive health care to maternal child health to genetic care and so on and so forth. But bringing a different stakeholder into the conversation, Mr. Seth, you represent a social impact organization and have been working on tobacco control. So where do you think is the maximum value of AI in health care driven from your perspective while you’ve heard the other people talk about digital platforms and enablements and innovation? What in your mind will be the biggest AI value generation for public health?

Mr. Sanjay Seth

Thank you. Thank you, Mr. Gaur. You know, large public health programs: TB programs, prevention, tobacco control, adolescent health. The real question is where AI can actually help them day to day, rather than in theory. And if you see most of these programs across states, the failure is not because of the design, or because they’re not reviewed, but because of variable implementation across areas. Now, the data exists, reviews are also done, dashboards exist, but you find, we very often find out what’s going wrong after the event, when the failure has already occurred. I have heard so many senior-level IAS officers lament: the dashboards only tell me what I have not done. They don’t tell me what I am supposed to do.

Now, that is where I think AI can come in and add a huge amount of value. By, you know, helping and telling you where the failure is likely to occur. Identify where it is happening. Who has to take action on it? And then that action, I mean, you know, the person can be informed. And then, of course, you know, where the action is, that’s so important to pushing it. But for this, AI has to exist inside the delivery system, not on top of it. So, in my mind, that is where AI fits into the delivery system.

Shri Saurabh Gaur

Typically, a PHC, a Primary Health Care Centre… that is supposed to see around 40 patients, a doctor ends up seeing 60 patients; at least that is the statistic for Andhra Pradesh. You have all been to institutions like AIIMS, where, heavily, what do I say, the fact that there is no care coordination, or the absence of care coordination, so everybody seems to be ending up in a tertiary healthcare setup. And here, representing a tertiary care unit, Dr. Rakesh, coming from AIG Hospital: how do you augment, how do you do clinical augmentation for the doctors, and in private healthcare, what are the lessons that have been learned and can be adopted in public healthcare also?

Dr. Rakesh Kalapala

Thanks, Saurabh. In fact, I would start by saying, AI is going to reduce the cost both in public and private health, not by replacing the doctors but through early diagnosis and intelligent triage. See, for example, as my co-speaker has said, this is a 1.4 billion country. And as of now, I think it is going to be around $20 billion. I mean, it’s growing day by day. So any hospital, it can be a primary, secondary, or tertiary care hospital, has got a huge volume of input. And it’s very difficult for anybody; even a robot also cannot match the human scale.

So these need-based innovations are something which we really have to look upon. See, suppose I, in my hospital, have got something where I have difficulty in doing it, and in other hospitals, something else. So need-based innovation we have to catch and then try to solve it. For example, in the private setup, as I told you, there are some use cases with which we have had personal experience over the last three, four years. I’ll tell you a little bit of the economics on this. There is an algorithm which we developed with a pure AI model, costing 500 rupees to pick up a fatty liver, versus getting a machine which is 1.2 crore and charging 5,000 rupees per piece. So this is something which is a need-based innovation for me as a gastroenterologist.

I’m a hardcore clinician, and I look upon any metabolic disorder, which is the crux of the entire metabolism. And if you have tools like this, that will give you a lot of value in terms of fast diagnosis, as well as the economics getting scaled down at your level. Then there are other use cases where you have the EMR and ESR. In fact, Saurabhji, there’s a lot of chaos which happens when you have the admissions in hospital. So in that, we have a use case where the patients, they stand there, and the discharge summaries will take 8 to 10 hours for them to come out of the hospital. So we have an AI-enabled system where the discharge summary will happen in half an hour max, once I say my patient has been discharged.

Vis-à-vis, when you have electronic medical records and you want to have the patient bed management made: so for one patient there’s a huge line where you start. It’s all a personal experience; in this hall, everybody goes and you’ll be standing in the queue. And it’s not that you have to blame the hospital authorities, but again, that is something where, in those areas, you need these AI-enabled systems. So it can be digital health or a clinical-related AI system. So that’s where we have to concentrate.

Shri Saurabh Gaur

So while you do that, my question is again to you; we will go around the panel in reverse order now. Looking at the efficiencies coming out of the private sector, how do you think you can collaborate with state governments, or governments at large? Given that you will be early deployers of MedTech solutions, and that you will have built into your cost economics the incentive to use them faster, how do you think you can accelerate their adoption in the government ecosystem as well?

Dr. Rakesh Kalapala

It's a very valid point. As a private-sector person, we have earlier adaptation and adoption compared to the public sector. On that note, I would bring up the AIM Foundation, which is working closely with the Government of Andhra Pradesh and other governments. What we did is form a platform with IIIT Hyderabad, the Indian School of Business, and FITT at IIT Delhi. It is a neutral platform where anybody can come and pitch their idea; we handhold them, nurture them, and get it validated at our clinical level. Once we have the products, for example Journey Mitra, which was launched in the Government of Andhra Pradesh, as my co-speaker said, that is something the ASHA workers can pick up: an AI-enabled system for identifying high-risk pregnant mothers and improving nutrition, to decrease IMR and MMR. Tools like this we can build at our level, and once we validate them and feel confident, we can give them to the public systems. So there should be public-private integration, which is the main strength for this country.

Only then will this scale fast, because time is running and nobody waits for us. We have to keep up and get the solutions, because we cannot adopt the Western world's solutions; ours is an entirely different system. We can never take a Western AI algorithm and simply try to adapt it. We have to have our own algorithms, and we can move fast because of the population we have, the volumes we have, and of course the zeal we have.

Shri Saurabh Gaur

That's great. In fact, we are working with the AIM Foundation on setting up a biodesign lab in Andhra Pradesh, with all the other institutes that you've talked about. And I see a lot of facilitation of deployment of MedTech solutions happening through the CAT, the Committee on Advanced Technology, and the biodesign lab. But while we talk about all these MedTech solutions, the core, as we believe as a state, is that preventive health care has to be strengthened. With prevention as an entry point, where do you see AI playing a role in strengthening preventive health care, Sanjay ji?

Mr. Sanjay Seth

So I think, as we all agree, prevention is better than cure, and preventive programs have the highest ROI. Unfortunately, preventive programs are not politically supported. And that is where AI comes in. If you take adolescent health, student health, nutrition, and if you take non-communicable diseases, we are talking about behavior change across entire populations, and that has become the most important thing; today the maximum number of deaths are because of NCDs. That's where preventive health comes in. Now, why AI really fits here: these programs operate at scale, and they require continuous, repetitive activities to be done.

They also show very predictable gaps during implementation: where the drop-offs are taking place, where the failures are taking place in the program delivery. The number of variables is also very large, because as soon as you talk of behavior change, you are dealing with a huge range of different cultures, and different cultures respond differently. If you take that mass of data, this is where AI can really support the programs and bring down not just the cost but also improve the effectiveness. But as I said earlier, AI needs to be within the delivery system, not as a layer on top.

And if we then focus on how these schemes or programs result in outcomes, this is where I feel AI can give very fast feedback. All these different entities, facilities, and units are feeding in a huge amount of program data. AI can analyze where the likely failures are, escalate them to the appropriate level, bring them to the attention of the senior people, and that will result in far, far better delivery outcomes.

Shri Saurabh Gaur

So can you be more specific, for example about the tobacco control program that you run, Sanjay ji? If you are doing it across, say, 20,000 schools, and I do not know exactly how many schools you are working with, are you able to make those kinds of predictions, in terms of where the program is probably bound to fail or is heading toward a failure condition, and the actions that need to be taken?

Mr. Sanjay Seth

Oh yes. We are running a tobacco control program in Andhra, as you know, across more than 20,000 schools, and each school is supposed to do a standard set of nine activities. Very early on, we are able to see which schools are not doing certain activities. If we started analyzing manually across 20,000 schools, there is no way we could do it. AI helps us analyze the data and say: in this block, this district, this area, there is a failure taking place; these schools are not taking action. And then, after the analysis, the schools which are acting upload their activities when they do them.

Now we do image recognition and decide whether the activity was done correctly or not. With very high accuracy, around 98%, we are able to see whether the activity has been done correctly, and that enables us to give feedback immediately: as soon as someone enters it, shortly afterwards they get feedback saying, you haven't done this properly, please repeat it. Then, when it comes to informing people: in Andhra Pradesh, for instance, 40,000 teachers now get personalized messages from us, each in the language and tone they prefer, and that makes the motivation, the way they act, much faster than it used to be. So we are seeing orders-of-magnitude improvement in the effectiveness of the program. And not only in Andhra; in other states we are seeing the same thing happening across other programs we have been working on.
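The pipeline Mr. Seth describes, aggregate activity reports, flag areas where activities are not happening, and queue immediate feedback when an uploaded image fails the check, can be sketched as follows. All record fields, thresholds, and messages here are illustrative assumptions, not the programme's actual system:

```python
from collections import defaultdict

# Illustrative records: (block, school, activity_reported, image_check_score)
# image_check_score stands in for the image-recognition model's confidence
# that the activity was done correctly; None = nothing was uploaded.
records = [
    ("Block-A", "School-1", True,  0.99),
    ("Block-A", "School-2", True,  0.40),   # uploaded, but image check fails
    ("Block-B", "School-3", False, None),   # activity not done at all
    ("Block-B", "School-4", False, None),
]

IMAGE_OK_THRESHOLD = 0.9   # assumed cut-off for "done correctly"

done_by_block = defaultdict(lambda: [0, 0])   # block -> [done, total]
feedback_queue = []                            # schools asked to repeat

for block, school, done, score in records:
    done_by_block[block][1] += 1
    if done:
        done_by_block[block][0] += 1
        if score is not None and score < IMAGE_OK_THRESHOLD:
            feedback_queue.append(
                (school, "Activity not done correctly, please repeat"))

# Blocks with under 50% completion get escalated to supervisors
escalations = [b for b, (d, t) in done_by_block.items() if d / t < 0.5]

print(escalations)      # ['Block-B']
print(feedback_queue)   # [('School-2', 'Activity not done correctly, please repeat')]
```

The point of the sketch is the two distinct loops the speaker describes: area-level escalation for schools that never act, and per-upload feedback for schools that act but do the activity incorrectly.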

Shri Saurabh Gaur

This is very heartening to see. While, for example, you may be doing a tobacco control program with us in Andhra Pradesh, there is a cancer care program happening in Tamil Nadu that we got exposed to in one of the workshops, and other states are doing fantastic work; I saw Odisha's stall today, for example. So, Shiv Kumar ji, while we have all these islands of excellence and innovation, what is it that prevents them from scaling up, and what are the structural barriers? While we talk about ease of doing business, what is the ease of doing governance, in public governance and the public health care system, that will make them scale up?

Mr. Shiv Kumar

Unless the basic processes are in place and the data quality actually improves, AI models really can't work on top of it. There are exceptions, where you are doing surveys and various other data points exist, but most states don't have that. And therefore, unless the processes throw data out, I think we can all dream about AI, but really getting the kind of value we are talking about is going to be very, very difficult.

Shri Saurabh Gaur

Thank you. That's a great point. While we are at a certain maturity level in the state government of Andhra Pradesh, there are other states which do equally well and there are states which are probably lagging. With the national framework being put in place through the Ayushman Bharat Digital Mission, I bring it to Saurabh, my colleague. In the Government of India, how do you facilitate all state governments to at least come on par, and how do you see AI within the national health systems becoming gold standards, or at least standards, for all the other states to follow?

Shri Saurabh Jain

So, as we know, health is a state subject, so ultimately the Government of India works in collaboration with the state governments. And we understand that for the kind of AI systems, algorithms, and applications that have been developed, the quality of the output ultimately depends upon the data on which they are trained. That is why it is very, very important that the data should be representative. It should come from every region, because every region has a different disease profile and different demographic profiles. So it is very important that the data should be representative, and the data quality, as was mentioned, should be very good.

In fact, through the Ayushman Bharat Digital Mission, with more and more digitization, and with every person now being provided with an ABHA ID, an ID linked to the health record so that health records can move with the person, we can make a lot of use of AI with all the data being generated. We can use it for disease surveillance. We can use it for modeling various diseases. We can use it for imaging, including a lot of MRI reading, because, as I mentioned earlier, there is still a big issue with the availability of specialist doctors, especially in rural settings.

So if AI solutions are available, at least 90% of the basic imaging can be taken care of by AI, so that only the most suspect cases are referred to tertiary care hospitals and basic healthcare can be managed at the facility level. And as my colleagues have also mentioned, one of the issues in public health delivery is that the workforce is totally preoccupied with administrative work: lots of data entry, lots of portals into which they have to enter data. That takes a lot of time beyond what is expected of them, which is their clinical duties. With more AI applications and more systems getting digitized, we can have a system where the data fed into one portal is automatically populated across all the portals, and the administrative work of these frontline healthcare workers can be substantially reduced, so that they can focus more and more upon the actual clinical work.
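The "enter once, populate everywhere" idea can be sketched as a thin mapping layer: one canonical record, keyed by an ABHA-style ID, is translated into each portal's expected field names. The portal names and schemas below are invented for illustration and do not correspond to any actual government system:

```python
# One canonical entry made by the health worker (all values illustrative)
canonical = {
    "abha_id": "12-3456-7890-0001",
    "name": "Patient A",
    "visit_date": "2025-01-15",
    "diagnosis_code": "E11",   # hypothetical ICD-style code
}

# Each portal expects its own field names (hypothetical schemas)
PORTAL_FIELD_MAPS = {
    "ncd_portal":  {"abha_id": "beneficiary_id",
                    "visit_date": "screening_date",
                    "diagnosis_code": "icd_code"},
    "hmis_portal": {"abha_id": "patient_uid",
                    "name": "patient_name",
                    "visit_date": "encounter_date"},
}

def populate_all(record, field_maps):
    """Translate one canonical record into a payload per portal."""
    return {portal: {dest: record[src] for src, dest in mapping.items()}
            for portal, mapping in field_maps.items()}

payloads = populate_all(canonical, PORTAL_FIELD_MAPS)
print(payloads["ncd_portal"]["beneficiary_id"])   # 12-3456-7890-0001
```

The design point is that the health worker touches only the canonical record; every downstream portal receives a payload derived from it, which is the administrative-burden reduction the speaker describes.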

So AI is ultimately about improving efficiencies and improving workflows. We have supply chain management also; it is about optimizing the supply chain. And in this entire journey of AI adoption, we take the states as our partners, because only when the Government of India and the states work together can we have a robust AI system which actually delivers quality care to our people.

Shri Saurabh Gaur

We are here to work with the Government of India very closely and establish those models. But on the point you made: I actually cannot imagine working as an ANM myself, despite heading a state health department. The sheer fact is that a poor ANM, or an MPHA male, the multi-purpose health assistant, or the nurse in the field, has to work with 25 programs. And while this national architecture is coming up, there is so much of a digital literacy challenge, not just digital literacy but adoption and using all these apps. This is a real challenge we face at the state level also. With Tata, as we build the digital backbone through project Sanjeevani in a collaborative, public-private partnership approach, I would want you, Saraswathi, to play the devil's advocate and tell us the three key technology integration challenges that you see.

And please be critical of the system. Tell us what you would want to see, and what challenges you face day in, day out, when building this care-coordination-oriented digital backbone for public health in Andhra Pradesh.

Ms. Saraswathi Padmanabhan

Thank you, sir. A tough question to answer, especially in a public forum, but I will do my best. One of the things, as we have spoken about, is that in a PHC there is a lot to be done, and as you mentioned, sir, the staff have a lot of activities, a lot of programs, a lot of reporting that they are doing. Introducing AI, as Sanjay ji said, as something additional, or bringing in technology as something from outside, is definitely a challenge. Our aim, and what we have realized, is that if it is not integrated into their workflow, if it is not something they find value in, adoption is going to be a challenge. As Shiv Kumar ji mentioned, people are collecting data and just sending data, but is that data really helping them?

Is it helping the people who are collecting the data? Is it helping them do their work better? Are they able to benefit from the work that they are doing? If the answer is no, they definitely will not take it up. So one question is how we integrate it into the workflow so that they find value in what is being introduced. The moment we reach that sweet spot, I think they will start utilizing it. So one thing is how it's integrated into the workflow. Second, I think, is that it's less about technology management and more about change management. Whenever something new is introduced, there's a cycle of adoption, similar to what we all face whenever we are introduced to anything new.

There will be a set of people who are ready to adopt it and are forthcoming; then there will be a lot of people who resist. Slowly the pattern changes and people start seeing benefits. So what we are trying to get at is: who are the people who see value in it, the early adopters? How can we bring them in? Probably with them we train the model, because, as Shiv Kumar ji rightly said, if we do not train the model correctly, we are not going to get the benefits. So who are the people who can help train the model and who will not resist? Then you give the trained model to the people who are resisting, so that they can see the value. So I think it's a lot of change-management-related resistance which we need to address. Lastly, while Andhra is a very progressive state and we do not see this as a challenge here, generally connectivity and power availability tend to be among the other challenges for a system-wide change. As I said, in Andhra, thankfully, those are not issues we have seen. But finally, I think it's about the incentives, right?

What are the incentives for people to adopt? If there are incentives for them, both from the state side and from their personal side, adoption tends to be easy. So it's a lot of work that we need to do to make sure this is taken to scale, sir. Thank you.

Shri Saurabh Gaur

So, I think we have time for one more quick round, and I want to keep it short. Thinking aloud, and the question is to all of you on the panel: what is the one maximum-impact zone or maximum-impact innovation that, based on your engagement with public health or with healthcare, you feel should be happening? I start with Dr. Rakesh. What do you think will be the one most impactful thing that we can do?

Dr. Rakesh Kalapala

That's a very difficult question; there are many things to do. But what I would say, in a nutshell, given the current scenario: the clinical insights from the doctors, the engineering capability from the bioengineers or the AI engineers, and the policy support from people like you, together, are what will take AI-related or MedTech-related innovation from the lab to lives. It should be a collective, holistic approach where we join hands. That, I would say, is the need of the hour. Thank you.

Shri Saurabh Gaur

I'll make it simpler for you, Sanjay ji. In preventive healthcare, which is the one most impactful innovation that should happen?

Mr. Sanjay Seth

Since I'm working in that area, I guess that is where I will obviously point. But look at it: in Andhra Pradesh there are 48,000 deaths every year because of tobacco usage. And if you take adolescent health, the future of our youth depends on how well our adolescents grow. As I said, prevention gives the highest return on investment, and it is not glamorous. It is very dull; it requires an enormous amount of day-in, day-out discipline. But as a state, if you're looking at what can really give you the maximum benefit, I'd argue for preventive health. Thank you so much.

Shri Saurabh Gaur

To you, Saraswathi ji: in terms of engagement, and in the public-private partnership mode, which is the most impactful thing that can be done?

Ms. Saraswathi Padmanabhan

I would probably respond slightly differently. I think the focus would be bringing back trust in the public health system, and that hinges on the quality of care we are able to provide in primary care. That is what will ensure that the need for tertiary and secondary care, and the disease burden we are envisaging, can be managed: by strengthening primary care, and with it trust in public primary care. Thank you.

Shri Saurabh Gaur

And Shivji, with you heading our committee on advanced technologies and doing all the work with innovation ecosystem, which do you think is the most standout innovation that you have seen that can be impactful for public health?

Mr. Shiv Kumar

Sir, I'm going to be a little controversial on this. I think technology is just an enabler; our single biggest problem is going to be work culture. Work culture and incentives. Today every officer feels that they need to see a dashboard and tell their team what to do. If we really want AI to help everybody decide, I think the work culture around evidence, the work culture around data, is going to be the biggest issue. But I will answer your question. The biggest innovation should be that people's data is owned by data cooperatives. Nellore is a district in Andhra; the people of Nellore should own their data through a data cooperative, and we should have reverse tokens where people are paid for their data.

We are feeding the AI engines, and I think our people should gain from that. When we reverse that, sir, when we reverse the incentives and the work culture around the use of data, I think you will automatically find people coming and telling you: this is how I am using it. Thank you.

Shri Saurabh Gaur

That's very interesting. And what about you? What do you think can be the most impactful thing, at a national scale also, for health innovation?

Shri Saurabh Jain

I would also like to address the work culture issue that you have mentioned. In fact, if we can sensitize our doctors and health workers, make them confident that the outcomes from the AI systems are predictable and good, and show that by adopting AI their productivity improves, so that the same kind of work is done in fewer hours and they do better in terms of both their clinical and their productivity approach, then I think they will come along. Our health workers have adapted very swiftly to technology, and if we can show them the reliability, the certainty of the outcomes, and the overall improvement in their productivity, the workforce will definitely adapt to this technology.

And coming to your question: I think diagnostics will play a very, very important role in the adoption of AI. We are seeing it in tuberculosis and in diabetic retinopathy, where through the scanning of images the doctors can make a very evidence-based decision in much less time. In the same time in which they were seeing 10 patients, with the support of AI they can see 20 or 30 patients with much more accuracy. Given the shortage of doctors we have and the patient load we have, especially in tertiary care, I think diagnostics will play a huge role in the field of AI.

Thank you.

Shri Saurabh Gaur

I think that's almost all the time we have, but we still have time for one or two questions from the audience. The gentleman at the back raised his hand first, or at least I spotted him first. There's a mic behind you.

Audience

From a mental health perspective: that requires additional safety, security, and sensitivity, but I have not seen anyone touch on it, either yesterday or today. Mostly we talk about medical imaging, because radiology takes all the innovation. There are developments in mental health too, and I was hoping to get some insight, but so far I have not.

Shri Saurabh Gaur

Dr. Rakesh you want to take it?

Dr. Rakesh Kalapala

I have a point on that; it's a very nice question. On mental health, there are people in the Western world who have some apps and are doing this, but in India, unfortunately, there is no robust system to collect the data. If you are working in a private hospital and you have a robust EMR or EHR, then you can have a questionnaire on which you build these things up, but for that uniformity to come will take a little more evolution. There are people working on it in the Indian sector; it will probably take a little more time. In fact, you could be the one who starts that at your level.

Audience

No, we are working on it, actually; we are struggling. I am at AIIMS Bhopal, and what we have is audio recording. We don't have medical imaging; we have mental status examinations or video recording, and based on that voice recording, things like detection of suicidal ideation, detection of depression, anxiety, and so on. So I was asking whether some assistance or guidance can be given.

Shri Saurabh Gaur

So I will just respond to this. What we have done in Andhra Pradesh is work with psychiatrists on a methodology called QPR: question, persuade, refer. It is proven, in the sense that it is patented. We worked with them especially on, let's say, our students who go through high pressure, people in intermediate education, 11th and 12th, where there is pressure to perform in examinations and parents are pushing them. Out of those 10 lakh students appearing for examinations, our estimate is that around 15% need special focus. Taking all of them through this QPR methodology, and working with an organization called the Suicide Prevention Foundation of India, SPFI, we have been able to at least identify which students need specific attention, who have those kinds of ideations or vulnerabilities, and what kind of messaging needs to go to them. It is a challenge, and I am not saying there is a lot of AI in it yet, because there is also a lot of privacy concern associated with this. But the other point you made, about a scribe: since people are talking, you can actually get at the behavioral insight, understand what kind of ideation is happening, and get answers out of that. I think that is a great point, and we would love to work with any innovator who wants to do this as a sandbox with us.

Thank you. One more question. Yeah. No, no, we will go to you; you are an in-house person. Yeah, please go ahead.

Audience

First of all, thank you, Saurabh Gaur sir. On the MedTech challenge: I think this is the first time we have seen a state government opening up and saying, why don't you innovate and come with your solutions, including small startups like us, and we will give you a platform to pilot it and then help you scale up. My question is more to Saurabh Jain sir, because we need to replicate something like this at the central government level. As a startup, we definitely cannot go to all the states and keep doing pilots, while MedTech is a segment with almost zero VC or private investment.

We largely run on either government grants or our own saved money or loans. So can the central government create a platform from which Saurabh Gaur sir or other state governments can take validated solutions and scale them up, so that we don't just keep repeating the same thing?

Shri Saurabh Jain

I think yes. ICMR is developing this kind of sandbox, in which startups can come with their innovations and test them. So ICMR is actually developing this kind of mechanism to test the models. And ultimately it's about replication, as you have mentioned: once a solution is tested across various settings, then depending upon the outputs, it can definitely be scaled up.

Shri Saurabh Gaur

Thank you so much. The audience, you deserve a round of applause for being a very patient audience, and I thank all the panelists for their very valuable insights. Thank you, everyone. There is a memento to give also; we can quickly hand over the memento to Dursu. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (15)
Factual NotesClaims verified against the Diplo knowledge base (5)
Additional Contexthigh

“The Government of India introduced its national AI roadmap, the Strategy for Artificial Intelligence in Public Health (SAHI).”

The knowledge base notes that India released a white‑paper in December 2025 outlining a national AI strategy that treats AI compute, datasets and models as a Digital Public Good, indicating a broader national AI roadmap but not naming SAHI specifically.

Additional Contextmedium

“AI is being explored for supply‑chain management to ensure medicines and consumables reach remote facilities.”

AI’s role in optimizing supply‑chain logistics is discussed in the knowledge base, highlighting its use for trade and logistics efficiency, which adds context to the claim about medical supply chains.

Additional Contextmedium

“These interventions are intended to lower out‑of‑pocket expenditures and move the country toward universal health coverage.”

The knowledge base emphasizes that health systems must shift toward universal health coverage, providing supporting context for the report’s goal.

Additional Contextmedium

“Mr Shiv Kumar called for explicit policy guardrails governing data sharing and monetisation.”

A source in the knowledge base highlights the need for multi‑stakeholder approaches and concerns about data‑sharing governance, reinforcing the call for policy guardrails.

Additional Contextmedium

“Successful AI projects must start from a clearly defined public‑health problem, be rigorously tested on the ground, and be catalogued in a use‑case library that records evidence of cost‑savings and health‑outcome improvements.”

The knowledge base stresses a problem‑driven rather than technology‑driven approach to AI, aligning with the report’s emphasis on starting from defined health problems.

External Sources (88)
S1
MedTech and AI Innovations in Public Health Systems — – Dr. Rakesh Kalapala- Ms. Saraswathi Padmanabhan- Shri Saurabh Jain – Dr. Rakesh Kalapala- Shri Saurabh Jain- Ms. Sara…
S2
MedTech and AI Innovations in Public Health Systems — -Mr. Sanjay Seth- Social impact organization representative, works on tobacco control and preventive healthcare programs
S3
MedTech and AI Innovations in Public Health Systems — – Dr. Rakesh Kalapala- Mr. Shiv Kumar – Shri Saurabh Jain- Mr. Shiv Kumar
S4
MedTech and AI Innovations in Public Health Systems — -Dr. Rakesh Kalapala- Gastroenterologist from AIG Hospital, represents tertiary care and private healthcare sector, invo…
S5
MedTech and AI Innovations in Public Health Systems — – Dr. Rakesh Kalapala- Ms. Saraswathi Padmanabhan- Shri Saurabh Jain – Dr. Rakesh Kalapala- Shri Saurabh Jain- Ms. Sara…
S6
MedTech and AI Innovations in Public Health Systems — – Audience- Shri Saurabh Gaur
S7
The Foundation of AI Democratizing Compute Data Infrastructure — -Saurabh Garg: Secretary in the Ministry of Statistics and Program Implementation in the Government of India
S8
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S10
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S11
https://dig.watch/event/india-ai-impact-summit-2026/medtech-and-ai-innovations-in-public-health-systems — And as far as coming to your question, I think diagnostics will play a very, very important role in terms of the adoptio…
S12
TABLE OF CONTENTS — She presented a number of challenges to the delegates: do everything to get governments to introduce reforms that moved …
S13
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — But ultimately, from a mobile operator point of view, I would say there’s four pillars in this combating scam thing. The…
S14
Building the Next Wave of AI_ Responsible Frameworks & Standards — And this is, you can see up here on the screen, the QR code, and you can scan the QR code and then you’ll get access to …
S15
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — DataSphere Initiative methodology focusing on responsible design, effective communication and engagement, and ensuring p…
S16
https://dig.watch/event/india-ai-impact-summit-2026/scaling-ai-for-billions_-building-digital-public-infrastructure — and also the sectoral regulators also come up with sandboxing regulations in the sense that if you want to try out somet…
S17
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Capacity building in digital health was identified as a significant ongoing challenge in the healthcare sector. The need…
S18
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — Addressing systemic issues is crucial for improving health in Sub-Saharan Africa. Currently, Sub-Saharan Africa has 10% …
S19
Digital Health Strategy — Despite these achievements, several challenges affected successful implementation of the Strategy. These includes: …
S20
Multistakeholder Partnerships for Thriving AI Ecosystems — That the right to privacy of their data is properly maintained. So I think the policy makers, the people who are there, …
S21
Responsible AI in India Leadership Ethics & Global Impact — “One size doesn’t fit all”[111]. “See, it is a very diverse element and there is a different kind of templates which we …
S22
WS #214 Youth-Led Digital Futures: Integrating Perspectives and Governance — – Creating regional data cooperatives that balance local needs with broader policy frameworks Dana Cramer introduced th…
S23
(Jail) time ahead for the cryptocurrency industry  — The cryptocurrency and digital asset industry has once again been the focus of the worldwide media. This time, it is not…
S24
E-Commerce Legal and Regulatory Framework for Data Governance in Developing Countries ( Nigeria Customs Service) — Alex, who works with the United Nations Commission on International Trade Law, expressed curiosity regarding whether the…
S25
Connecting open code with policymakers to development | IGF 2023 WS #500 — Helani Galpaya:I mean, from a development perspective, understanding where we are in whatever those development objectiv…
S26
https://dig.watch/event/india-ai-impact-summit-2026/building-the-future-stpi-global-partnerships-startup-felicitation-2026 — In areas like textiles, pharmaceuticals, etc. The question now is, how do we reliably move from ideas to impact and be m…
S27
HealthAI: The Global Agency for Responsible AI in Health — Some countries, mainly those with the highest gross domestic product (GDP) and the most advanced technology sectors, hav…
S28
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Audience: Hello. I’m also a researcher in the AI policy lab. And I also want to comment on this. I also want to comment …
S29
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — An audience member emphasizes the importance of research and continuous stakeholder engagement in policy formulation. Th…
S30
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The discussion revealed a common theme across different contexts: the gap between policy ambition and implementation cap…
S31
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Development | Infrastructure Roy Jakobs argues that AI provides clinicians with fast and accurate data to support daily…
S33
TIMELINE — Early disease detection through the analysis of medical images.
S34
REDUCED MORTALITY — – Healthcare, nursing and long-term care is increasingly based on digital data and its provision is personalised and t…
S35
Global Enterprises Show How to Scale Responsible AI — The implementation challenge extends beyond organisational commitment to practical tooling and automation. Gurnani empha…
S36
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Additionally,public-private partnershipsare essential for scaling sustainability initiatives. Companies invest in on-sit…
S37
WS #462 Bridging the Compute Divide a Global Alliance for AI — Fabro notes that 81 countries have national AI plans according to observatory rankings, with Brazil releasing its plan r…
S38
Secure Finance Risk-Based AI Policy for the Banking Sector — Embedded governance is not regulatory burden.It is strategic imperative.It ensures that innovation is sustainable, trust…
S39
MedTech and AI Innovations in Public Health Systems — And I’m going to do that. be within the delivery thing, not as a layer on top. And if you then, if we focus on, you know…
S40
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Assurance must be built into system development lifecycle rather than bolted on at the end
S41
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — – Thomas Schneider- Abhishek Singh Collective organization for data rights Cooperative models allow users to collectiv…
S42
WS #214 Youth-Led Digital Futures: Integrating Perspectives and Governance — – Creating regional data cooperatives that balance local needs with broader policy frameworks – High costs of data coll…
S43
Enhancing CSO participation in global digital policy processes: Roles, structures, and accountability — In response, SMEX conducted a policy analysis of the government’s ‘Impact’ platform, uncovering the absence of a privacy…
S44
WS #100 Integrating the Global South in Global AI Governance — Nibal Idlebi: I believe there are some initiatives that there are some practice in a way or another to have this to en…
S45
Scaling AI for Billions_ Building Digital Public Infrastructure — This comment introduced the concept of motivational asymmetry that influenced later discussions. Dharshan later built on…
S46
AI Governance Dialogue: Steering the future of AI — This comment addresses a fundamental flaw in top-down governance approaches, highlighting that trust cannot be imposed e…
S47
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — This observation prompted several panelists to emphasize problem-first rather than technology-first approaches. It influ…
S48
Building Scalable AI Through Global South Partnerships — This comment deepened the discussion by introducing the concept of ‘pull vs. push’ in technology adoption, which became …
S49
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — In conclusion, the discussion shed light on various aspects related to data governance in the healthcare sector. The inf…
S50
Fixing Healthcare, Digitally — Data ownership could differ depending on whether data is anonymised or not. It is also noted that data ownership may di…
S51
Building Inclusive Societies with AI — So the adoption depends on the profile of the workers inside and how far they have adopted. And typically we design one …
S52
Global Enterprises Show How to Scale Responsible AI — The implementation challenge extends beyond organisational commitment to practical tooling and automation. Gurnani empha…
S53
Advancing Scientific AI with Safety Ethics and Responsibility — And also, very importantly, how we have to also see it from the context of, you know, people doing their own thing, DIY …
S54
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S55
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S56
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S57
How Small AI Solutions Are Creating Big Social Change — But having said that, I also want to say one point. I mean, just to respond to devices that can do computing, devices ar…
S58
Panel Discussion AI in Healthcare India AI Impact Summit — “One of the things that are shifting a lot of use cases … diagnostic technology evolve so fast that we can take it to …
S59
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — And I think that’s the biggest barrier. And companies, every company is impacted. And the barrier is the resistance. An…
S60
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S61
MedTech and AI Innovations in Public Health Systems — Government has launched SAHI (Strategy for Artificial Intelligence in Public Health) to address specialist shortages and…
S62
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Social and economic development | Artificial intelligence
S63
Conversational AI in low income & resource settings | IGF 2023 — Examples include cancer screening or diabetic retinopathy screening programs
S64
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — An audience member emphasizes the importance of research and continuous stakeholder engagement in policy formulation. Th…
S65
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Policy needs to be at a principle level because if it becomes too detailed, it becomes hard to maintain, especially with…
S66
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The discussion revealed a common theme across different contexts: the gap between policy ambition and implementation cap…
S67
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Development | Infrastructure Roy Jakobs argues that AI provides clinicians with fast and accurate data to support daily…
S68
CONCEPT — To improve the diagnosis of diseases and the selection of the most effective treatment methods, the main priority is the…
S70
Artificial Intelligence: — AI will bring preventive healthcare to the next level, while advancing diagnosis and treatment procedures, for instance …
S71
Revolutionising medicine with AI: From early detection to precision care — It has been more than four years since AI was first introduced intoclinical trials involving humans. Even back then, it …
S72
WS #462 Bridging the Compute Divide a Global Alliance for AI — Fabro notes that 81 countries have national AI plans according to observatory rankings, with Brazil releasing its plan r…
S73
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Additionally,public-private partnershipsare essential for scaling sustainability initiatives. Companies invest in on-sit…
S74
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — The participant explains that India is following the same successful approach used for DPI development, where basic buil…
S75
Global Enterprises Show How to Scale Responsible AI — The implementation challenge extends beyond organisational commitment to practical tooling and automation. Gurnani empha…
S76
New plan outlines how India will democratise AI infrastructure — Indiais moving to rebalance access to AI infrastructureas part of a new national push to close gaps in computing power a…
S77
Keynote-Rishad Premji — Healthcare applications include earlier disease screening and strengthened rural care, while education benefits include …
S78
The CyberseCuriTy sTraTegy of LaTvia 2023-2026 — 10 Resilience, Deterrence, and Defence: Building Cybersecurity in the EU, available: https://eur-lex.europa.eu/legal-con…
S79
AI and the future of digital global supply chains (UNCTAD) — In conclusion, AI has emerged as a powerful tool that can significantly impact trade logistics. It can optimize routes a…
S80
Shaping the Future AI Strategies for Jobs and Economic Development — Telemedicine and remote healthcare delivery can serve dispersed populations effectively
S81
Artificial Intelligence & Emerging Tech — Kamesh Shekar:Thanks for that question, Jennifer. And some great points have come out from diverse regions. I try to not…
S82
Working Group Members: — Health systems must move toward universal health coverage and shift…
S83
What is it about AI that we need to regulate? — A consistent theme was the need for multi-stakeholder approaches rather than purely state-centric processes. TheWorkshop…
S84
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — A critical theme throughout the discussion was the need for problem-driven rather than technology-driven approaches. Gho…
S85
Driving Social Good with AI_ Evaluation and Open Source at Scale — Kumar argued that organizations should begin with red teaming to identify specific vulnerabilities before creating bench…
S86
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — Sir Humphrey says, Prime Minister, when privacy, innovation, geopolitics, and economic growth are all mentioned in the s…
S87
Parliamentary Session 5 Parliamentary Exchange Enhancing Digital Policy Practices — Ashley Sauls from South Africa provided multilingual greetings and highlighted his country’s multi-faceted legislative r…
S88
Building Public Interest AI Catalytic Funding for Equitable Compute Access — The panelists challenged the narrow focus on compute ownership, with Martin Tisné warning against potential “white eleph…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Shri Saurabh Jain
4 arguments · 159 words per minute · 1216 words · 456 seconds
Argument 1
SAHI strategy for universal health coverage (Shri Saurabh Jain)
EXPLANATION
The speaker outlines the government’s SAHI (Strategy for Artificial Intelligence in Public Health) as a national framework to leverage AI for achieving universal health coverage. The strategy focuses on deploying AI tools to address specialist shortages and improve service quality, especially in rural areas.
EVIDENCE
He introduces SAHI as the AI strategy launched by the Government of India and notes that AI is already being used for tasks such as X-ray scanning and diabetic retinopathy screening, enabling quality care in resource-constrained settings [13-14][15-18]. He also highlights that AI can reduce out-of-pocket expenditures, thereby supporting universal health coverage goals [19-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The SAHI framework is described as the national AI strategy for health aimed at universal coverage and specialist shortage mitigation in [S1]; AI applications such as X-ray and diabetic retinopathy screening that underpin the strategy are cited in [S11].
MAJOR DISCUSSION POINT
National AI strategy for public health
DISAGREED WITH
Mr. Shiv Kumar
Argument 2
Bharat Digital Mission & national data platform for AI (Shri Saurabh Jain)
EXPLANATION
The speaker describes how the Bharat Digital Mission (BDM) creates a unified digital health identity and a national data infrastructure that can feed AI applications. Representative health data linked to a unique ID will enable disease surveillance, modeling, and imaging analytics across the country.
EVIDENCE
He explains that health is a state subject but the central government is collaborating with states to ensure representative, high-quality data from every region, and that the ABHA ID (unique health ID) will allow health records to move with the individual, supporting AI-driven disease surveillance and imaging use cases [207-215].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bharat Digital Mission and the unique health ID (ABHA) that enable a unified health data infrastructure are detailed in [S1] and further explained in [S11].
MAJOR DISCUSSION POINT
National data platform for AI
DISAGREED WITH
Mr. Shiv Kumar, Ms. Saraswathi Padmanabhan
Argument 3
AI‑driven tele‑consultation & out‑of‑pocket cost reduction (Shri Saurabh Jain)
EXPLANATION
The speaker highlights tele‑consultation platforms such as eSanjeevani that connect primary‑care doctors with tertiary specialists, reducing the need for expensive private care. This model lowers out‑of‑pocket spending for patients while expanding access to specialist advice.
EVIDENCE
He mentions the eSanjeevani tele-consultation system where a doctor at a Primary Health Centre can obtain expert opinions from tertiary hospitals, and notes that AI-enabled services help cut out-of-pocket expenditures for citizens [18-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled tele-consultation is linked to reduced out-of-pocket spending in the national strategy overview in [S1]; broader financial constraints in health digital programmes are discussed in [S19].
MAJOR DISCUSSION POINT
Cost reduction through tele‑consultation
Argument 4
ICMR sandbox for testing and replicating startup innovations nationally (Shri Saurabh Jain)
EXPLANATION
The speaker confirms that the Indian Council of Medical Research (ICMR) is establishing a sandbox environment where health‑tech startups can pilot AI solutions, evaluate performance, and then scale successful models across states. This mechanism aims to create a repeatable pathway for national rollout.
EVIDENCE
In response to an audience query, he states that ICMR is developing a sandbox for startups to test their models, and that once validated, these solutions can be replicated and scaled nationally [341-346].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sandbox approaches for health-tech testing and scaling are outlined in the AI sandbox literature [S13], the responsible innovation overview [S15], regulator-led sandbox models [S16], and the startup scaling framework [S26].
MAJOR DISCUSSION POINT
Central sandbox for scaling innovations
Shri Saurabh Gaur
2 arguments · 164 words per minute · 2087 words · 761 seconds
Argument 1
Emphasis on preventive health as highest ROI (Shri Saurabh Gaur)
EXPLANATION
The speaker stresses that preventive health interventions deliver the greatest return on investment for the health system. He asks panelists to identify where AI can most effectively strengthen preventive care.
EVIDENCE
During his moderation he notes that “prevention is better than cure and preventive programs will have the highest ROI,” framing the discussion around AI’s role in preventive health [159-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of reforms that move health systems toward universal coverage and emphasize preventive impact is highlighted in the policy discussion of [S12].
MAJOR DISCUSSION POINT
Preventive health as priority
Argument 2
Digital‑literacy gaps and overload of multiple health programmes for frontline staff (Shri Saurabh Gaur)
EXPLANATION
The speaker points out that frontline health workers must manage dozens of programmes, creating a burden that hampers AI adoption. He also highlights challenges related to digital literacy and the ability of staff to use multiple digital tools effectively.
EVIDENCE
He remarks that a multi-purpose health assistant or nurse must handle 25 programmes, and that digital-literacy and adoption challenges are a real issue at the state level [232-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building challenges and digital-literacy gaps for health workers are documented in [S17] and [S18]; infrastructure constraints affecting frontline adoption are listed in [S19].
MAJOR DISCUSSION POINT
Frontline capacity and digital literacy
DISAGREED WITH
Mr. Shiv Kumar, Shri Saurabh Jain
Mr. Shiv Kumar
3 arguments · 185 words per minute · 670 words · 216 seconds
Argument 1
Problem‑first approach & use‑case library for AI adoption (Mr. Shiv Kumar)
EXPLANATION
The speaker argues that AI solutions should be driven by clearly defined public‑health problems rather than technology looking for a problem. He advocates building a library of validated use cases to demonstrate impact before scaling.
EVIDENCE
He explains that innovators often present solutions looking for problems and stresses the need for the state to set agendas and priorities, citing the Andhra government’s Center for Applied Technology as an example, and calls for a use-case library to show evidence of impact [32-38][48-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a problem-driven AI agenda and a use-case library is mentioned in the national AI strategy overview [S1] and reinforced by sandbox and responsible innovation discussions in [S13] and [S15].
MAJOR DISCUSSION POINT
Problem‑driven AI institutionalization
DISAGREED WITH
Shri Saurabh Jain
Argument 2
Data cooperatives & work‑culture transformation (Mr. Shiv Kumar)
EXPLANATION
The speaker proposes that citizens own their health data through cooperatives and receive token‑based incentives, thereby reshaping the work culture around data sharing and AI usage. This model aims to align incentives for both the public and the state.
EVIDENCE
He suggests that people in districts like Nellore should own data via cooperatives, receive reverse tokens for data use, and that this incentive reversal would change work culture and data usage practices [286-296].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of data cooperatives for citizen-owned health data is presented in [S22]; privacy and multistakeholder partnership considerations are discussed in [S20].
MAJOR DISCUSSION POINT
Data ownership and cultural shift
DISAGREED WITH
Shri Saurabh Jain, Ms. Saraswathi Padmanabhan
Argument 3
Data quality, governance & availability constraints (Mr. Shiv Kumar)
EXPLANATION
The speaker highlights that AI models cannot function effectively on poor‑quality or incomplete data, and that many states lack the necessary data infrastructure. He stresses the need for robust data governance to enable AI impact.
EVIDENCE
He notes that AI models struggle when data quality is low, most states do not have adequate data, and that without proper processes AI value will remain limited [197-200].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data infrastructure gaps and governance challenges are outlined in [S19]; legal and regulatory frameworks for data governance are examined in [S24]; the importance of data for development is emphasized in [S25].
MAJOR DISCUSSION POINT
Data quality as a barrier
Ms. Saraswathi Padmanabhan
5 arguments · 171 words per minute · 1528 words · 533 seconds
Argument 1
Workflow integration, change management & incentive design (Ms. Saraswathi Padmanabhan)
EXPLANATION
The speaker emphasizes that AI tools must be seamlessly integrated into existing health‑worker workflows and that change‑management strategies, including incentives, are essential for adoption. Without perceived value, staff will resist new technologies.
EVIDENCE
She describes the need for AI to be embedded in daily tasks, cites resistance among staff, the importance of early adopters for training models, and stresses incentives and connectivity as factors influencing uptake [239-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building and integration of digital tools into health-worker workflows are highlighted in [S17]; connectivity and power barriers that affect integration are listed in [S19].
MAJOR DISCUSSION POINT
Integration and change management
Argument 2
AI‑assisted longitudinal patient records & decision support for clinicians (Ms. Saraswathi Padmanabhan)
EXPLANATION
The speaker outlines how AI can compile and present a citizen’s longitudinal health data to primary‑care doctors, enabling better medication decisions and continuity of care. AI also provides prompts for missed investigations and evidence‑based treatment guidelines.
EVIDENCE
She explains that AI can structure a patient’s full history, show trends such as HbA1c changes, and alert clinicians to necessary investigations, thereby supporting decision-making while keeping the doctor as the final decision-maker [62-70][71-80].
MAJOR DISCUSSION POINT
Clinical decision support
DISAGREED WITH
Mr. Shiv Kumar, Shri Saurabh Jain
Argument 3
Population‑level risk scoring & wellness composite analytics (Ms. Saraswathi Padmanabhan)
EXPLANATION
The speaker describes a composite wellness score that combines patient data and environmental factors to identify high‑risk areas and individuals. This analytics tool helps health departments prioritize preventive interventions.
EVIDENCE
She mentions collaboration with the Andhra Pradesh government to create a wellness composite score that aggregates patient and environmental data to predict risks and guide proactive care [100-101].
MAJOR DISCUSSION POINT
Risk scoring for public health
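The composite wellness score described above can be sketched as a weighted aggregation of patient and environmental indicators. This is a minimal illustrative sketch only: the indicator set, weights, normalisation ranges, and threshold below are assumptions for exposition, not TataMD's actual model, which the panel did not disclose.

```python
# Hypothetical composite wellness score combining patient and environmental
# indicators. All weights, ranges, and thresholds are illustrative.

def wellness_score(hba1c, bp_systolic, air_quality_index, water_safe):
    """Return a 0-100 composite score (higher = lower risk)."""
    clamp = lambda x: min(max(x, 0.0), 1.0)  # bound each risk term to [0, 1]
    risk = (
        0.40 * clamp((hba1c - 5.7) / 4.0)          # glycaemic risk (assumed range)
        + 0.30 * clamp((bp_systolic - 120) / 60.0)  # blood-pressure risk
        + 0.20 * clamp(air_quality_index / 500.0)   # environmental exposure
        + 0.10 * (0.0 if water_safe else 1.0)       # sanitation factor
    )
    return round(100 * (1 - risk), 1)

def flag_high_risk(scores, threshold=60):
    """Wards or individuals below the threshold are prioritised for outreach."""
    return [sid for sid, s in scores.items() if s < threshold]
```

A health department could run such a score over aggregated ward-level data to rank areas for preventive interventions, which matches the prioritisation use the speaker describes.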
Argument 4
Integration into existing workflows, connectivity, power & incentive issues (Ms. Saraswathi Padmanabhan)
EXPLANATION
The speaker points out practical barriers such as unreliable connectivity, power supply, and lack of incentives that hinder AI deployment across the health system. She stresses that these infrastructural challenges must be addressed for successful scaling.
EVIDENCE
She notes that while Andhra Pradesh faces fewer connectivity and power issues, many other states struggle with these constraints, and that incentives are crucial for adoption [245-258].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Infrastructure challenges such as unreliable connectivity and power supply, as well as the need for incentives, are documented in [S19]; workforce capacity concerns are also noted in [S17].
MAJOR DISCUSSION POINT
Infrastructure and incentive barriers
Argument 5
TataMD collaboration on digital backbone and care‑coordination tools (Ms. Saraswathi Padmanabhan)
EXPLANATION
The speaker highlights TataMD as a private‑sector partner contributing to the digital backbone for Andhra Pradesh’s public health system, focusing on care coordination and AI‑enabled tools for clinicians and frontline workers.
EVIDENCE
She references TataMD’s presence at the exhibition, its role in building AI-assisted longitudinal records, decision-support prompts, and tools for ASHA workers to prioritize high-risk pregnancies [59-61][62-70].
MAJOR DISCUSSION POINT
Public‑private partnership for digital health
Mr. Sanjay Seth
2 arguments · 150 words per minute · 1017 words · 404 seconds
Argument 1
Predictive detection of program failures and targeted action (Mr. Sanjay Seth)
EXPLANATION
The speaker argues that AI can analyze program data in real time to predict where implementation failures are likely to occur, allowing timely corrective actions. This predictive capability moves dashboards from reporting past failures to preventing future ones.
EVIDENCE
He explains that AI can identify likely failure points, pinpoint responsible actors, and trigger alerts so that appropriate personnel can intervene before a program collapses [110-118].
MAJOR DISCUSSION POINT
AI for program monitoring
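The shift from retrospective dashboards to predictive alerts can be sketched with a simple rule over per-site completion rates. This is a hedged illustration of the idea, not Seth's system: the thresholds, the site data shape, and the `district_coordinator` escalation role are all assumptions, and a production system would use a trained model rather than a heuristic.

```python
# Illustrative sketch: flag sites where a programme is likely to fail, either
# because the latest completion rate is low or because it has been strictly
# declining for several consecutive weeks. Thresholds are assumptions.

def predict_failures(sites, min_rate=0.6, declining_weeks=2):
    """Return alerts for at-risk sites, each routed to a responsible actor."""
    alerts = []
    for site, rates in sites.items():
        drops = sum(a > b for a, b in zip(rates, rates[1:]))
        steadily_falling = drops == len(rates) - 1 and drops >= declining_weeks
        if rates[-1] < min_rate or steadily_falling:
            alerts.append({
                "site": site,
                "latest_rate": rates[-1],
                "notify": "district_coordinator",  # hypothetical escalation role
            })
    return alerts
```

The point of the rule is the one the speaker makes: the first site below is flagged before its rate ever crosses the failure threshold, so corrective action can happen in advance rather than after collapse.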
Argument 2
AI‑driven tobacco‑control monitoring, image verification & personalized messaging (Mr. Sanjay Seth)
EXPLANATION
The speaker details how AI is used in a tobacco‑control program across 20,000 schools to automatically verify activity completion via image recognition and to send personalized, language‑specific messages to teachers, dramatically improving compliance.
EVIDENCE
He reports that AI image recognition achieves 98% accuracy in confirming activity execution, and that personalized messages are sent to 40,000 teachers in their preferred language, leading to orders-of-magnitude improvement in program effectiveness [182-191].
MAJOR DISCUSSION POINT
AI‑enabled program implementation
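The verification-and-messaging loop described above can be sketched as two steps: an image-recognition confidence check gating automatic confirmation, and a follow-up message composed in the teacher's preferred language. Everything here is illustrative, assuming a confidence threshold, report fields, and message templates that the panel did not specify.

```python
# Hypothetical sketch of the tobacco-control reporting loop: auto-confirm an
# activity photo only above a confidence threshold, then compose a message in
# the teacher's language. Templates and field names are assumptions.

TEMPLATES = {
    "hi": "गतिविधि सत्यापित: {school}",   # Hindi (illustrative translation)
    "en": "Activity verified: {school}",
}

def verify_and_notify(report, confidence_threshold=0.9):
    """Return the confirmation message, or None to route to manual review."""
    if report["model_confidence"] < confidence_threshold:
        return None  # low-confidence photos go to a human reviewer
    template = TEMPLATES.get(report["teacher_language"], TEMPLATES["en"])
    return template.format(school=report["school"])
```

At the scale the speaker cites (20,000 schools, 40,000 teachers), the value of this pattern is that only the sub-threshold residue needs human attention, while confirmed activities generate immediate, language-appropriate feedback.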
Dr. Rakesh Kalapala
3 arguments · 193 words per minute · 1035 words · 320 seconds
Argument 1
Cost‑effective AI diagnostics (e.g., fatty‑liver detection) (Dr. Rakesh Kalapala)
EXPLANATION
The speaker presents an AI‑based diagnostic algorithm for fatty‑liver detection that costs only 500 rupees per test, compared with a conventional machine costing 1.2 crore rupees and 5,000 rupees per scan. This illustrates how AI can dramatically lower diagnostic costs while maintaining accuracy.
EVIDENCE
He describes the development of a pure-AI model that detects fatty liver for 500 rupees, versus a traditional machine costing 1.2 crore and charging 5,000 rupees per use, highlighting the economic advantage for gastroenterology practice [140-142].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-based diagnostic tools that lower costs, such as TB and diabetic retinopathy screening, are described in [S11]; broader AI diagnostic cost benefits are referenced in the national AI strategy summary [S1].
MAJOR DISCUSSION POINT
Affordable AI diagnostics
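The cost figures cited here support some back-of-envelope arithmetic. The sketch below uses only the panel's numbers (₹1.2 crore machine capital, ₹5,000 per conventional scan, ₹500 per AI test); no utilisation data was given, so no annual totals are assumed.

```python
# Back-of-envelope comparison of the fatty-liver diagnostic costs cited on
# the panel. Pure arithmetic on the stated figures.

CRORE = 10_000_000  # 1 crore = 10 million rupees

machine_capital = 1.2 * CRORE   # conventional machine, ₹12,000,000
conventional_fee = 5_000        # ₹ charged per conventional scan
ai_fee = 500                    # ₹ per AI-based test

saving_per_scan = conventional_fee - ai_fee           # what each patient saves
scans_to_recoup = machine_capital / conventional_fee  # scans billed before the
                                                      # hardware alone is paid off
print(f"saving per scan: ₹{saving_per_scan:,}; "
      f"scans to recoup machine capital: {scans_to_recoup:,.0f}")
```

On these figures, each AI-based test is a tenth of the conventional fee, and a provider would need thousands of billed scans merely to recover the machine's capital cost, which is the economic advantage the speaker emphasises.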
Argument 2
AI‑enabled discharge summaries & bed‑management automation (Dr. Rakesh Kalapala)
EXPLANATION
The speaker explains that AI can automate the generation of discharge summaries, reducing processing time from 8‑10 hours to about half an hour, and can assist with bed‑management in hospitals, improving operational efficiency.
EVIDENCE
He notes that traditional discharge summaries take 8-10 hours, whereas an AI-enabled system can produce them within 30 minutes, and that AI can also support electronic medical record-based bed management to streamline patient flow [146-148].
MAJOR DISCUSSION POINT
Operational efficiency through AI
Argument 3
Private‑sector validation, hand‑over of solutions and public‑private platforms (Dr. Rakesh Kalapala)
EXPLANATION
The speaker describes a collaborative platform involving the AIM Foundation, Triple I, and academic partners that validates health‑tech solutions in a clinical setting before handing them over to public health systems. This model accelerates adoption by leveraging private‑sector innovation within a neutral framework.
EVIDENCE
He explains that the platform brings together innovators, provides hand-holding and validation, and cites the example of the “Journey Mitra” tool deployed with ASHA workers for high-risk pregnancy monitoring, illustrating the public-private hand-over process [150-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A framework for validating health-tech startups and transferring solutions to the public sector is discussed in the startup ecosystem guidance [S26]; sandbox and responsible innovation mechanisms provide additional context in [S13] and [S15].
MAJOR DISCUSSION POINT
Public‑private validation pathway
Audience
1 argument · 172 words per minute · 302 words · 105 seconds
Argument 1
Central sandbox for scaling startup solutions (Audience)
EXPLANATION
An audience member asks whether the central government can create a sandbox platform that validates health‑tech startups and enables their solutions to be scaled nationally, reducing the need for repeated pilots in each state.
EVIDENCE
The participant raises the question about a national sandbox to test and replicate startup innovations, noting the current reliance on state-level pilots and limited private investment in MedTech [335-340].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of a national sandbox to test and scale health-tech innovations is covered in the AI sandbox literature [S13], the responsible innovation overview [S15], regulator-led sandbox models [S16], and the startup scaling framework [S26].
MAJOR DISCUSSION POINT
National sandbox proposal
Agreements
Agreement Points
AI is seen as a key tool to strengthen preventive health programmes and to predict and prevent implementation failures.
Speakers: Shri Saurabh Gaur, Mr. Sanjay Seth
Emphasis on preventive health as highest ROI · Predictive detection of program failures and targeted action
Both speakers highlighted that preventive health delivers the highest return on investment and that AI can be used to anticipate where programmes are likely to fail, enabling timely corrective actions [159-162][110-118].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with calls to embed AI within health delivery to generate outcome-focused feedback loops, as highlighted in MedTech and AI Innovations in Public Health Systems [S39].
High‑quality, representative health data is essential for effective AI deployment.
Speakers: Shri Saurabh Jain, Mr. Shiv Kumar
Bharat Digital Mission & national data platform for AI · Data quality, governance & availability constraints
Jain stressed the need for representative, high-quality data from all regions via the Bharat Digital Mission, while Kumar pointed out that poor data quality hampers AI models and many states lack adequate data infrastructure [207-215][197-200].
POLICY CONTEXT (KNOWLEDGE BASE)
The necessity of robust data mirrors concerns about data silos and readiness for AI scale noted in AI as critical infrastructure for continuity in public services [S56] and the emphasis on built-in assurance throughout the AI lifecycle [S40].
Successful AI adoption requires seamless workflow integration, change‑management and incentive structures.
Speakers: Ms. Saraswathi Padmanabhan, Mr. Shiv Kumar
Workflow integration, change management & incentive design · Problem‑first approach & use‑case library for AI adoption
Both emphasized that AI tools must be embedded in existing health-worker workflows, that early adopters are needed to train models, and that incentives and change-management are critical for uptake [239-259][32-38][48-54].
POLICY CONTEXT (KNOWLEDGE BASE)
These requirements echo the recommendation for embedded governance rather than bolt-on solutions in the Secure Finance Risk-Based AI Policy [S38] and the need for AI to be integrated within delivery systems rather than as an overlay [S39].
A sandbox or use‑case library is needed to test, validate and scale AI solutions across states.
Speakers: Mr. Shiv Kumar, Shri Saurabh Jain
Problem‑first approach & use‑case library for AI adoption | ICMR sandbox for testing and replicating startup innovations nationally
Kumar called for a curated library of validated use cases, while Jain confirmed that ICMR is building a sandbox to pilot and then replicate successful health-tech innovations nationally [48-54][341-346].
POLICY CONTEXT (KNOWLEDGE BASE)
The concept matches the evidence-based use-case library approach advocated in Open Forum #53 AI for Sustainable Development [S47] and the pull-vs-push scaling discussion in Building Scalable AI Through Global South Partnerships [S48].
AI can markedly reduce out‑of‑pocket expenditures and lower diagnostic costs.
Speakers: Shri Saurabh Jain, Dr. Rakesh Kalapala
AI‑driven tele‑consultation & out‑of‑pocket cost reduction | Cost‑effective AI diagnostics (e.g., fatty‑liver detection)
Jain highlighted tele-consultation and AI-enabled services that cut patient expenses, while Rakesh described a low-cost AI algorithm for fatty-liver detection that is far cheaper than conventional equipment [18-20][140-142].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence from AI-driven diagnostic technologies shows dramatic cost reductions and home-based screening, directly supporting this claim [S58].
Similar Viewpoints
Both agree that without representative, high‑quality data the AI ecosystem cannot deliver reliable outcomes, and that data governance is a prerequisite for scaling AI in health [207-215][197-200].
Speakers: Shri Saurabh Jain, Mr. Shiv Kumar
Bharat Digital Mission & national data platform for AI | Data quality, governance & availability constraints
Both stress that AI solutions must be problem‑driven, integrated into daily workflows, and supported by change‑management and incentives to achieve adoption [32-38][48-54][239-259].
Speakers: Mr. Shiv Kumar, Ms. Saraswathi Padmanabhan
Problem‑first approach & use‑case library for AI adoption | Workflow integration, change management & incentive design
Both see affordable AI‑driven diagnostics as a cornerstone for expanding universal health coverage and reducing costs for patients and the system [140-142][13-14][15-18].
Speakers: Dr. Rakesh Kalapala, Shri Saurabh Jain
Cost‑effective AI diagnostics (e.g., fatty‑liver detection) | SAHI strategy for universal health coverage
Both argue that preventive health programmes deliver the highest ROI and that AI can be leveraged to anticipate and mitigate failures in such programmes [159-162][110-118].
Speakers: Shri Saurabh Gaur, Mr. Sanjay Seth
Emphasis on preventive health as highest ROI | Predictive detection of program failures and targeted action
Both underline that AI must be embedded within the delivery system to provide actionable, real‑time insights rather than being a detached reporting layer [120-121].
Speakers: Mr. Shiv Kumar, Mr. Sanjay Seth
Data quality, governance & availability constraints | Predictive detection of program failures and targeted action
Unexpected Consensus
AI must be embedded inside the delivery system rather than operating as a separate overlay.
Speakers: Mr. Shiv Kumar, Mr. Sanjay Seth
Data quality, governance & availability constraints | Predictive detection of program failures and targeted action
Both speakers, coming from policy and programme-implementation perspectives respectively, explicitly stated that AI should exist inside the delivery system and not be a top-layer dashboard, a point not raised by other participants [120-121].
POLICY CONTEXT (KNOWLEDGE BASE)
This recommendation is explicitly stated in MedTech and AI Innovations in Public Health Systems, which urges AI to be “within the delivery thing, not as a layer on top” [S39].
Overall Assessment

There is strong consensus that AI can enhance preventive health, reduce costs, and improve program effectiveness, but its success hinges on high‑quality data, workflow integration, and structured testing (sandbox/use‑case library). Public‑private collaboration and incentive mechanisms are also widely endorsed.

High consensus on data quality, preventive focus, and need for integration; moderate consensus on scaling mechanisms and PPP. The alignment suggests a solid foundation for coordinated AI policy and implementation in India’s public health system.

Differences
Different Viewpoints
Primary barrier to AI adoption – work culture & data ownership versus data quality, infrastructure and digital literacy
Speakers: Mr. Shiv Kumar, Shri Saurabh Gaur, Shri Saurabh Jain
Data cooperatives & work‑culture transformation (Mr. Shiv Kumar) | Digital‑literacy gaps and overload of multiple health programmes for frontline staff (Shri Saurabh Gaur) | Bharat Digital Mission & national data platform for AI (Shri Saurabh Jain)
Shiv Kumar argues that the biggest obstacle is work culture and proposes citizen data cooperatives with token incentives to reshape behaviour [286-291][292-296]. Gaur points to frontline staff being overwhelmed by 25 programmes and lacking digital literacy, which hampers AI uptake [232-236]. Jain stresses that high-quality, representative data via the national health ID and digitisation are essential for AI to work [207-213][214-215]. The speakers therefore disagree on which barrier is primary and on the remedy.
POLICY CONTEXT (KNOWLEDGE BASE)
Cultural resistance as the chief obstacle is highlighted in the World Economic Forum panel on AI adoption barriers [S59] and reinforced by governance challenges noted in AI critical infrastructure reports [S56].
Approach to scaling AI solutions – problem‑first, evidence‑based use‑case library versus top‑down national strategy rollout
Speakers: Mr. Shiv Kumar, Shri Saurabh Jain
Problem‑first approach & use‑case library for AI adoption (Mr. Shiv Kumar) | SAHI strategy for universal health coverage (Shri Saurabh Jain)
Shiv Kumar stresses that AI should be driven by clearly defined public-health problems and that a curated use-case library is needed before scaling any solution [32-38][48-54]. Jain describes the SAHI strategy as a central, government-led framework that is already deploying AI tools at scale to achieve universal health coverage [13-14][207-215]. The two positions differ on whether scaling should be evidence-driven from the ground up or driven by a national policy roadmap.
POLICY CONTEXT (KNOWLEDGE BASE)
The problem-first, evidence-based scaling model is championed in Open Forum #53 AI for Sustainable Development [S47] and contrasted with top-down “push” models in Building Scalable AI Through Global South Partnerships [S48].
Data ownership model – citizen‑owned data cooperatives with token incentives versus a centralized health ID system
Speakers: Mr. Shiv Kumar, Shri Saurabh Jain, Ms. Saraswathi Padmanabhan
Data cooperatives & work‑culture transformation (Mr. Shiv Kumar) | Bharat Digital Mission & national data platform for AI (Shri Saurabh Jain) | AI‑assisted longitudinal patient records & decision support for clinicians (Ms. Saraswathi Padmanabhan)
Shiv Kumar proposes that health data be owned by citizens through district-level data cooperatives that reward data sharing with reverse tokens [292-296]. Jain outlines a centralized unique health ID (ABHA) that links records across the system but does not address ownership rights, focusing on data availability for AI [214-215]. Saraswathi describes using aggregated patient data for clinical decision support without mentioning ownership, assuming data is centrally accessible [62-70]. The differing visions of data governance constitute a disagreement.
POLICY CONTEXT (KNOWLEDGE BASE)
Cooperative data-rights frameworks with token incentives are discussed in Open Forum #64 Local AI Policy Pathways [S41] and Youth-Led Digital Futures on regional data cooperatives [S42].
Unexpected Differences
Work culture identified as the single biggest barrier to AI impact
Speakers: Mr. Shiv Kumar, Other panelists (e.g., Shri Saurabh Gaur, Shri Saurabh Jain)
Data cooperatives & work‑culture transformation (Mr. Shiv Kumar) | Digital‑literacy gaps and overload of multiple health programmes for frontline staff (Shri Saurabh Gaur) | Bharat Digital Mission & national data platform for AI (Shri Saurabh Jain)
Shiv Kumar’s claim that ‘our single biggest problem is going to be work culture’ [286-291] is unexpected because the rest of the discussion focuses on technical, data-quality, and infrastructure issues rather than organisational culture.
POLICY CONTEXT (KNOWLEDGE BASE)
This assessment is corroborated by the World Economic Forum discussion that identifies cultural resistance as the primary barrier to AI scaling [S59].
Proposal of citizen‑owned data cooperatives with token incentives
Speakers: Mr. Shiv Kumar, Other panelists (e.g., Shri Saurabh Jain, Ms. Saraswathi Padmanabhan)
Data cooperatives & work‑culture transformation (Mr. Shiv Kumar) | Bharat Digital Mission & national data platform for AI (Shri Saurabh Jain) | AI‑assisted longitudinal patient records & decision support for clinicians (Ms. Saraswathi Padmanabhan)
The suggestion that health data be owned by citizens through cooperatives and monetised via reverse tokens [292-296] does not appear elsewhere in the panel, making it an unexpected divergence from the more conventional centralized data-sharing approach.
POLICY CONTEXT (KNOWLEDGE BASE)
The proposal aligns with cooperative models that enable collective negotiation and ownership stakes, providing incentive mechanisms, as outlined in Open Forum #64 Local AI Policy Pathways [S41] and further examined in Youth-Led Digital Futures [S42].
Overall Assessment

The panel shows broad consensus that AI can improve public‑health efficiency, cost‑effectiveness and preventive care. However, there are clear disagreements on the primary barriers (work‑culture vs data‑quality/digital literacy), on the optimal scaling pathway (ground‑up evidence‑driven use‑case library vs top‑down national strategy), and on data governance (citizen‑owned cooperatives vs centralized health ID).

Moderate – while all participants share the same overarching goal of leveraging AI for public health, the differing views on cultural, technical and governance levers indicate that policy design and implementation will need to reconcile these perspectives. Failure to address the work‑culture and data‑ownership issues could limit the effectiveness of otherwise technically sound AI deployments.

Partial Agreements
All three agree that AI can substantially improve public‑health outcomes and reduce costs – Gaur highlights preventive ROI [159-162], Jain points to tele‑consultation lowering out‑of‑pocket expenses [18-20], and Saraswathi explains AI‑enabled longitudinal records and decision prompts for clinicians [62-70]. They differ on the primary pathway (prevention, tele‑consultation, or workflow integration) to achieve the shared goal.
Speakers: Shri Saurabh Gaur, Shri Saurabh Jain, Ms. Saraswathi Padmanabhan
Emphasis on preventive health as highest ROI (Shri Saurabh Gaur) | AI‑driven tele‑consultation & out‑of‑pocket cost reduction (Shri Saurabh Jain) | AI‑assisted longitudinal patient records & decision support for clinicians (Ms. Saraswathi Padmanabhan)
Both recognise the need for public‑private collaboration. Rakesh describes a neutral platform (AIM Foundation, Triple I, etc.) that validates solutions before handing them to the public system [150-155]. Gaur stresses the practical challenges of frontline staff capacity and digital literacy that such collaborations must overcome [232-236]. They agree on partnership but differ on the operational focus.
Speakers: Dr. Rakesh Kalapala, Shri Saurabh Gaur
Private‑sector validation, hand‑over of solutions and public‑private platforms (Dr. Rakesh Kalapala) | Digital‑literacy gaps and overload of multiple health programmes for frontline staff (Shri Saurabh Gaur)
Takeaways
Key takeaways
India’s SAHI strategy and the Bharat Digital Mission provide a national framework for AI‑enabled universal health coverage, focusing on cost‑effectiveness, data digitisation and reduced out‑of‑pocket spending.
Successful AI adoption requires a problem‑first approach, a curated use‑case library, and clear policy guardrails; solutions should be matched to identified public‑health problems.
Integration of AI into existing clinical and operational workflows, supported by change‑management, incentives and training, is essential for frontline acceptance.
AI can deliver tangible clinical benefits: low‑cost diagnostics (e.g., fatty‑liver detection), automated discharge summaries, bed‑management, tele‑consultation and longitudinal patient records.
Population‑level analytics (risk‑scoring, disease surveillance, wellness composites) can improve care coordination and enable proactive preventive interventions.
Preventive health programmes (tobacco control, adolescent health, NCD behaviour change) offer the highest ROI; AI can predict programme failures, prioritize actions and personalize messaging.
Key barriers to scaling include data quality and governance, connectivity/power constraints, digital‑literacy gaps, overload of multiple health programmes for frontline staff, and entrenched work‑culture attitudes.
Public‑private partnerships (TataMD, AIM Foundation, private hospitals, ICMR sandbox) are critical for rapid validation, hand‑over and national scaling of AI solutions.
Resolutions and action items
Government of India will work with state governments to ensure representative, high‑quality health data for AI model training (as part of SAHI and Bharat Digital Mission).
A use‑case library for AI in public health will be created and maintained (proposed by Mr. Shiv Kumar).
Data cooperatives with reverse‑token incentives for citizens’ health data are to be explored (suggested by Mr. Shiv Kumar).
TataMD will continue development of the digital backbone (Project Sanjeevani) and coordinate with the state for care‑coordination tools.
The AIM Foundation will set up a biodesign lab in Andhra Pradesh to foster AI‑driven health innovations.
ICMR will establish a sandbox platform for testing and scaling startup AI solutions nationally (mentioned by audience and Shri Saurabh Jain).
Early‑adopter clinicians and frontline workers will be identified to pilot AI tools, provide feedback and train models before wider rollout (suggested by Ms. Saraswathi Padmanabhan).
AI‑enabled monitoring of the tobacco‑control programme (image verification, predictive failure alerts, personalized teacher messages) will be expanded across districts (implemented by Mr. Sanjay Seth).
Private‑sector validation platforms (e.g., collaboration with IIT Hyderabad, ISB) will be used to hand over vetted solutions to public health systems (mentioned by Dr. Rakesh Kalapala).
Unresolved issues
Concrete mechanisms for ensuring consistent data quality, standardisation and governance across all states remain undefined.
Specific incentive structures and change‑management road‑maps for frontline health workers have not been finalised.
A clear, repeatable process for scaling successful pilot AI solutions from individual states to a national level is still lacking.
Approaches to AI‑driven mental‑health screening (voice/video analysis, privacy safeguards) were raised but no concrete plan was presented.
How to address digital‑literacy and connectivity challenges in less‑advanced states was discussed but no solution was agreed upon.
Sustainable financing models for MedTech startups, given limited VC interest, were highlighted without a definitive funding framework.
Suggested compromises
Adopt data cooperatives with reverse‑token incentives, balancing citizen data ownership with the need for large training datasets (Mr. Shiv Kumar).
Integrate AI tools directly into existing workflows and use early‑adopter clinicians to demonstrate value before broader deployment, mitigating resistance (Ms. Saraswathi Padmanabhan).
Create a neutral public‑private validation platform (involving IIT Hyderabad, ISB, AIM Foundation) that allows private innovators to test solutions and then hand them over to the public system, aligning speed of innovation with public‑sector scalability (Dr. Rakesh Kalapala).
Thought Provoking Comments
Solutions are looking for problems, not the other way around. We must marry the problem important for the state with the solution, build a use‑case library, and have the state test feasibility, outcomes and cost‑savings.
This reframed the usual startup‑centric narrative, emphasizing a problem‑first approach and institutional mechanisms (use‑case library, state‑led testing) needed for scaling AI in public health.
Set the agenda for the rest of the panel, prompting others to discuss how to align innovations with state‑identified priorities and to consider evidence generation before adoption.
Speaker: Mr. Shiv Kumar
Dashboards only tell me what I have not done; they don’t tell me what I am supposed to do. AI must be embedded inside the delivery system, not sit on top of it, to predict failures and trigger actions.
Highlighted a practical limitation of current data tools and introduced the concept of predictive, prescriptive AI that can guide real‑time actions, shifting the conversation from descriptive analytics to actionable intelligence.
Redirected the discussion toward operational integration of AI, leading to deeper talks about early warning systems, real‑time alerts, and the need for AI to be part of frontline workflows.
Speaker: Mr. Sanjay Seth
We developed an AI model that detects fatty liver for ₹500 versus a ₹1.2 crore machine charging ₹5,000 per scan. This need‑based innovation dramatically cuts cost and scales diagnosis.
Provided a concrete, cost‑effective example of AI delivering value in a resource‑constrained setting, illustrating how AI can replace expensive hardware with software‑only solutions.
Grounded the abstract discussion in a tangible use‑case, prompting other panelists to consider similar low‑cost AI applications and reinforcing the theme of cost‑effectiveness.
Speaker: Dr. Rakesh Kalapala
Through the ABHA digital ID, health records travel with the person, enabling AI for disease surveillance, imaging triage, and automatic population of multiple portals, thus reducing administrative burden for frontline workers.
Connected national digital infrastructure to AI potential, emphasizing data representativeness and interoperability as foundations for scalable AI solutions.
Shifted the conversation to the national policy layer, linking state initiatives to a broader digital health ecosystem and underscoring the importance of data quality and standardization.
Speaker: Shri Saurabh Jain
Technology is just an enabler; the biggest problem is work culture, incentives, and data ownership. People should own their data through cooperatives and receive reverse tokens for its use in AI.
Introduced a provocative governance model that challenges existing data‑centric policies and brings ethics, incentives, and community ownership into the AI debate.
Created a turning point by moving the dialogue from technical implementation to socio‑economic structures, prompting others to reflect on incentives, trust, and sustainable data ecosystems.
Speaker: Mr. Shiv Kumar
If AI is not integrated into the workflow and does not add clear value, adoption will fail. Change management, early adopters, training the model, connectivity, and incentives are critical for scale.
Synthesized practical barriers—workflow integration, change management, incentives—offering a realistic checklist for successful deployment in public health settings.
Deepened the analysis of implementation challenges, leading the moderator to ask for “three key technology integration challenges,” and steering the conversation toward actionable steps.
Speaker: Ms. Saraswathi Padmanabhan
We have piloted the QPR (Question‑Persuade‑Refer) methodology for suicide prevention among 10 lakh students, identifying ~15 % at risk, and are open to sandbox collaborations for AI‑driven mental‑health screening.
Extended the scope of AI applications to mental health, acknowledging privacy concerns while offering a concrete program and a willingness to co‑develop AI tools.
Broadened the thematic coverage of the panel, showing that AI’s role is not limited to imaging or diagnostics but also behavioral health, and invited future collaborations.
Speaker: Shri Saurabh Gaur (responding to audience)
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from high‑level policy aspirations to concrete implementation realities. Shiv Kumar’s problem‑first framing and later emphasis on work culture and data cooperatives set the structural lens through which all participants evaluated AI initiatives. Sanjay Seth’s critique of dashboards and call for embedded, predictive AI redirected focus toward actionable, real‑time solutions. Dr. Kalapala’s low‑cost diagnostic example and Ms. Padmanabhan’s workflow‑integration checklist provided tangible evidence and practical roadmaps, while Shri Jain’s linkage of national digital IDs underscored the foundational role of data infrastructure. The audience’s mental‑health query and Gaur’s response further expanded the domain of AI application. Collectively, these comments introduced new ideas, challenged assumptions, and deepened the dialogue, steering the panel toward a nuanced understanding of the technical, operational, cultural, and governance dimensions needed to scale AI in India’s public health system.

Follow-up Questions
How can we ensure equitable population coverage and inclusion when deploying AI systems at scale in public health?
The opening question about guaranteeing that the population is reached by AI solutions was not fully answered, highlighting the need for strategies to achieve universal, equitable access.
Speaker: Shri Saurabh Gaur
What framework and processes are needed to build a comprehensive, evidence‑based use‑case library for AI in health across diverse settings (e.g., tribal, low‑income, urban)?
Shiv Kumar emphasized the lack of a use‑case library and evidence, indicating a research gap in cataloguing and validating AI applications across varied populations.
Speaker: Mr Shiv Kumar
How can data quality and completeness be improved across states to enable reliable AI model training and deployment?
Both highlighted that poor data quality hampers AI effectiveness, pointing to the need for systematic studies on data collection standards, interoperability, and governance.
Speaker: Mr Shiv Kumar; Shri Saurabh Jain
What are effective approaches to embed AI directly within health service delivery workflows rather than as a separate overlay?
Sanjay stressed that AI must be part of the delivery system to be actionable, suggesting research into integration models, real‑time decision support, and workflow redesign.
Speaker: Mr Sanjay Seth
How can AI be safely and ethically applied to mental health screening (e.g., suicide ideation, depression) using audio/video data while ensuring privacy and data security?
The audience raised a need for guidance on mental‑health AI tools; responses were preliminary, indicating a need for deeper investigation into algorithms, consent, and regulatory frameworks.
Speaker: Audience member (mental health focus); Dr Rakesh Kalapala; Shri Saurabh Gaur
What should a national sandbox or platform look like for startups to test, validate, and scale AI‑based MedTech solutions across states?
The audience asked about a central mechanism to avoid repeated pilots; Jain mentioned ICMR sandbox, but further research is required on design, governance, and scaling pathways.
Speaker: Audience member; Shri Saurabh Jain
What change‑management strategies, incentives, and digital‑literacy programs are most effective for frontline health workers to adopt AI tools?
She identified integration, change management, and incentives as barriers, highlighting a research need on behavior change, training, and motivation for health workers.
Speaker: Ms Saraswathi Padmanabhan
How can data cooperatives and token‑based incentive models be designed to give citizens ownership and benefit from their health data used in AI?
Shiv proposed data cooperatives and reverse tokens, suggesting a novel governance and economic model that requires exploration of feasibility, legal, and ethical aspects.
Speaker: Mr Shiv Kumar
What is the measurable impact of AI‑enabled diagnostic tools on patient throughput, out‑of‑pocket costs, and clinical outcomes in resource‑constrained settings?
Jain mentioned potential cost reductions and efficiency gains but did not provide data, indicating a need for impact evaluation studies.
Speaker: Shri Saurabh Jain
How can standardized mental‑health data (questionnaires, EMR fields) be incorporated into public health information systems nationwide?
The audience highlighted lack of uniform mental‑health data collection, pointing to a research gap in standardization and integration into existing health IT.
Speaker: Audience member; Dr Rakesh Kalapala
How can AI models be trained on region‑specific, representative datasets to account for diverse disease and demographic profiles across India?
Jain stressed the importance of representative data for AI accuracy, suggesting research into data sampling, regional model adaptation, and bias mitigation.
Speaker: Shri Saurabh Jain
What AI‑driven solutions can optimize supply‑chain management for medicines and consumables in public health facilities?
Jain mentioned AI for supply‑chain optimization but did not detail approaches, indicating a need for pilot studies and evaluation of logistics AI.
Speaker: Shri Saurabh Jain
How can AI interventions be designed to reduce out‑of‑pocket expenditure for patients, especially in rural and underserved areas?
While Jain linked AI to lower out‑of‑pocket costs, concrete mechanisms were not discussed, warranting research on cost‑benefit analyses and patient‑level financial impact.
Speaker: Shri Saurabh Jain

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Leveraging AI4All: Pathways to Inclusion


Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel at the AI Impact Summit examined how artificial intelligence can be made inclusive by focusing on three inter-linked pillars (design, access and investment) identified in the report presented by Nirmal Bhansali [26]. Bhansali warned that simply deploying technology does not guarantee inclusion because access is multi-layered, with last-mile connectivity, language localisation and institutional capacity all posing barriers [2-4][21][22-24]. He highlighted the business case for assistive technology, noting India’s $150 billion “purple economy” and the need to treat it as a market rather than charity [11-13][16-18]. Concrete product examples illustrated these points: the Shishumapin tool lets ASHA workers capture newborn measurements offline [40-43], Meta’s Ray-Ban glasses combine AI with a “Be My Eyes” feature for visually impaired users [44-48], and the YesSense app crowdsources accessibility audits of buildings to inform policy [49-52].


Rutuja Paul reiterated the three pillars and asked panelists to show how they translate into practice [66-68]. Arghya Bhattacharya described Adalat AI’s two-track approach: a WhatsApp chatbot that provides case information in any language [91-96] and a multilingual courtroom transcription system that has boosted productivity two- to three-fold and is now mandated in Kerala courts [104-110][275-277]. He emphasized that operating as a non-profit removed data-privacy concerns and helped shape procurement specifications, accelerating adoption across nine Indian states [264-268][280-283]. A representative from Rwanda’s AI Scaling Hub explained that its mission is to align AI deployments with national development goals through two pillars: scouting proven solutions and building an ecosystem to sustain them, while simultaneously creating Kinyarwanda language datasets to enable localised AI [135-142][152-154].


Archana Joshi gave sectoral examples, noting that humanitarian-aid AI must function offline in crisis zones [166-174], that financial-literacy videos require captioning and sign-language overlays for hearing-impaired audiences [175-180], and that insurance chatbots that ignore regional languages risk alienating 70 % of users [181-199]. She argued that positioning inclusion as a CSR initiative yields limited budgets, whereas demonstrating a clear business ROI and leveraging public data resources such as India’s AI Kosh can make inclusive AI economically viable [311-314][317-324][329-333]. A speaker on procurement described how traditional public-procurement cycles are too slow for AI, proposing agile “innovation-friendly” processes that bring together key players and allow rapid, iterative development [224-233][234-242].


Agustya Mehta stressed that inclusive design is good design, citing the evolution of Meta’s Ray-Ban glasses from a “painkiller” focus to a user-driven product and advocating a “nothing about us without us” mindset that hires diverse teams and builds universal solutions [291-300][341-352]. He also noted that many breakthrough technologies (e.g., scanners, OCR) originated from accessibility needs, reinforcing the argument that accessibility drives broader innovation [353-356]. The discussion concluded that embedding design, ensuring real-world access, and aligning investment incentives are essential for scaling inclusive AI, and that coordinated efforts across governments, NGOs and industry are beginning to shift AI deployments toward durable, equitable outcomes [30-31][202-204].


Keypoints

Major discussion points


Three-pillar framework for inclusive AI (design, access, investment).


Nirmal’s report stresses that AI for inclusion must embed participatory design from the outset, ensure real-world usability (connectivity, low-bandwidth, multilingual interfaces), and align procurement, capital and incentives so governments can act as anchor buyers [26-34].


Language and local context are essential for reach.


The panel repeatedly highlighted that AI must operate in users’ native languages and in low-resource environments – from the need for multilingual voice chatbots for Indian courts [22-24][90-98] to Rwanda’s effort to build Kinyarwanda data sets and “build the plane as we fly it” [122-155].


Non-profit models and agile procurement can break the pilot-stage deadlock.


Adalat AI’s nonprofit structure helped it gain court trust, avoid data-privacy concerns, and influence RFP drafting, leading to deployment in nine Indian states and mandated use in Kerala [250-258][266-277][279-284].


Business case for inclusion: early-stage integration vs. post-hoc ROI.


Archana illustrated three corporate scenarios – a humanitarian aid tool built for offline use, a financial-literacy video-captioning project, and a banking chatbot that initially ignored Hindi, forcing a costly redesign. She argued that treating inclusion as a CSR add-on limits budgets, whereas inclusive design reduces long-term risk and unlocks market share [163-170][176-184][191-199][311-324].


Concrete inclusive-AI use cases showcase the framework in action.


Examples cited include the Shishu-Mapin tool for ASHA workers that works offline [39-43], Meta’s Ray-Ban glasses with “Be My Eyes” integration [44-48], the YesSense accessibility-mapping app [49-52], and the multilingual WhatsApp court-info chatbot [90-98]. Each follows the design-access-investment pillars [51-53].


Overall purpose / goal


The session was the launch of the AI-Inclusion report and a knowledge-exchange forum. Its aim was to translate the report’s three-pillar recommendations into concrete practice by showcasing real-world pilots, discussing policy-procurement levers, and persuading both public institutions and private firms that inclusive AI is both a social imperative and a viable business strategy.


Overall tone


The conversation began with a formal, data-driven presentation (Nirmal’s overview) and moved into a collaborative, solution-focused panel where speakers shared successes and challenges. A brief tension emerged when corporate ROI pressures clashed with inclusion goals (e.g., the insurance chatbot debate) [191-199]. By the end, the tone shifted to optimistic and forward-looking, emphasizing that the ecosystem is beginning to embed inclusion as a standard design principle and that the report’s recommendations are already being operationalised [53-56][385-388].


Speakers

Nirmal Bhansali – Area of expertise: AI for inclusion, author of the summit report [S5]


Moderator – Role: Conference moderator [S6]


Rutuja Pol – Role: Partner at Ikigai Law (panelist) [S2]


Arghya Bhattacharya – Area of expertise: AI solutions for the justice system, founder of Adalat AI [S4]


Archana Joshi – Area of expertise: AI-driven inclusive solutions for enterprises [S9]


Agustya Mehta – Area of expertise: AI-powered hardware (Meta Ray-Ban glasses) [S1]


Speaker 1 – Role: Representative of the Rwanda AI Scaling Hub (discussed the hub’s mission and work) [S1]


Additional speakers:


Rahil – Mentioned briefly in the moderator’s opening; no further role or expertise identified.


Olivier – Referred to in the dialogue (e.g., “Olivier, I wanted to come to you”); appears to be the same individual as Speaker 1, representing the Rwanda AI Scaling Hub.


Full session report – Comprehensive analysis and detailed insights

Opening remarks – Nirmal Bhansali


Nirmal Bhansali opened the AI Impact Summit by outlining its four focus areas – health-care, finance, education and urban planning – before narrowing to the challenge of inclusive artificial intelligence [1]. He described access as a “multi-layered problem” and warned that technology alone does not guarantee inclusion; deploying AI without first removing underlying barriers can even deepen exclusion [2-4]. He highlighted the persistent “last-mile gap” and called for coordinated action on connectivity, skilling and user-friendly interfaces [5-7]. Bhansali then introduced the “purple economy”, noting that India, home to one of the world’s largest populations of persons with disabilities, represents a $150 billion market for assistive-technology products, and argued that this should be treated as a commercial opportunity rather than a charitable one [11-18]. He illustrated the report with three use-case examples – the Shishumapin platform [84-90], Meta’s Ray-Ban glasses [91-98], and the YesSense Access app [99-105] – to show how inclusive design can drive impact. The core of his report is a three-pillar framework: (i) design – embed inclusion through participatory methods from the outset; (ii) access – build offline-first, low-bandwidth solutions in users’ native languages; and (iii) investment – align procurement, capital and incentives so governments act as anchor buyers and reward accessibility [26-34][35-38]. He concluded that the report will be published online shortly and posed the question of whether ecosystems will choose to build durable, equitable systems [53-56].


Moderator – Rutuja Pol


Rutuja Pol opened the panel with a brief photo-call and invited the speakers to translate the three pillars into practice [66-68].


Panelist 1 – Arghya Bhattacharya (Adalat AI)


Arghya Bhattacharya described a two-track approach to improve access to justice. The first track is a WhatsApp chatbot that, in any language, returns case status, next-hearing dates and prior orders from a simple name-and-PIN query [91-96]. He stressed that the bot is deliberately limited to information provision and does not dispense legal advice [97-100]. The second track is a multilingual transcription system that recognises Indian accents, replaces handwritten notes and has lifted courtroom productivity two- to threefold [104-110]. Operating as a non-profit, Adalat AI avoids data-privacy concerns, aligns incentives with the judiciary and has helped shape procurement specifications, leading to deployment in nine states and a mandate in Kerala courts [266-277][280-284].


Panelist 2 – Speaker 1 (Rwanda AI Scaling Hub representative)


The Rwanda AI Scaling Hub representative explained a complementary scaling model. The hub’s mission is to drive AI implementation that aligns with national socioeconomic priorities, using two pillars: (i) scouting proven solutions worldwide and adapting them locally; and (ii) building an ecosystem of innovators, institutions and stakeholders to sustain impact [135-142]. Because Rwanda has a single dominant language, Kinyarwanda, the hub is simultaneously creating text and voice datasets while deploying pilots – a “building the plane as we fly it” approach that recognises the need for iterative data-generation in low-resource settings [152-155].


Panelist 3 – Archana Joshi


Archana Joshi highlighted sector-specific inclusion challenges. In humanitarian-aid scenarios AI must function offline when connectivity collapses, requiring careful architectural decisions about which components run locally versus in the cloud [166-174]. She cited an insurance-company chatbot that initially launched only in English, risking alienation of the 70% Hindi-speaking user base; the client’s insistence on proving ROI before adding Hindi exemplified the tension between short-term financial metrics and inclusive design [181-199]. Joshi argued that positioning inclusion as a CSR initiative limits budgets, whereas demonstrating clear business ROI and leveraging public data resources such as India’s AI Kosh can make inclusive AI economically viable [311-324][329-333].


Panelist 4 – Agustya Mehta (Meta)


Agustya Mehta converged with Bhansali on the mantra “nothing about us without us” [345-350]. He stressed that accessible design is good design, noting that universal solutions such as curb cuts benefit wheelchair users, parents with prams and anyone with luggage. He described the evolution of Meta’s Ray-Ban glasses: the first iteration focused on photo capture, then shifted to music and audio quality after observing real-world usage, illustrating the need for nimble investment decisions that follow user behaviour rather than sunk-cost plans [291-303][304-305].


Panelist 5 – Speaker 1 (procurement focus)


A second speaker (identified as “Speaker 1”) criticised traditional public-procurement cycles – e.g., three years to buy ten phones – as too slow for the rapid evolution of AI. He advocated an agile, innovation-friendly process that brings together key players, runs small-step developments and iterates quickly to keep pace with technology [224-242].


Panelist 6 – Arghya Bhattacharya (procurement & non-profit model)


Returning to procurement, Bhattacharya argued that the non-profit model aligns incentives, reduces data-privacy concerns and streamlines procurement with courts, enabling faster adoption of inclusive tools [266-284].


Panelist 7 – Agustya Mehta (investment alignment)


Mehta reinforced the need for investment alignment: governments should act as anchor buyers, embed accessibility standards in contracts and reward suppliers that meet inclusive criteria [31-34].


Panelist 8 – Archana Joshi (board-room dynamics)


Joshi described corporate board-room dynamics where ROI pressures lead to phased roll-outs that postpone localisation (e.g., language) until after an English pilot proves profitability, a practice she warned could backfire and increase long-term costs [191-199][311-324].


Panelist 9 – Agustya Mehta (accessibility-first design)


Mehta reiterated that accessibility-first design yields broader innovation, citing historical examples where scanners, OCR and text-to-speech originated from accessibility work [353-356].


Panelist 10 – Speaker 1 (agri-AI case)


The Rwanda hub representative presented an agri-AI pilot that uses Kinyarwanda voice assistants to deliver weather and market advice to smallholder farmers, demonstrating how language-localised, offline-first tools can generate immediate socioeconomic impact [135-142][152-155].


Panelist 11 – Arghya Bhattacharya (design & training)


Bhattacharya highlighted the importance of designing datasets such as the Shishumapin platform, where participatory data-collection and multilingual annotation were central to creating a robust model for low-resource languages [84-90].


Panelist 12 – Archana Joshi (AI by Her case)


Joshi concluded with the AI by Her initiative, which trains women in rural India to build and maintain inclusive AI solutions, leveraging public repositories like AI Kosh to lower data costs and create sustainable livelihoods [311-324][329-333].


Closing remarks – Nirmal Bhansali


Bhansali reiterated that AI will undoubtedly expand access and opportunity, but the lingering question is whether ecosystems will choose to build durable, equitable and sustainable systems [55-56]. He expressed optimism that the summit had moved inclusion from a peripheral discussion into board-room agendas [380-386][202-204][385-388].


Consensus & Takeaways

1. Multi-layered solutions are required, addressing connectivity, skills and user-friendly interfaces [2][6][165-174][366-367].


2. Language localisation is foundational for impact, whether in Indian court chatbots, Kinyarwanda-based agricultural advisors or insurance-company bots [22-24][91-93][152-155].


3. Participatory design from the outset is non-negotiable; judges co-designing courtroom tools and Meta’s “nothing about us without us” ethos exemplify this [26][371-374][345-350].


4. Pilot-trap avoidance requires scaling hubs, agile procurement and non-profit vehicles that can bridge the gap between proof-of-concept and market deployment [18-21][135-148].


5. Inclusive AI is a sizable business opportunity, not merely CSR, as shown by the $150 billion “purple economy” and ROI-driven cases [16-18][311-314][349-351].


6. Operationalising the three-pillar framework means applying participatory methods, offline-first technology and aligned procurement incentives [26-34][35-38].


7. Concrete actions: publish the inclusive-AI report online; governments act as anchor buyers; adopt innovation-friendly procurement; encourage NGOs and non-profits to mediate public-sector AI; scale Rwanda’s hub model; integrate inclusive-design training (e.g., Adalat AI Academy) into curricula; and leverage public data repositories such as AI Kosh to lower training-data costs.


These points capture the panel’s collective vision for scaling inclusive AI worldwide.


Session transcript – Complete transcript of the session
Nirmal Bhansali

healthcare, finance, education, urban planning, but I’m going to only focus on a few for this particular evening. First, access is a multi-layered problem. Good technology by itself does not bring in or include people. By adding AI, you’re automatically not going to include more. The last mile gap is still a problem. You need to be able to focus on connectivity, in skilling, in the interfaces that people use. You must take into account the needs and wants of multiple communities. One of the other key observations that was important for this was understanding the power of the purple economy. The market of assistive tech products for persons with disabilities and people with special needs. These are often perceived to be on the margins of our reality, but they are not.

As one of the largest populations of people with disabilities in India, India alone has the potential of 150%. We have $150 billion just in this space. These are people who can purchase. These are people who can access these products. We need to be building for them. It’s not a charitable cause. It’s a simple business proposition. Second, a lot of AI products are stuck in the pilot stage. You often have a great idea, but you’re not able to execute them. These are for a lot of reasons, but fundamentally, they’re usually around the surrounding system. Like I mentioned, last-mile diffusion, funding, or limited support to be able to scale them up. Third, and this is something you have seen across the summit, language is foundational for enabling inclusion.

Whether it is a banking system which is using a voice AI for credit facilities or an educational AI tutor which you made for a rural village in India, all of them require to be understood in that local context where it’s operating. And this is something you would have seen across the summit in various exhibition halls over the past few days. And the last one is institutional capacity. This is a make-or-break variable as well. What you’re going to see is a lot of governments need to build technical expertise in the space of AI. We need departments to understand this further. This is already happening, and once you see this, you will see this reflected in procurement standards, in technical specifications that these departments are making, and this will lead to increasing adoption. As a result of these findings, then what do we have to suggest at the report? There are three interconnected pillars, like I mentioned in the beginning: design, access and investment. Anything around AI and inclusion needs to take this into account. First, looking at design, you need to ensure that you’re embedding inclusion from the start. A lot of AI systems are shaped very early, and at that stage our recommendation is to have participatory design: involve the people as you’re building it out. If you’re making something for ASHA workers and you don’t involve them, that product is bound to fail at the last mile. Access.

This is where you have to make sure AI is usable in real world conditions. I know we’re in the AI Impact Summit, but something which you need to know is at least 33% of the world, that’s 2.6 billion people, still don’t have access to the internet. So when you’re thinking about building AI tools, you need to take into account those real world contexts, low bandwidth environments, not everyone has high speed internet or a full fledged smartphone. The third is investment. We need to align procurement, capital and incentives. Governments here can play a crucial role by acting as anchor buyers for these kind of products. By embedding standards which reward accessibility and open standards, you will be able to shape market incentives.

Creating these incentives we believe is very important to be able to scale inclusion through AI deployments. The last part of our report, and this is something which is my favorite, are these use cases. And our report documents a bunch of them. Over the past few days, you would have seen a lot more than we could even account for. I’m just going to focus on two, three of them, which I really like. One is Shishumapin from Badbani AI. This is a very small tool which allows ASHA workers, frontline community healthcare workers, to take a photo or a video of a newborn baby and get accurate measurements. And this is very important and this is a very simple tool.

It can be used with low internet and can be used offline as well. Second, and you will hear from Agustya soon, I really like the Ray-Ban glasses. I even tried them out at the Meta stall here. The Be My Eyes feature of that is something which a lot of people with visual impairment are using across the world. Something which helps them navigate the world around them. This is something which Meta has built by involving these people in their design process, involving them as they took decisions. And lastly, this is a shout out to the YesSense Access app; you may have seen them at stalls here. This is a very interesting tool where you go around, take photos of buildings and physical spaces and understand whether they can be accessed by people with disabilities or not, creating a database which then allows for greater policymaking in the future.

The crucial thing to note in all of these use cases is that all of these products and tools follow the principles which I talked to you about. They look at design, they have been supported by different government departments and finally they are looking at low resource context environments to be deployed. I am sure at the end of these five days we know that AI is going to expand access and opportunity. The question or doubt really isn’t that. It’s whether ecosystems will now choose to build systems that are required to make this expansion durable, equitable and sustainable. Our report will be out online soon. Thanks. Thanks so much.

Moderator

Thank you so much, Nirmal, for those insightful findings. May I request now everyone at the panel to please come for a photograph? This is the launch of the report as well, so we’ll just take a quick photograph. If you could come ahead with the report up front, Nirmal, please, the project team who worked on it. Yes. Thank you very much. We’re now going to move to a very interesting part of the event, which is hearing from people who actually build these products. To take us through that we have Rutuja Pol, who’s a partner at Ikigai Law, at the panel. Rutuja, over to you.

Rutuja Pol

Thanks Rahil and thank you Nirmal for that wonderful presentation and to the audience for staying back for so long on a Friday evening. So thank you so much. Panelists, incredibly grateful for your time. I know it’s been a very hectic week for all of you. So thank you for taking out the time. And I think Nirmal, he set up a really good context about the three things that we thought were important from our findings. Design, access, and investment. And how do we sort of, you know, use them interchangeably and together to ensure that inclusion is not just a concept but really becomes, you know, really common in the conversations and all of our products. So I’ll start with actually Arghya.

Help us understand how has your product, tell us first about your product and how did you go about designing it, but also how has it enabled access to justice in a country as big as India and all of the issues that it has in the justice system.

Arghya Bhattacharya

Yeah, sure. Firstly, thank you so much for having me here. I’ll probably start by painting a picture of a district court. A lot of you, I’m sure, have been to a district court. by virtue of your profession, but there’s towers and towers of paper everywhere. I’m not a lawyer. The first time I went there, that was the most surprising thing for me. I saw more people writing with typewriters and not computers. And then there were people spending a lot more time looking for the right files than actually going through them and understanding what’s written in them, right? And so when you look at all of these things, it becomes quite clear that justice in these settings is really not a question of law.

It’s become a question of logistics. And that’s where Adalat AI comes in. We build AI and technology to make courts more efficient at a daily and weekly level. And the hope is that when you do this at scale, you can affect the case pendency problem in a rather positive manner. Now, coming to your question of how does AI actually enable access, I think what we are seeing is that there are two tracks. One is the more direct track, and then there is the indirect track as well. When it comes to the direct track, which is how does it enable communities to access justice better, I think there is a huge information darkness problem in the country.

It’s very hard to access judicial information about your cases. If you are in one, what’s going on with it? When is your next date of hearing? And there’s always multiple layers of middlemen that you need to sort of go through to access justice. I think the one use case of AI which we feel is quite safe now is to access information easily. And to that extent, at Adalat AI, we’ve built a WhatsApp chatbot which any citizen can access. They can talk to it in any language that they want. You can just give your name and your PIN code and it’s going to tell you if you have a case. And if you do have a case, what’s going on with it?

When is your next date of hearing? What happened in the previous order? This is not suggestive in any manner. In fact, we discourage any sort of legal advice using AI models at this point. I don’t think that’s the right use. This is more around: given the information that already exists in the systems, behind rather broken, you know, sort of websites, can we kind of sort of bridge the last-mile access. The more indirect sort of opportunity is by making the institutions of justice be more efficient, which is what we do with our core judicial product. We try to make courts more productive, you know. Writing everything down by hand in a courtroom is a big pain point.

Ninety percent of India’s courts don’t have stenographers. So we built a legal transcription tool, which is multilingual. It can understand the legal jargon that lawyers love to use, like res judicata and whatnot. I’m not exactly a lawyer. It understands Indian accents and dialects. And what we are seeing is that courts that do use technology like this are able to improve judicial productivity two to three X. So if someone was recording two witness depositions per day, now they’re able to record four to six. Now, when you do this at scale, you can get a lot more done at a daily, weekly level and then hopefully that helps the case pendency problem. We’re also sort of tackling a lot of other different judicial tasks like going through thousands and thousands of pages.

Can we help them navigate it? Can we digitize the entire workflow so that you don’t have to go through a lot of bundles of paper? What we are steering away from at this point is anything that involves legal intelligence. For example, something as simple as summarization too. We don’t think it’s safe enough right now because the summary for a citizen looks very different from the summary that you need for a judge versus a summary for a lawyer. And so that’s something that I would advise everyone to tread with caution on.

Rutuja Pol

Alright, that’s interesting. Thanks. I’m going to come back to you on the aspect of what has been safest to access information. But, Olivier, I wanted to come to you next. One, very curious to know about Rwanda’s AI scaling hub. And second, Kinyarwanda, if I’m pronouncing it rightly, it’s your go-to language, right? But it’s also a very low resource language. So when you look at using an AI tool based on that language, how has it been? Has it been incredibly difficult? What have been your learnings? And just everything about the hub, please.

Speaker 1

Thank you. I hope everyone can hear me. And thank you, first, for having me here. And I’m happy to share. So, as she said, I come from the Rwanda AI scaling hub. And you wonder, she asked me a question when we were out there. She said, why the scaling hub and not just the AI hub? But the whole idea is we, as Rwanda, took the approach of thinking of working on solutions that can be scaled, so that we do not end up just having pilots and we stay in pilot mode, if I can say. So in that case, the AI Scaling Hub has one main mission that has two key pillars. And the mission is really to drive the AI implementation while ensuring that those implementations are aligned with the national priorities for socioeconomic development.

We focus on mainly AI solutions. And then we have two pillars. One is to encourage or accelerate the adoption by basically looking, scanning the world, and finding those use cases, those solutions that have succeeded elsewhere, and seeing which ones inspire, that should be brought to Rwanda, adapted to the context of the country, and then implemented to be scaled and do the impact in the society. That’s one pillar. The other pillar is now build the ecosystem all around it to make sure that, one, those implementations can be scaled and sustained. Two, they open up the door of possibility to actually be able to, I would say, create much more than this. That basically the ecosystem of innovators and all the other institutions and key stakeholders that really needs to make sure that this movement does not stop.

So that’s because we look at AI as, you know, Rwanda as a country has taken the direction of making sure that the country becomes… African hub for AI research and innovation. So that requires now to really go into this thing, and we are the scaling hub because we are also powered to really move as fast as possible in order to show the impact. So that’s in summary what we do, and we have three key sectors that we focus on, but we are not limited on this. Since we talk about the ecosystem, we really drive this whole thing as much as possible in a very agile way. We are the startup-ish type of institution. If I can say it like this, we find a way to make things happen.

So that’s why. And now, talking about Kinyarwanda when it comes to AI solutions, there is something that in India many people may find or take for granted, but which is not the case everywhere. When the AI revolution started, India had mature DPI, which means that the focus has been more on actually implementing the AI on existing, mature and trusted DPI that are in place. It’s not the scenario in many places. The Rwandan approach is actually building the plane as we fly it. There is a lot of advancement into DPI; I would say, if I look at it from a technical standpoint, everything is at least at 80% but not necessarily at 100%. It’s more of plugging into things as we go: the DPI stack is being completed, but the AI also needs to take off and go into this. So there comes basically that approach; that’s why looking at it holistically is key. And when it comes to Kinyarwanda, definitely Rwanda is a small country compared to India in terms of size and in terms of people, but it’s also a country with a high-density population when you look at the way it is, and the entire population speaks one language, which is Kinyarwanda, as one of the languages that we speak. Which basically means that for a solution to be adopted, it needs to be speaking Kinyarwanda. And AI did not originate in Rwanda, so AI does not speak Kinyarwanda originally. So as we build our plane, there is the time of also now building the models, building the data set for the language, be it the text, be it the voice, in order to get to perfection. So we are doing this as we go, and there is improvement every day.

I think that a couple of years from now, we have, I would say, a full stack data set of Kinyarwanda language that can now operate all this. But even right now, we are doing things. That’s the approach.

Rutuja Pol

That’s very fascinating. I think building a plane as you fly is going to stay with me. Thank you for that. I’m going to come to Archana next. I’m just going to pivot a little to a B2B conversation. You help businesses across the spectrum, be it healthcare, BFSI, education, scale up and transform digitally. What does access and inclusion mean in these rooms? How is it that you really convince your clients that inclusion and even access needs to be really embedded in the first thought of your transformation journey?

Archana Joshi

Thanks for that question, Rutuja, and thanks for having me here on this panel. I’m going to take three examples. Recent ones. The first example, we were working with a humanitarian agency which deals with refugee crisis. So they had approached us to develop an AI solution for the field workers who operate on the field when a refugee crisis is happening to look at real-time where should the aid go. Because when refugee crisis happens, assume a blast happens, something happens, there’s a lot of aid that flows in. But is it reaching the right places? For that, you need to process real-time information. For that, you need to look at what is happening there on the ground, which you could be getting bits and pieces from the representatives who are there.

You need to be able to access information that’s flowing around the media. So there’s a lot of data-crunching intelligence that needs to be baked in. And typically before AI, a lot of this relied on telephone calls; that’s manually done. With AI this is something which helps, but in this kind of situation most of the time your internet doesn’t work, most of the time the connectivities are down, because in this situation the connections go away and your AI still has to work. You cannot say that I don’t know where to give the aid because my cloud connection went down, or my net didn’t work, or the connection was taken down by the government at that point in time. So when you design an AI system like this, you need to be able to figure out what needs to work offline, what should work online, where to bring in, how to architect it, and that becomes crucial. So that’s the first example, where AI needs to be accessible, inclusive by design. I’ll take a second example. A global bank, one of the largest banks in the world, approached us, and their request was, hey, I have a lot of financial literacy videos on my website.

Typically, those are in English and from an accessibility standpoint, there are some captions in English which come in, but those don’t necessarily serve the hearing impaired because for them, their first language is sign language, not English. What can AI do here, right? So the question was, can we use, for a little bit technical terms, the vision LLMs and some of the processes that are out there, technology, to create videos which probably were not accessible initially to a large set of the population and make them accessible. So again here, something existed but you are using AI to add a wrapper on top of it. So you are not accessible by design in this case, but you are trying to use AI to make it accessible.

Whereas in the first case it was accessible by design. And let me take a third case, where I was getting into quite a bit of heated conversation with the CTO of that insurance company, where they did a small POC with AI, where it was a conversational thing. Somebody calls the insurance help desk and the AI kind of responds to what queries the person has called in about. And of course, like all demos and POCs do, it worked beautifully. And the second question was, hey, let’s scale it up. And immediately the person with whom we were working, the CTO, said, you know what, let’s do it in English for phase one. And let’s look at other languages later.

Now, my argument was that if you do it this way, most of the folks who are calling you are the ones who speak Hindi, because you are operating in that region. If you don’t do that, you are alienating 70% of the people, and your customers. And what are you then putting this bot up for? Why are you even attempting it, right? And their answer to that was, you know what, I have to show ROI from AI. And I have to show that quickly. And hence, please go and still do the English one first. Let’s look at Hindi in Phase 2. Right? And you can imagine what kind of heated conversations I had trying to explain to them: that’s not the right approach.

You need to be thinking of Hindi right from the start. Because if you do this, it will work beautifully in demo, because it was all English; it was a sample data set with which you were working. It may still work in your Phase 1 a little bit, but in Phase 2 it’s going to fail miserably, and it will bite you even worse when it comes and fails at that point in time. But it was a hard conversation; we finally convinced them, but to get to that there was a lot of education that’s needed. So what I’m saying is, if you look at these three examples: in certain cases, by virtue of the business that humanitarian agency was in, you had to be accessible by design; in the second case, because it made good business sense, the company said make it accessible, whatever financial solutions we have; whereas in the third case it was a very difficult conversation on accessibility, because somebody wanted to prove a point to their management that AI gives the ROI. Now, if I look at various cases where most of the corporates are today – the businesses which actually are dealing with this economy and responsible for bringing AI out there – most of them are still hovering in bucket three, which is the last one, where it is still not inclusive by design. Still they feel, I have a POC, I can scale it up without being as inclusive with the data, with languages, with other things, and I can do that in later phases.

So this was the story through the whole of last year. This year, thanks to the summit and more and more forums like these, businesses are appreciating the fact that if they don't do inclusive by design, they are leaving money on the table, and it's just plain, smart, good business. So I think the conversations in boardrooms and in corporates are now shifting, where the question is not necessarily "get me the ROI and prove that AI works", but "make AI sustainable and working for me for the long term", which means I have to be inclusive. So that's what I would say.

Rutuja Pol

That's wonderful. I think, I mean, kudos to the summit. It certainly made the conversation on inclusion really common, and it has finally entered the boardroom. So I think that's a good takeaway, moving on from the third bucket. Agastya, I wanted to come to you to help us understand something. What we've seen in the research findings of our report is that in many ways AI is a force multiplier: it is going to enable things at a much faster pace and a much larger scale, right? So tell us a little more about the back end of the design team at Meta. When you look at designing a particular device, what are the instructions you give your team, the A, B, C, D they need to follow, so that the device you're creating is definitely inclusive?

So that it respects the needs of the people it's going to be useful for.

Agustya Mehta

Take curb cuts, the divots on sidewalks that allow wheeled devices to transition from a sidewalk to a street to cross the street. They are ubiquitous in the United States due to regulatory pressure to protect the rights of people with disabilities who use wheelchairs. But anyone who's encountered them while using a pram or stroller, a trolley, a shopping cart, or luggage has benefited. They just make cities better. And so taking the extra step and thinking holistically, rather than just being pressured by regulation, which of course is still an important component, is critical to making the end result good. I don't think anyone's perfect, but I'm doing my best to instill this mindset within Meta.

Rutuja Pol

All right. I mean, yes, I don't think anyone's perfect, and we're all trying our best. It's a good takeaway from the summit and everything that we've learned here. I wanted to pivot to the conversation around investments: how do you make inclusion real and create sustainable pathways for inclusive AI, in the context of India, or even globally for that matter? And I first wanted to come to Olivier again. Could you give us some idea of how you went about making the procurement policy for the national AI strategy, which I understand is very innovation-friendly? What were the considerations that went into it, and how have you seen it pan out on the ground so far?

Speaker 1

That's a good question. So let me paint a picture a little bit, so you see the whole journey of how we got there. Procurement is normally seen mostly in the public sector, and we are in a country where accountability is expected from everyone; when it comes to public funds, it goes to another level still. Which means that classic procurement, if I can say, takes a lot of time, in order to really avoid any conflict of interest in the process. But when it comes to the ICT space, most innovation moves fast. Look at the journey he was talking about, the graphical user interface and the touch screens; look at social media, he's from Meta, you know, Facebook before it became Meta. If you look at the journey, you will see that in this space there is a change, a new thing, every three years. In 2023 we were talking more about DPI and DPGs, and people were even having a hard time differentiating the two. And now, three years later, we are talking more about AI as if it's a new thing, but it's basically the large language models that are new, because the revolution of social media generated a lot of data sets and created something we can interact with.

If you go with the old-style procurement, you can try to buy 10 phones and it takes you three years, which means that by the time you follow the process, things have changed. You may have the right process, but not the right product, because things have changed. That's how the idea of the public procurement for innovation concept came about, which was applied, let's say, in some spaces, to some categories. Let's consider a way where, instead of going through the classical process, we bring together key players, potential institutions that can deliver the XYZ solution we see is needed, and then give them a chance. It's a bit like a competition to see who can best deliver on this, and then they are empowered to do it.

So we go more into an agile mode, with small development steps along the way that can adapt to change, instead of waiting for that long process and ending up with a product that is no longer relevant to the market or to what we need to respond to. Or maybe it's relevant, but way too old. Imagine trying to buy phones now; we are at iPhone, what, 17? How many of these evolutions have we seen? So think about a process that started five years ago. It works for building roads, but not necessarily for technology projects. So that's a bit of the picture of how we ended up with this.

Rutuja Pol

That's interesting. Even for us in India, it's often been that the law and the policies are playing catch-up with the tech. So you really need to find a creative way of finding solutions, so that you can smartly regulate the emerging tech. Arghya, I know you have a lot of thoughts on this one, especially around the procurement rules and how courts adopt your product. Please do come in. We'd love to hear more about how the existing procurement rules have shaped the way you've been able to access the courts and deploy your product there. And what do you think needs to change so that it's faster and more usable for the courts?

Arghya Bhattacharya

Yeah, I think I'll take a more solution-oriented approach. We could talk a lot about the problems of policy playing catch-up with tech, but I'll take a rather solution-oriented approach to how we've worked with the courts at Adalat AI. So when we started Adalat AI, which was about two years back, AI was very new. Courts were still working to adopt generic software technology, so AI was extremely new, right? I think a couple of things worked really well for us. Number one was to build painkillers before vitamins when it comes to solutions. So we actually went for a very big pain in courts, which is that judges are having to write everything down by hand.

And so when we say, hey, there is this new technology, but it solves a really big pain point of yours, this is not a vitamin, this is something you are all struggling with, they are a lot more open to adopting technology. But in terms of the creative solution around procurement, I want to emphasize that nonprofits as a pathway to creating impact are highly underrated, specifically in the space of justice and law. You know, there are all these non-profits that work in education and healthcare to support doctors and teachers, but not enough non-profits doing this to support our court staff and justices in the country. And so Adalat AI is exactly that.

Now, what do I mean by non-profit as a vehicle? Being a non-profit helped us align incentives with the courts better. It automatically took away a lot of the stress around: oh, what are they going to do with my data? Are they going to profile the judges? It took away a lot of stress around: okay, are they going to charge me? How am I going to evaluate the new technology? So this helped us get into courts initially. And, you know, within two years we are now in nine Indian states. We are in one out of every five courts in the country. And as of a historic mandate by Kerala, it actually became mandatory to use Adalat AI in every courtroom in the state to record witness depositions.

It's absolutely not allowed to do this by hand anymore. And I do think that this impact vehicle really helped us do that. On the other side, at the end of the day, courts and all institutions eventually need RFPs. They need to sanction budgets, and they need to make sure they pick the right player. Being a non-profit, one of the ways we are able to influence this process is that, now that they've been able to work with us, they have a lot more experience of what it means to scale these products. Their tech teams have a lot more experience of working with us and of knowing what they actually need out of these products.

And so they have a lot more ideas about how to draft the RFPs. And I think that's the other big benefit of coming from being a non-profit: all these non-profits in the ecosystem are able to help these institutions design better RFPs when they actually do go and procure solutions.

Rutuja Pol

Right. That's interesting. I love the Kerala example. I wish to see that happening across all states in the country soon. But I wanted to now move to Agastya. I know that the Meta Ray-Ban glasses represent a significant investment for Meta in terms of the AI-powered hardware you've created, right? From the inside, help us understand how investment priorities shape the design journey for that product.

Agustya Mehta

That's a good question. And I think in reality, sometimes the plan or the intent doesn't necessarily match where things land. For example, the Ray-Ban Stories, which were the first iteration of smart glasses we shipped, were great. They had some really cool features. When we built them, we initially thought the use cases would be around taking pictures, and that audio would just be used for making phone calls. Meanwhile, myself and a couple of other engineers were doing hackathon projects combining multimodal AI to help blind and low-vision people; this was before the AI hype had caught the zeitgeist of the industry. And then, for the next iteration, the big focus came from finding that people were using the glasses for music much more than we expected.

And so we thought the biggest use case, the biggest investment, would be making the speakers better for Ray-Ban Meta version 1. And we did that, and the music and audio quality was much better. But you'll notice something missing from the product plan I mentioned for both of those products: AI, which is now not only front and center but literally how we market these glasses. They're AI glasses. I say this not to drive cynicism, but nobody has a crystal ball. And so I think the key thing is learning to be nimble, understanding the direction things are going, and being able to jump on trends, versus being too fixated on what the original plan was and maybe falling into a sunk cost fallacy.

I love the painkiller versus vitamin analogy. And maybe adding to that: the really important thing is to avoid the temptation of eating the candy before either of those two. That's my take on it.

Rutuja Pol

That's interesting. Thanks so much. Archana, I wanted to come to you. Same question, and I think you touched upon it in your earlier remarks that the executive wanted to show ROI. So the real question is: in these routine discussions in boardrooms with your enterprise clients, did you start by positioning inclusion as a CSR initiative or just a good-to-have thing in your strategy, or has that pitch significantly changed? I mean, of course, barring the summit and the past two months of change in thinking, how did the pitch start for you, and what was the reaction from the leaders really like?

Archana Joshi

This is my personal view, based on what I've seen and my experience. If you position inclusion as a CSR initiative, you are also going to get budgets which match CSR initiatives, which don't necessarily translate to good products or make good economic sense. So that never works; don't do it. That's first. The second is that when you are positioning these kinds of conversations, remember that in the corporate world, or any business for that matter, it's always a trade-off: how much you are willing to spend versus the returns you are getting. Now, if you want it to be more and more inclusive, especially in an AI context, you can do that, if you have more and more diverse data sets feeding into it.

Do those exist today, at a cost which is palatable to all enterprises? The answer is no. So the first thing enterprises say is: great, I want to be inclusive; nobody wants to say no. But they may not have those data sets, or the cost of getting those data sets, or of cleaning them to make the system more inclusive, is going to be much higher. Typically in AI we say that for every $1 spent on AI, you have to spend $3 on data. So if that's the kind of economics you are dealing with, there is definitely going to be a point where the company says inclusion is going to come later, because economically it stops being viable for them.

Now look at the inflection point AI is at today: it's hyped up, and it's yet to show tangible outcomes across all sectors. Yes, it has shown great promise and results in some, but has it universally shown those? No, we are yet to see that. So when you are dealing with clients in areas where they are yet to see those outcomes, you will see inclusion taking a back seat, not because of intent, but because of cost in certain cases. Everybody realizes that inclusion is plain good business, but those trade-offs are what they look at. Increasingly, though, data is being made more accessible and governments are taking the initiative. In fact, in India we have AI Kosh, which the Government of India has put in place, where you get diverse data sets of India.

And you can use those data sets to make your AI systems more inclusive, more tuned to local customs and traditions, and you will see the cost of implementation come down. So economies of scale kick in. The moment that happens, you will automatically see corporates and companies adopting this, because while there was always intent, now that intent is also becoming financially viable for them. So I would say it's a combination of these different facets which play together when certain decisions get made.

Rutuja Pol

That's helpful to know: CSR is not the go-to route, and a bunch of things determine the decision-making. In the interest of time, I'm going to move to the last segment of our panel discussion, and my favourite, which is design. So I'm going to first come to Agastya again. Tell us how AI devices can really drive accessibility-first innovation. I remember reading about this at the Meta Store as well, earlier in the week. Just help us understand the company's thinking behind it and how you have gone about executing it across different devices, including the glasses.

Agustya Mehta

Sure, thank you. Accessible design is good design. Universal design is good design. Opening with that mindset, that if you build things in an inclusive way you make the product better for everyone, people with and without disabilities, I think that's the critical factor. The second thing, tied into that, is the notion of "nothing about us without us". On this panel we discussed that a model is only as good as its data set. The same is true for a development team, for an organization. So I think it's critical to hire people from all sorts of different backgrounds and not be stuck in your own bubble, because you're building products for people with all sorts of backgrounds.

It's not just good karma. It's not just charity. It's good business. So those are the two philosophies I'd push on: hammer home that innovation is actually seeded by accessibility. So many innovations started from accessibility efforts. The flatbed scanner, text-to-speech synthesis, OCR: these started as efforts to read books to blind people, not as industry-wide things. And yet here we are. So work with your leadership teams to call those examples out, show concrete examples of how things get better, and ensure that you are building with everyone.

Rutuja Pol

That's incredible. Thanks. In the interest of time, I'm just going to quickly come to all three of our panelists, each with your own case study. Olivier, give us one case study from your country where the design aspect, where inclusion, has been visible from the very start, and where that has helped in many ways. Just one example. And the same for you, Archana, perhaps from the jury you sat on for AI by Her; that would be helpful. So, Olivier first, then Arghya, then Archana.

Speaker 1

All right. A quick one. We don't have so many AI-powered solutions out there yet, but as an example, we are working on an AI-powered advisory solution for agriculture. And right from the beginning, we need to think about the end user before we even think about the technology, because what AI does for us is actually make the tech easy; you know, a chatbot, even a code bot, can make the code. But the end user, in this case, is a smallholder farmer who does not use software, who doesn't use a smartphone but a feature phone, who may be in a place where the connectivity is shaky,

and who only speaks Kinyarwanda. So coming from that angle, inclusivity is right there at the design stage, so that if we can deliver for this user, the technology can work. That's one example I can give, and a couple of months from now I should have more success stories to tell, because we are now beginning to scale those solutions.

Rutuja Pol

That's good. I look forward to a couple more months and then some more case studies from your country. Arghya, do you want to go next?

Arghya Bhattacharya

Yeah, I think I'll talk about two things. Number one is design of the product, and the second, and I want to contrast this, is design of the intervention itself, the entire solution, with respect to the problem that you're trying to solve. At Adalat AI, with respect to design of the product, there's one thing that we've done from the start that has helped us: we force our engineers, designers, everyone, to go to court, sit with judges, show them the designs, and get an in-person approval from them before any piece of code is written, before they come back and touch their laptops, right? And that's one thing that has helped us tremendously in making sure that the design is extremely inclusive.

The second is that when it comes to design of the intervention itself, it's not enough to build technology. You know, we build transcription solutions, but if the judge doesn't understand that they need to turn on the mic at the podium when they're dictating, then the mic just becomes a very expensive paperweight, right? It's of no use. And so we do extensive trainings. In fact, we have something called the Adalat AI Academy. As part of that, we go to courts and teach them how to use the technology, and we had a very interesting insight. We were trying to teach them AI, but what we learned was that a lot of judges don't know how to update their Chrome browser.

And so that helped us understand what exactly is needed to drive that intervention forward and make sure the impact is actually realized on the ground. The Adalat AI Academy has now become part of the official curriculum for becoming a judge in a lot of states. And so that's helped a lot in terms of design.

Rutuja Pol

That's great. Moving into the curriculum always helps; you're planting the seeds early on through the training. Archana, the last word.

Archana Joshi

I'll be real quick. As part of the jury for AI by Her, I came across several startups which were, of course, led by women and conceptualized and supported with AI. One of the startups which stuck with me is a startup in fashion tech. The interesting piece was that the startup helps designers envision how the finished product would look. And it doesn't just show it; it reduces the time it would take to develop certain samples and then discard them, so it's sustainable fashion and sustainable designing, and it also shows the designs in different shapes and sizes. That makes it even better and more inclusive.

Some of these kinds of things are what I found in the solutions in AI by Her, which makes you think that, yes, these are truly sustainable and inclusive by design.

Rutuja Pol

That's wonderful. All right, do we have time for questions? No? All right, cool. Sorry about that, audience. You can probably catch all of the panelists once we're done with the last segment. Thank you so much for a very insightful panel. I think for everyone who stayed back, at least the last hour has been just as informative, and we were left with something from all of you. So thank you so much. Please do catch the panelists. Thank you everyone for staying here. I know it's been a long week. This is the last session at the AI Impact Summit, so thank you all for being here.

And a big shout-out to Meta, who partnered with us for this project; thank you for your continued support, and we look forward to further engagement. Thank you all. We do have some mementos from the India AI Summit for all the participants. So Rutuja, if you would please give them out. Thank you.

Related Resources: knowledge base sources related to the discussion topics (20)
Factual Notes: claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The AI Impact Summit’s four focus areas are health‑care, finance, education and urban planning.”

The summit’s agenda aligns with the broader AI impact narrative that highlights healthcare, agriculture, education and urban planning as key sectors for AI-driven growth [S88].

Confirmed (high)

“Access to AI is a “multi‑layered problem” and good technology alone does not guarantee inclusion.”

The knowledge base explicitly states that “access is a multi-layered problem” and that “good technology by itself does not bring in or include people” [S1].

Additional Context (medium)

“There is a persistent “last‑mile gap” that requires coordinated action on connectivity, skilling and user‑friendly interfaces.”

Multiple sources highlight ongoing gaps in connectivity and digital inclusion, describing them as a “last-mile” issue and calling for innovative, locally-tailored solutions and continued investment [S91], [S92], [S93], [S94].

Confirmed (high)

“India alone has the potential of 150 % and a $150 billion market for assistive‑technology products.”

The same figures are quoted in the knowledge base: “India alone has the potential of 150%… $150 billion just in this space” [S20].

Confirmed (medium)

“Meta’s Ray‑Ban glasses (later called Ray‑Ban Stories) were cited as an inclusive‑design use‑case.”

Meta’s partnership with Ray-Ban on smart glasses (codenamed Hypernova) is documented, confirming the existence of such a product line [S95] and [S96].

External Sources (104)
S1
Leveraging AI4All_ Pathways to Inclusion — – Arghya Bhattacharya- Agustya Mehta- Speaker 1 – Nirmal Bhansali- Agustya Mehta
S2
Leveraging AI4All_ Pathways to Inclusion — – Archana Joshi- Rutuja Pol – Nirmal Bhansali- Rutuja Pol
S3
Subrata K. Mitra Jivanta Schottli Markus Pauli — An analysis of India’s foreign policy over seven decades will inevitably reveal evidence of both change and continuity i…
S4
Leveraging AI4All_ Pathways to Inclusion — – Arghya Bhattacharya- Agustya Mehta- Speaker 1 – Nirmal Bhansali- Arghya Bhattacharya- Speaker 1 – Speaker 1- Arghya …
S5
Leveraging AI4All_ Pathways to Inclusion — – Nirmal Bhansali- Agustya Mehta – Nirmal Bhansali- Archana Joshi- Speaker 1 – Nirmal Bhansali- Arghya Bhattacharya- S…
S6
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S7
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S8
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S9
Leveraging AI4All_ Pathways to Inclusion — – Nirmal Bhansali- Speaker 1- Archana Joshi – Speaker 1- Archana Joshi
S10
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S11
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S12
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S13
Leaders TalkX: The Connectivity Imperative: Laying the Foundation for Inclusive Information Access — Nur Sulyna Abdullah:We have the mic. Yes, we do. Thank you. Thank you very much, Mr. Moderator. Now, we’ve heard this fi…
S14
Policy Network on Meaningful Access: Meaningful access to include and connect | IGF 2023 — Audience:Thank you, Chair. Highlight on capacity building, that technical skills are needed to understand emerging techn…
S15
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — The speaker addressed practical challenges in implementing AI solutions for farmers in low-income countries. She stresse…
S16
Let’s design the next Global Dialogue on Ai & Metaverses | IGF 2023 Town Hall #25 — Contextualising information according to local needs and languages fosters engagement and response. In India, in-person …
S17
Building Population-Scale Digital Public Infrastructure for AI — Irina Ghose from Anthropic reinforced this perspective, arguing that AI deployment failures rarely stem from technical c…
S18
Media and Education for All: Bridging Female Academic Leaders and Society towards Impactful Results — ### Participatory Design Approaches Several speakers emphasised the importance of involving target users in the design …
S19
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — And share this further, enabling a safer patient care and also less burnout in our staff. I’ve been sharing lots of hosp…
S20
https://dig.watch/event/india-ai-impact-summit-2026/leveraging-ai4all_-pathways-to-inclusion — As one of the largest populations of people with disabilities in India, India alone has the potential of 150%. We have $…
S21
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Audience:Thank you. First of all, I would like to express sincere appreciation to Ambassador Francesca for the, and MIKT…
S22
DC-Gender Disability, Gender, and Digital Self-Determination | IGF 2023 — Furthermore, assistance tools like ‘Be My Eyes’ have proven to be invaluable resources for visually impaired individuals…
S23
Re-envisioning DCAD for the Future — Additionally, participants mentioned that Excel files posed a significant challenge for visually impaired individuals. T…
S24
Executive summary — It is important for parliaments to make their documents available and understandable, but equally important to design an…
S25
WSIS Action Line: C3 Access to information and knowledge: “Investing in Equitable Knowledge Access: Diamond Open Access” — – Anthony Wong- Maria de Brasdefer Varoglu argues that access to scientific knowledge is not a luxury but an essential …
S26
AI as critical infrastructure for continuity in public services — “If they don’t know if they can work with some solutions… they will step back and they will go to the more trusted loc…
S27
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Diana Nyakundi:Yeah, thanks Fadi. So with regards to opportunities, there are a lot of AI pilot projects that are coming…
S28
Revitalising trust with AI: Boosting governance and public services — AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The d…
S29
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — ### Framework for Inclusive Development This panel discussion, moderated by Valeria Betancourt, examined pathways for d…
S30
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — Digital literacy and competence development are deemed indispensable in digital education. The analysis highlights the n…
S31
Knowledge Café: WSIS+20 Consultation: Strenghtening Multistakeholderism — Access to information is essential and it has to take linguistic diversity into account, location, context, etc. Multi-…
S32
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — The importance of language and context in technology creation was also emphasized. The analysis pointed out that discuss…
S33
WS #323 New Data Governance Models for African Nlp Ecosystems — Economic | Legal and regulatory | Development Government Role and Policy Frameworks He mentions that procurement provi…
S34
DC-OER The Transformative Role of OER in Digital Inclusion | IGF 2023 — In conclusion, sustainable funding is crucial for the success of OER initiatives, and partnerships and donations from fo…
S35
How to believe in the future? — The analysis also recognises the concerns raised by Robert Beamish, who expresses dissatisfaction with executives avoidi…
S36
Frontiers of inclusive innovation: Formulating technology and innovation policies that leave no one behind — The UN Economic and Social Commission for Asia and the Pacific (ESCAP)publisheda new report that explores the opportunit…
S37
The impact of regulatory frameworks on the global digital communications industry — Ms Ellie Templeton is a Cyber Security Research Assistant at the Geneva Centre for Security Policy. She has an Internati…
S38
AN INTRODUCTION TO — The concept of policy’s ‘long tail’ is inspired by viral marketing and refers to the possibility of harnessing a wide va…
S39
AI That Empowers Safety Growth and Social Inclusion in Action — This discussion revealed both significant progress and substantial challenges in implementing responsible AI governance….
S40
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Steven:Thanks, Vicky. And good afternoon, everyone. Good morning to those online. It’s a pleasure to be here. So I’m a d…
S41
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S42
Driving Social Good with AI_ Evaluation and Open Source at Scale — These key comments fundamentally shaped the discussion by establishing inclusive frameworks, providing concrete real-wor…
S43
Science as a Growth Engine: Navigating the Funding and Translation Challenge — Low to moderate disagreement level. The speakers largely align on core issues like the importance of long-term investmen…
S44
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — Hello. Jocelyn, Google DeepMind, where I also work on issues of AI standards, governance, and policy. building on what’s…
S45
Setting the Rules_ Global AI Standards for Growth and Governance — Hello. Jocelyn, Google DeepMind, where I also work on issues of AI standards, governance, and policy. building on what’s…
S46
Critical Infrastructure in the Digital Age: From Deep Sea Cables to Orbital Satellites — Low to moderate disagreement level. Most conflicts are methodological rather than philosophical, focusing on whether to …
S47
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — Inclusion is identified as a vital aspect of ensuring no one is left behind in digital education. The analysis argues th…
S48
Open Forum #37 Her Data,Her Policies:Towards a Gender Inclusive Data Future — Importance of inclusive data policies and practices Role of Technology Companies Tech companies should prioritize incl…
S49
Leveraging AI4All_ Pathways to Inclusion — Business Case and Economic Incentives for Inclusion Product development must stay nimble, allowing investment decisions…
S50
Building Population-Scale Digital Public Infrastructure for AI — “we are looking for a more policy -oriented and looking at the outcomes and not only the lowest price thing.”[48]. “we h…
S51
WS #323 New Data Governance Models for African Nlp Ecosystems — Economic | Legal and regulatory | Development Government Role and Policy Frameworks He mentions that procurement provi…
S52
Foreword — AI Procurement in a Box is a practical guide that helps governments rethink the procurement of artificial intelligence (…
S53
AI in justice: Bridging the global access gap or deepening inequalities — At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions,…
S54
Judiciary engagement — AI implementation in judicial systems has wide-ranging effects on various stakeholders including lawyers, litigants, and…
S55
The Future of AI in the Judiciary: Launch of the UNESCO Guidelines for the use of AI Systems in the Judiciary — Dr. Juan David Gutierrez Rodriguez:So, Juan David, the floor is yours. Thank you very much, everyone. It’s a pleasure to…
S56
Safe and responsible AI — – A flexible legal system capable of adapting rapidly to changes due to technological developments, including possible …
S57
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Factors such as restricted access to …
S58
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner highlighted that connectivity challenges extend beyond infrastructure availability – many regions have technical …
S59
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S60
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Joanna Bryson: Hi, yeah, sure. Thanks very much and sorry not to be in Oslo. I wanted to come specifically to your quest…
S61
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S62
How AI Is Transforming Diplomacy and Conflict Management — She notes that many organizations are stuck in pilot projects without scaling and that leaders often lack hands‑on exper…
S63
Collaborative AI Network – Strengthening Skills Research and Innovation — Janet Zhou highlighted the persistent challenge of “pilotitis”—technologies remaining stuck in pilot phases rather than …
S64
Operationalizing data free flow with trust | IGF 2023 WS #197 — The analysis also emphasizes the significance of open and engaged discussions involving a wide range of stakeholders. It…
S65
WS #271 Data Agency Scaling Next Gen Digital Economy Infrastructure — Wendy Seltzer: Thank you, and I’ll try to keep it short so that we can get to those questions, even though it’s a deep a…
S66
Dynamic Coalition Collaborative Session — Rights of persons with disabilities | Development | Human rights principles Security by design must be embedded from th…
S67
Leveraging AI4All_ Pathways to Inclusion — Three interconnected pillars needed: design, access, and investment
S68
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — According to Moroccan Strategy Digital 2030, we consider AI as long -term strategic choice, reshaping competitiveness, s…
S69
Driving Indias AI Future Growth Innovation and Impact — Dr. Vivek Mohindra from Dell Technologies presented a comprehensive AI blueprint built upon three foundational pillars d…
S70
Knowledge Café: WSIS+20 Consultation: Strenghtening Multistakeholderism — Access to information is essential and it has to take linguistic diversity into account, location, context, etc. Multi-…
S71
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — Digital literacy and competence development are deemed indispensable in digital education. The analysis highlights the n…
S72
Multilingual Internet: a Key Catalyst for Access & Inclusion | IGF 2023 Town Hall #75 — Nodumo Dhlamini:Nodumo, over to you. Thank you. Yes, thank you very much. Thank you for having me on this panel. Yes, Af…
S73
The Power of Satellites in Emergency Alerting and Protecting Lives — This vivid example illustrates the critical gap between having technology and effective communication. It highlights how…
S74
WS #225 Bridging the Connectivity Gap for Excluded Communities — Community participation and local context are essential for successful connectivity initiatives
S75
https://dig.watch/event/india-ai-impact-summit-2026/leveraging-ai4all_-pathways-to-inclusion — And so when we say that, hey, there is this new technology, but it solves a really big pain point of yours. This is not …
S76
S77
WS #323 New Data Governance Models for African Nlp Ecosystems — He mentions that procurement provides an opportunity for developer communities and notes that people in remote areas can…
S78
Transforming technology frameworks for the planet | IGF 2023 — Kemly Camacho:models and feminist economy proposals. Not only for our own business, but also, as I said before, to creat…
S79
Contents — In many ways, the apparently logical search for value seems to be one of the more paralysing aspects of IoT adoption. Co…
S80
Thinking Big on Digital Inclusion — Promoting diversity in AI tool creation and business practices leads to better outcomes. Involving students from underre…
S81
FOREWORDS — MBDS participants recognize a need for closer communication between vertical programs within the health sector, and with…
S82
AI That Empowers Safety Growth and Social Inclusion in Action — I mean, the high impact use case can have more investment, more focus versus a low risk, right? I think that’s the first…
S83
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Larissa Zutter:Yeah, so I think that’s a super loaded question because, yeah, I think, of course, there’s definitely pos…
S85
Inclusive AI_ Why Linguistic Diversity Matters — Again, ultimately, I’ll go back to the end objectives. What is the purpose for which we are sharing the data? Is it serv…
S86
From data to impact: Digital Product Information Systems and the importance of traceability for global environmental governance — UNECE has implemented practical pilot projects in collaboration with the World Bank to test traceability and transparenc…
S87
Announcement of New Delhi Frontier AI Commitments — The minister initially referenced “two significant commitments” but then outlined four areas of focus, with some repetit…
S88
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S89
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — And growing enterprise adoption. Anthropic announced its partnership with Infosys, Tata, OpenAI. I’m sure you’re all wat…
S90
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Inequality and limited inclusivity in the implementation of accessibility and inclusivity practices are identified as pe…
S91
CSTD – Eighteenth Session — There is a growing gap in the quality of connectivity and ability to use ICTs
S92
Closing Session  — Connectivity gaps persist in underserved regions and markets, requiring continued attention and investment
S93
Building an Enabling Environment for Indigenous, Rural and Remote Connectivity — According to Carlos, existing approaches have reached a plateau and no longer suffice to bridge the digital divide for t…
S94
Day 0 Event #154 Last Mile Internet: Brazil’s G20 Path for Remote Communities — 1. A suggestion to create a Last Mile Coalition within the UN Internet Governance Forum to focus on the specific needs o…
S95
Meta’s metaverse push with AI and digital assistants — Meta CEO Mark Zuckerberg is delving into digital assistants, smart glasses, and AI, accompanied by new AI tools and cele…
S96
Meta’s Hypernova smart glasses promise cutting-edge features and advanced display technology — Meta is preparing to launch an advanced pair of smart glasses under the codename Hypernova, featuring a built-in display a…
S97
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — This discussion, led by Dr. Hakikur Rahman from International Standard University and Dr. Anujit Kumar Dutta from City U…
S98
How Switzerland can shape AI in 2026 — Switzerland is heading into 2026 facing an AI transition marked by uncertainty, and it may not win a raw ‘compute race’ …
S99
Digital democracy and future realities | IGF 2023 WS #476 — Finally, the analysis advises policymakers to be mindful of the diversity of the internet ecosystem. It suggests that po…
S100
Cooperation for a Green Digital Future | IGF 2023 — In conclusion, the analysis advocates for harnessing digital technologies to achieve green objectives and emphasises the…
S101
Rethinking the digital landscape at IGF 2023’s sustainability and environment session — TheMain Session on Sustainability and Environmentat theIGF 2023brought together a panel of experts and thought leaders t…
S102
Opening Ceremony — Henna Virkkunen: Honourable participants, ladies and gentlemen, it’s a great pleasure to be here and welcome you to Euro…
S103
Open Forum #50 Digital Innovation and Transformation in the UN System — Fui Meng Liew: Thank you, Dino. Dino, because we are hearing online from the room a bit choppy, the voice, so please …
S104
AI and Global Challenges: Ethical Development and Responsible Deployment — Anuja Shukla was scheduled to be the first remote speaker, but technical problems, particularly with audio, prevented he…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Nirmal Bhansali
10 arguments, 178 words per minute, 1041 words, 350 seconds
Argument 1
Multi‑layered access needs (connectivity, skilling, interfaces) – Nirmal Bhansali
EXPLANATION
Nirmal stresses that access to AI is not a single issue but consists of several layers, including reliable connectivity, appropriate skill development, and user‑friendly interfaces. Without addressing all these layers, technology alone cannot ensure inclusion.
EVIDENCE
He states that “access is a multi-layered problem” and emphasizes the need to focus on “connectivity, in skilling, in the interfaces that people use” [2][6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The scale of connectivity gaps (2.6 billion people offline) is highlighted in [S13], while the need for technical skills and localized training is emphasized in [S14].
MAJOR DISCUSSION POINT
Access barriers
AGREED WITH
Archana Joshi, Speaker 1
Argument 2
Three pillars: participatory design, low‑bandwidth usability, procurement incentives – Nirmal Bhansali
EXPLANATION
Nirmal proposes a framework of three interconnected pillars—design, access, and investment—to embed inclusion in AI. The design pillar calls for participatory design, the access pillar stresses low‑bandwidth and offline usability, and the investment pillar recommends aligning procurement standards to reward accessibility.
EVIDENCE
He outlines the three pillars, urging “participatory design involve the people as you’re building it out” and noting the need for AI tools to work in low-bandwidth environments and for governments to act as anchor buyers with standards that reward accessibility [26-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Low-bandwidth, mobile-first design is advocated in [S15]; participatory design principles are discussed in [S18]; and the importance of government procurement standards is noted in [S26].
MAJOR DISCUSSION POINT
Inclusive AI framework
Argument 3
Language is foundational; AI must operate in local languages and contexts – Nirmal Bhansali
EXPLANATION
Nirmal argues that language is a core prerequisite for inclusive AI, as systems need to understand and operate in the local linguistic context to be effective. This applies across sectors such as banking and education.
EVIDENCE
He notes that “language is foundational for enabling inclusion” and gives examples of voice AI in banking and educational AI tutors needing local language support [22-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of local language support for AI tools is underscored in [S15] and further contextualised in [S16].
MAJOR DISCUSSION POINT
Localization
AGREED WITH
Arghya Bhattacharya, Speaker 1
Argument 4
Assistive‑tech market (“purple economy”) is a $150 billion business opportunity, not charity – Nirmal Bhansali
EXPLANATION
Nirmal frames the assistive‑technology market for people with disabilities as a sizable economic sector rather than a charitable cause, highlighting its commercial potential for businesses.
EVIDENCE
He describes the market as “the market of assistive tech products for people with disabilities” with “$150 billion just in this space” and emphasizes that these users can purchase and access products [9-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The $150 billion market size for assistive technology is documented in [S20].
MAJOR DISCUSSION POINT
Business case for assistive tech
Argument 5
Embed inclusion from the start through participatory design with target users – Nirmal Bhansali
EXPLANATION
Nirmal stresses that inclusion must be built into AI systems from the earliest design stages by involving the intended users directly, ensuring the product meets real needs and avoids later failure.
EVIDENCE
He recommends “participatory design involve the people as you’re building it out” and gives the example of designing for ASHA workers without their involvement leading to failure [26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Participatory design with end-users is highlighted as a best practice in [S18].
MAJOR DISCUSSION POINT
Participatory design
Argument 6
Shishumapin: low‑bandwidth tool for ASHA workers to measure newborns – Nirmal Bhansali
EXPLANATION
Nirmal presents Shishumapin as a simple AI‑enabled application that lets frontline health workers capture a photo or video of a newborn and receive accurate measurements, even in low‑connectivity settings.
EVIDENCE
He describes the tool as allowing ASHA workers to “take a photo or a video of a newborn baby and get accurate measurements” and notes that it works offline and with low internet [39-43].
MAJOR DISCUSSION POINT
Healthcare use case
Argument 7
Ray‑Ban glasses with “Be My Eyes” feature for visually impaired navigation – Nirmal Bhansali
EXPLANATION
Nirmal highlights the Ray‑Ban glasses, which incorporate a “Be My Eyes” feature that assists visually impaired users in navigating their surroundings, showcasing inclusive hardware design.
EVIDENCE
He mentions trying the glasses at the Meta stall and explains that the “Be My Eyes” feature helps people with visual impairment navigate the world [44-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The “Be My Eyes” assistive tool for visually impaired users is described in [S22].
MAJOR DISCUSSION POINT
Assistive hardware
Argument 8
YesSense app maps building accessibility to inform policy – Nirmal Bhansali
EXPLANATION
Nirmal describes the YesSense app, which enables users to photograph buildings and assess their accessibility for people with disabilities, creating a database that can guide future policymaking.
EVIDENCE
He explains that the app lets users “take photos of buildings and physical spaces and understand whether they can be accessed by people with disabilities,” generating data for policy [49-52].
MAJOR DISCUSSION POINT
Accessibility data collection
Argument 9
AI projects often remain stuck in the pilot stage due to systemic constraints, requiring mechanisms to scale them up
EXPLANATION
Nirmal points out that many AI solutions never move beyond pilots because the surrounding ecosystem—such as last‑mile diffusion, funding, and limited support—hinders execution. He argues that addressing these systemic barriers is essential for broader impact.
EVIDENCE
He notes that many AI products are stuck in the pilot stage and unable to execute for reasons rooted in the surrounding system, including last-mile diffusion, funding, and limited support to scale them up [18-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Challenges of pilots hindering scale are analysed in [S27] and the fragmentation risk of pilot projects is noted in [S17].
MAJOR DISCUSSION POINT
Scaling AI solutions
Argument 10
Building institutional capacity within governments is crucial for AI adoption and procurement standards
EXPLANATION
Nirmal emphasizes that governments need to develop technical expertise in AI to create effective procurement standards and technical specifications. This institutional capacity will drive wider adoption of inclusive AI technologies.
EVIDENCE
He mentions that many governments need to build technical expertise in AI, develop departments that understand it, and embed standards in procurement and technical specifications to increase adoption [26-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for government AI expertise and procurement standards is emphasized in [S26].
MAJOR DISCUSSION POINT
Institutional capacity building
Arghya Bhattacharya
7 arguments, 176 words per minute, 1634 words, 554 seconds
Argument 1
Non‑profit model aligns incentives and eases court adoption – Arghya Bhattacharya
EXPLANATION
Arghya argues that operating as a non‑profit aligns his company’s incentives with those of the courts, reducing concerns about data misuse, costs, and evaluation, thereby facilitating adoption.
EVIDENCE
He explains that being a non-profit “took away a lot of the stress around… data… charging… evaluation” and helped them get into courts, now operating in nine Indian states and mandated in Kerala [266-284].
MAJOR DISCUSSION POINT
Non‑profit procurement advantage
AGREED WITH
Nirmal Bhansali, Speaker 1
Argument 2
Multilingual WhatsApp chatbot provides case‑status information to citizens – Arghya Bhattacharya
EXPLANATION
Arghya describes a WhatsApp‑based chatbot that lets citizens, in any language, retrieve real‑time information about their court cases, such as case status and next hearing dates.
EVIDENCE
He details the chatbot that “any citizen can access… in any language… give your name and PIN code and it will tell you if you have a case, next date of hearing, previous order” while explicitly avoiding legal advice [91-98].
MAJOR DISCUSSION POINT
Direct access to justice
AGREED WITH
Nirmal Bhansali, Speaker 1
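The guardrail Arghya describes — the bot returns case information but refuses legal advice — can be sketched as a simple message handler. All names here (CASES, handle_message, the keyword list) are illustrative assumptions; the actual service's data sources and APIs are not described in this summary.

```python
# Hedged sketch of a case-status chatbot flow with a no-legal-advice guardrail.
# The registry below is a toy in-memory stand-in for the real court database.

ADVICE_KEYWORDS = {"should i", "advice", "how do i win"}

CASES = {
    ("asha kumari", "110001"): {
        "status": "Pending",
        "next_hearing": "2025-09-12",
        "previous_order": "Adjourned on 2025-07-30",
    },
}

def handle_message(name: str, pin_code: str, question: str) -> str:
    # Refuse anything that looks like a request for legal advice,
    # mirroring the panel's point that the bot gives information only.
    if any(k in question.lower() for k in ADVICE_KEYWORDS):
        return "I can share case information, but I cannot give legal advice."

    case = CASES.get((name.strip().lower(), pin_code.strip()))
    if case is None:
        return "No case found for that name and PIN code."
    return (
        f"Status: {case['status']}. "
        f"Next hearing: {case['next_hearing']}. "
        f"Previous order: {case['previous_order']}."
    )
```

In a production setting the keyword filter would be replaced by a classifier, but the design point stands: the information path and the advice path are separated explicitly, so the safe behaviour does not depend on model judgment.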
Argument 3
Multilingual legal transcription tool boosts court productivity 2‑3× – Arghya Bhattacharya
EXPLANATION
Arghya notes that their multilingual transcription tool, which understands Indian accents and dialects, dramatically increases court productivity by enabling courts to record more witness depositions per day.
EVIDENCE
He states the tool “understands Indian accents and dialects” and that courts using it improve productivity “two to three X,” allowing recording of four to six depositions per day [105-110].
MAJOR DISCUSSION POINT
Efficiency gains in judiciary
Argument 4
Justice is a logistics problem; AI can streamline case handling and reduce pendency – Arghya Bhattacharya
EXPLANATION
Arghya frames justice as a logistical challenge rather than a purely legal one, suggesting that AI can address inefficiencies such as paperwork, file searching, and case tracking to reduce pendency.
EVIDENCE
He observes that courts are filled with “towers of paper” and that “justice in these settings is really not a question of law. It’s become a question of logistics” and that AI can make courts more efficient [78-80].
MAJOR DISCUSSION POINT
Justice system logistics
Argument 5
Direct track: chatbot gives citizens real‑time case information; indirect track: AI improves court efficiency – Arghya Bhattacharya
EXPLANATION
Arghya distinguishes two pathways for AI in justice: a direct track where citizens obtain case updates via a chatbot, and an indirect track where AI tools enhance court operations such as transcription and document navigation.
EVIDENCE
He outlines the direct track with the WhatsApp chatbot (see above) and the indirect track describing transcription, digitizing workflows, and navigating thousands of pages [84-98][101-110].
MAJOR DISCUSSION POINT
Dual AI pathways in justice
Argument 6
Non‑profit status reduces procurement friction and builds trust with courts – Arghya Bhattacharya
EXPLANATION
Arghya explains that being a non‑profit removes procurement barriers, as courts are less concerned about data privacy, costs, and evaluation, making it easier to secure contracts and scale.
EVIDENCE
He notes that the non-profit model “took away a lot of the stress around… data… charging… evaluation” and that courts now have more experience drafting RFPs after working with them [266-284].
MAJOR DISCUSSION POINT
Procurement simplification
Argument 7
AI tools should avoid providing legal advice or summarization because they are not yet safe or reliable
EXPLANATION
Arghya stresses that while AI can deliver information, it must not be used to give legal advice or generate case summaries, as inaccuracies could cause harm. He recommends steering clear of legal intelligence until the technology is proven safe.
EVIDENCE
He explicitly states that the chatbot discourages any sort of legal advice using AI models and that they are steering away from legal summarization because it is not safe enough at this point [98-100].
MAJOR DISCUSSION POINT
Safety and ethics of AI in law
Speaker 1
4 arguments, 132 words per minute, 1438 words, 651 seconds
Argument 1
Agile, innovation‑friendly public procurement to avoid slow classic processes – Speaker 1
EXPLANATION
Speaker 1 argues that traditional public procurement is too slow for fast‑moving AI technologies, recommending an agile, small‑step approach that brings together key players to develop solutions quickly.
EVIDENCE
He describes how classic procurement can take three years to buy ten phones, making the process obsolete for tech, and proposes an agile model where “key players… compete to see the best… small step development” [224-243].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Agile procurement and the role of standards in accelerating AI adoption are discussed in [S26].
MAJOR DISCUSSION POINT
Procurement reform
AGREED WITH
Nirmal Bhansali, Arghya Bhattacharya
Argument 2
Scaling hub model focuses on ecosystem building and rapid impact, avoiding pilot trap – Speaker 1
EXPLANATION
Speaker 1 outlines Rwanda’s AI Scaling Hub, which aims to drive AI implementation aligned with national priorities by scouting successful use cases, adapting them, and building an ecosystem to sustain impact, thereby preventing projects from staying in pilot mode.
EVIDENCE
He explains the hub’s mission to “drive AI implementation while ensuring alignment with national priorities” and its two pillars: scouting successful solutions and building an ecosystem for scaling and sustainability [135-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The risk of pilots preventing scale and the need for ecosystem-wide approaches are highlighted in [S27] and [S17].
MAJOR DISCUSSION POINT
Scaling AI solutions
AGREED WITH
Nirmal Bhansali
Argument 3
Public procurement reforms (agile, small‑step development) accelerate deployment – Speaker 1
EXPLANATION
Speaker 1 reiterates that adopting agile procurement methods, such as fast‑track competitions and iterative development, can keep pace with rapid technology changes and speed up AI deployment.
EVIDENCE
He highlights the need to avoid the three-year procurement cycle, suggesting a model where “key players… compete… small step development” to stay relevant [232-240].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recommendations for agile, iterative procurement to keep pace with AI are found in [S26].
MAJOR DISCUSSION POINT
Accelerated deployment
Argument 4
Building Kinyarwanda datasets to enable AI for a low‑resource language – Speaker 1
EXPLANATION
Speaker 1 notes that Rwanda is creating text and voice datasets for Kinyarwanda, a low‑resource language, to allow AI models to understand and operate in the local language, with expectations of a full‑stack dataset in a few years.
EVIDENCE
He states that Rwanda is “building the models, building the data set for the language” and expects a full-stack Kinyarwanda dataset in a couple of years, with ongoing improvements [152-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Efforts to create low-resource language datasets and the importance of local language AI are described in [S15] and [S16].
MAJOR DISCUSSION POINT
Low‑resource language development
AGREED WITH
Archana Joshi
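One routine step in building such a low-resource corpus is normalising and de-duplicating collected sentences before they reach model training. The sketch below is a generic illustration of that step, not Rwanda's actual pipeline; the function name and example strings are assumptions.

```python
# Hedged sketch: clean a collected sentence list for a low-resource corpus.
# Unicode NFC normalisation matters for languages with combining characters;
# lowercased keys catch exact duplicates that differ only in casing.

import unicodedata

def clean_corpus(sentences):
    seen, cleaned = set(), []
    for s in sentences:
        s = unicodedata.normalize("NFC", s).strip()
        if not s:
            continue                 # drop empty lines
        key = s.lower()
        if key in seen:
            continue                 # drop exact (case-insensitive) duplicates
        seen.add(key)
        cleaned.append(s)
    return cleaned
```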
Agustya Mehta
4 arguments, 180 words per minute, 642 words, 213 seconds
Argument 1
Investment priorities must stay nimble, avoid sunk‑cost fallacy, and follow user‑driven trends – Agustya Mehta
EXPLANATION
Agustya stresses that investment decisions should remain flexible, avoiding commitment to outdated plans, and should adapt to emerging user needs and trends, especially as AI becomes central to product positioning.
EVIDENCE
He recounts how the Ray-Ban glasses were initially designed for photo/audio use, then shifted to music based on user behavior, and warns against “sunk cost fallacy” and the need to be nimble [292-303].
MAJOR DISCUSSION POINT
Flexible investment strategy
Argument 2
“Nothing about us without us”: hire diverse teams and involve people with disabilities in design – Agustya Mehta
EXPLANATION
Agustya advocates the principle that products should be designed with direct involvement of people with disabilities, and that teams should be diverse to avoid a narrow perspective, framing inclusion as both ethical and business‑wise.
EVIDENCE
He states “nothing about us without us” and argues for hiring people from varied backgrounds, noting that it is “good business” and not merely charity [345-350].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The principle of designing “with, not for” users and involving people with disabilities is emphasized in [S18]; inclusive assistive tools such as “Be My Eyes” illustrate the value of diverse design in [S22].
MAJOR DISCUSSION POINT
Participatory design principle
Argument 3
Universal/accessible design improves products for everyone, driving broader innovation – Agustya Mehta
EXPLANATION
Agustya claims that designing for accessibility benefits all users and fuels broader innovation, citing historical examples where assistive technologies led to mainstream products.
EVIDENCE
He references universal design, mentions curb cuts benefiting strollers and carts, and lists inventions like flatbed scanners, text-to-speech, and OCR that originated from accessibility work [341-355].
MAJOR DISCUSSION POINT
Universal design benefits
AGREED WITH
Nirmal Bhansali, Archana Joshi
Argument 4
Ray‑Ban AI glasses evolved from photo/audio use to music focus, showing need for flexible investment – Agustya Mehta
EXPLANATION
Agustya explains that the Ray‑Ban AI glasses’ product roadmap shifted from image capture to music playback based on user behavior, illustrating how investment priorities must adapt to real‑world usage patterns.
EVIDENCE
He describes the first iteration intended for photos and calls, then the second iteration improving speakers for music after observing user preferences, and notes the lack of AI in early plans [295-303].
MAJOR DISCUSSION POINT
Product evolution driven by user data
Archana Joshi
4 arguments, 148 words per minute, 1765 words, 712 seconds
Argument 1
High cost of diverse data; need government data initiatives to lower barriers – Archana Joshi
EXPLANATION
Archana points out that inclusive AI requires diverse datasets, which are currently expensive, and suggests that government initiatives like AI Kosh can provide affordable, locally relevant data to reduce these costs.
EVIDENCE
She notes that “$1 spent on AI, you have to spend $3 on data” and highlights India’s AI Kosh as a government platform offering diverse Indian datasets to make inclusion financially viable [317-332].
MAJOR DISCUSSION POINT
Data cost barrier
AGREED WITH
Speaker 1
Argument 2
Positioning inclusion as CSR limits budgets; inclusion must be economically viable – Archana Joshi
EXPLANATION
Archana argues that framing inclusion solely as a CSR activity ties it to limited CSR budgets, which often do not support robust product development, and that inclusion should be pursued as a sound business proposition.
EVIDENCE
She says “If you position inclusion as a CSR initiative, you are also going to get budgets which match the CSR initiatives, which don’t necessarily translate to good products” and advises against this approach [311-314].
MAJOR DISCUSSION POINT
CSR vs business case
AGREED WITH
Nirmal Bhansali, Agustya Mehta
Argument 3
Boardrooms now see inclusion as good business; early inclusive design prevents costly later fixes – Archana Joshi
EXPLANATION
Archana observes a shift in corporate boardrooms where inclusion is recognized as a profitable strategy, and she stresses that embedding inclusion early avoids expensive retrofits later.
EVIDENCE
She notes that “businesses are appreciating the fact that if they don’t do inclusive by design, they are leaving money on the table” and that inclusion is now part of long-term ROI discussions [202-204].
MAJOR DISCUSSION POINT
Inclusion as ROI
Argument 4
Offline‑first AI platform for real‑time refugee aid allocation – Archana Joshi
EXPLANATION
Archana describes an AI solution built for humanitarian field workers that processes real‑time data to direct aid during refugee crises, designed to function offline or with intermittent connectivity.
EVIDENCE
She explains the product helps field workers “process real-time information” and must work when internet connectivity is down, emphasizing offline capability [165-174].
MAJOR DISCUSSION POINT
Humanitarian AI design
AGREED WITH
Nirmal Bhansali, Speaker 1
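The offline-first behaviour Archana describes is commonly implemented as a local queue that buffers records and flushes them when connectivity returns. The class below is a minimal sketch of that pattern under assumed names (OfflineQueue, send_fn); it is not the actual product's API.

```python
# Illustrative offline-first pattern for field data capture: records are
# persisted locally first, then uploaded opportunistically.

import json
from collections import deque

class OfflineQueue:
    def __init__(self, send_fn):
        self.pending = deque()   # records waiting for connectivity
        self.send_fn = send_fn   # uploader callable; may raise ConnectionError

    def record(self, item: dict) -> None:
        # Always persist locally first so no data is lost while offline.
        self.pending.append(json.dumps(item))

    def sync(self) -> int:
        """Try to upload queued records in order; stop at the first failure."""
        sent = 0
        while self.pending:
            payload = self.pending[0]
            try:
                self.send_fn(payload)
            except ConnectionError:
                break            # still offline; keep the record queued
            self.pending.popleft()
            sent += 1
        return sent
```

Keeping a failed record at the head of the queue (rather than discarding or reordering it) preserves the capture order, which matters when later aid-allocation decisions depend on earlier records.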
Rutuja Pol
2 arguments, 181 words per minute, 1411 words, 465 seconds
Argument 1
Integrating design, access, and investment pillars ensures inclusion becomes a concrete, everyday conversation across products
EXPLANATION
Rutuja highlights the three pillars identified by Nirmal—design, access, and investment—and calls for their combined use so that inclusion moves from a concept to a routine part of product development and discussion.
EVIDENCE
She references the three pillars that emerged from the findings (design, access, and investment) and asks how to apply them together so that inclusion is not just a concept but becomes a routine part of product conversations [66-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The three-pillar framework (design, access, investment) is outlined in [S1]; participatory design is reinforced in [S18]; and the need for procurement standards is noted in [S26].
MAJOR DISCUSSION POINT
Holistic inclusion framework
Argument 2
The summit has successfully moved inclusion discussions into boardrooms, making them a mainstream business consideration
EXPLANATION
Rutuja observes that the event has shifted the conversation about inclusive AI from niche circles into corporate boardrooms, indicating that inclusion is now being treated as a strategic business issue.
EVIDENCE
She notes that the summit has made the conversation about inclusion truly common and that it has finally entered the boardroom [205-207].
MAJOR DISCUSSION POINT
Inclusion as a business priority
Moderator
1 argument · 103 words per minute · 110 words · 63 seconds
Argument 1
Publicly launching the report with a group photograph helps raise visibility and stakeholder engagement for the findings
EXPLANATION
The moderator emphasizes the importance of a visible launch event, using a collective photograph to signal the release of the report and to draw attention from participants and wider audiences.
EVIDENCE
The moderator thanks Nirmal for his findings and announces the launch of the report, requesting a quick photograph of the panel to publicize the work [58-60].
MAJOR DISCUSSION POINT
Report dissemination and outreach
Agreements
Agreement Points
Access to AI requires multi‑layered solutions including connectivity, skills and appropriate interfaces
Speakers: Nirmal Bhansali, Archana Joshi, Speaker 1
Multi‑layered access needs (connectivity, skilling, interfaces) – Nirmal Bhansali
Offline‑first AI platform for real‑time refugee aid allocation – Archana Joshi
Building Kinyarwanda datasets to enable AI for a low‑resource language – Speaker 1
All three speakers stress that technology alone is insufficient; reliable connectivity, offline capability and user-centric interfaces must be addressed to achieve inclusive AI [2][6][165-174][366-367].
POLICY CONTEXT (KNOWLEDGE BASE)
Recognises the need highlighted in the Global AI Policy Framework that connectivity gaps and lack of local language content impede AI adoption, and aligns with calls for infrastructure, skills and data access in African AI policy discussions [S58][S59][S51][S57].
Language and localisation are foundational for inclusive AI
Speakers: Nirmal Bhansali, Arghya Bhattacharya, Speaker 1
Language is foundational; AI must operate in local languages and contexts – Nirmal Bhansali
Multilingual WhatsApp chatbot provides case‑status information to citizens – Arghya Bhattacharya
Building Kinyarwanda datasets to enable AI for a low‑resource language – Speaker 1
The speakers agree that AI systems must understand and operate in the languages of their users, whether in banking, justice or agriculture, to be effective [22-24][91-93][152-155].
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes UNESCO and IGF findings that local language content is critical for equitable AI use and is a core element of inclusive data policies [S58][S48][S59].
Participatory and inclusive design is essential from the outset
Speakers: Nirmal Bhansali, Arghya Bhattacharya, Agustya Mehta
Embed inclusion from the start through participatory design – Nirmal Bhansali
Design process includes judges before any code is written – Arghya Bhattacharya
“Nothing about us without us”: hire diverse teams and involve people with disabilities – Agustya Mehta
All three emphasize involving end-users or people with disabilities early in the design cycle to ensure relevance and avoid later failure [26][371-374][345-350].
POLICY CONTEXT (KNOWLEDGE BASE)
Supported by IGF 2023 recommendations that inclusive design must be embedded from the design phase, and by data agency discussions emphasizing participatory governance [S47][S65][S66].
Many AI projects remain stuck in pilot mode; scaling mechanisms are needed
Speakers: Nirmal Bhansali, Speaker 1
AI projects often remain stuck in the pilot stage due to systemic constraints – Nirmal Bhansali
Scaling hub model focuses on ecosystem building and rapid impact, avoiding the pilot trap – Speaker 1
Both note that without dedicated scaling pathways and ecosystem support, promising AI pilots fail to achieve broader impact [18-21][135-148].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects the ‘pilotitis’ issue documented in AI infrastructure reports and calls for scaling mechanisms through government involvement and data governance [S61][S62][S63].
Government procurement and institutional capacity must be re‑engineered for AI
Speakers: Nirmal Bhansali, Speaker 1, Arghya Bhattacharya
Institutional capacity within governments is crucial for AI adoption and procurement standards – Nirmal Bhansali
Agile, innovation‑friendly public procurement to avoid slow classic processes – Speaker 1
Non‑profit model aligns incentives and eases court adoption – Arghya Bhattacharya
All three call for new, agile procurement models and capacity building to align incentives, reduce friction and enable faster AI deployment in the public sector [31-34][26-27][224-243][266-284].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with emerging procurement frameworks that shift from lowest-price to outcome-driven models and promote agile public procurement for AI [S50][S52][S51].
Inclusion is a business opportunity, not merely a CSR activity
Speakers: Nirmal Bhansali, Archana Joshi, Agustya Mehta
Assistive‑tech market (“purple economy”) is a $150 billion business opportunity – Nirmal Bhansali
Positioning inclusion as CSR limits budgets; inclusion must be economically viable – Archana Joshi
Universal/accessible design improves products for everyone, driving broader innovation – Agustya Mehta
The panel concurs that inclusive AI delivers commercial value and should be pursued as a core business strategy rather than a peripheral CSR effort [16-18][311-314][349-351].
POLICY CONTEXT (KNOWLEDGE BASE)
Corroborated by AI4All analysis that frames inclusion as a market driver and economic incentive for firms [S49][S57].
Diverse, affordable data is essential for inclusive AI
Speakers: Archana Joshi, Speaker 1
High cost of diverse data; need government data initiatives to lower barriers – Archana Joshi
Building Kinyarwanda datasets to enable AI for a low‑resource language – Speaker 1
Both highlight that the scarcity and cost of representative datasets hinder inclusion, and that public initiatives can mitigate this challenge [317-332][152-155].
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with policy briefs emphasizing open, diverse datasets and addressing data silos as barriers to scaling AI solutions [S48][S59][S61].
Similar Viewpoints
Both see the provision of real‑time information to end‑users (e.g., case status) as a key way to bridge access gaps, requiring reliable connectivity and user‑friendly interfaces [2][6][91-93].
Speakers: Nirmal Bhansali, Arghya Bhattacharya
Multi‑layered access needs (connectivity, skilling, interfaces) – Nirmal Bhansali
Direct track: chatbot gives citizens real‑time case information – Arghya Bhattacharya
Both stress that supporting low‑resource or regional languages is essential for AI uptake among marginalized populations [91-93][152-155].
Speakers: Arghya Bhattacharya, Speaker 1
Multilingual WhatsApp chatbot provides case‑status information – Arghya Bhattacharya
Building Kinyarwanda datasets to enable AI for a low‑resource language – Speaker 1
Both argue that designing for accessibility (offline capability, universal design) yields broader societal benefits and better product performance [165-174][341-355].
Speakers: Archana Joshi, Agustya Mehta
Offline‑first AI platform for real‑time refugee aid allocation – Archana Joshi
Universal/accessible design improves products for everyone – Agustya Mehta
Unexpected Consensus
Non‑profit models and agile public procurement both seen as ways to reduce procurement friction for AI in the public sector
Speakers: Arghya Bhattacharya, Speaker 1
Non‑profit status reduces procurement friction and builds trust with courts – Arghya Bhattacharya
Agile, innovation‑friendly public procurement to avoid slow classic processes – Speaker 1
While Arghya focuses on the legal sector and Speaker 1 on national AI scaling, both converge on the need for alternative procurement approaches (non-profit vehicles or agile, fast-track processes) to accelerate AI adoption, a link not explicitly drawn elsewhere in the discussion [266-284][224-243].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects guidance from AI Procurement in a Box and agile procurement pilots that promote non-profit and flexible procurement pathways to accelerate AI adoption [S50][S52].
Overall Assessment

The panel exhibits strong consensus around four core themes: (1) inclusive AI must address multi‑layered access barriers (connectivity, skills, language); (2) participatory, user‑centered design is non‑negotiable; (3) existing procurement and institutional frameworks are too slow, requiring agile or non‑profit‑based mechanisms; (4) inclusion is framed as a lucrative business opportunity rather than a charitable add‑on. These agreements cut across the topics of Closing all digital divides, Artificial intelligence, The enabling environment for digital development, and The digital economy, indicating a shared understanding that technical, policy and market levers must be aligned to realise inclusive AI at scale.

High – most speakers echo each other’s positions, with only minor variations in emphasis. The convergence suggests that future policy and industry initiatives are likely to prioritize multilingual, offline‑first, participatory solutions supported by reformed procurement and clear business cases for inclusion.

Differences
Different Viewpoints
Preferred procurement model for inclusive AI solutions
Speakers: Nirmal Bhansali, Speaker 1
Governments should act as anchor buyers and embed standards that reward accessibility (Nirmal Bhansali)
Classic public procurement is too slow for AI; an agile, innovation‑friendly procurement approach is needed (Speaker 1)
Nirmal argues that procurement should be anchored by government standards and incentives to ensure accessibility [31-34], while Speaker 1 contends that traditional procurement cycles (e.g., three years to buy ten phones) are obsolete for fast-moving AI and proposes a rapid, small-step, competitive model to keep pace with technology [224-243].
POLICY CONTEXT (KNOWLEDGE BASE)
Debate mirrors discussions in AI procurement literature about outcome-based versus lowest-price contracts and the need for flexible legal frameworks [S50][S52][S56].
Organizational form best suited to scale inclusive AI in the justice sector
Speakers: Arghya Bhattacharya, Archana Joshi
Operating as a non‑profit aligns incentives with courts and eases procurement and trust (Arghya Bhattacharya)
Corporate ROI pressures often lead to phased, English‑first roll‑outs and make inclusion a later concern (Archana Joshi)
Arghya emphasizes that a non-profit structure removes data-privacy, cost, and evaluation concerns, facilitating court adoption and scaling across states [266-284], whereas Archana describes boardroom decisions that prioritize English-only pilots to demonstrate ROI before adding local languages, arguing this approach risks exclusion and later failure [191-199].
POLICY CONTEXT (KNOWLEDGE BASE)
Informed by UNESCO Guidelines for AI in the Judiciary and analyses of justice sector AI adoption that explore institutional models and scaling challenges [S53][S54][S55].
When inclusion should be embedded in product development
Speakers: Nirmal Bhansali, Archana Joshi, Agustya Mehta
Inclusion must be built from the start through participatory design with target users (Nirmal Bhansali)
Inclusion is often postponed to later phases due to ROI concerns; early inclusion is advocated but not always practiced (Archana Joshi)
Investment priorities must stay nimble and adapt to emerging user trends, avoiding the sunk‑cost fallacy (Agustya Mehta)
Nirmal calls for participatory design from the outset, involving end-users to avoid failure [26-27]; Archana recounts instances where clients launch English-only pilots to prove ROI, pushing inclusive features to later phases [191-199]; Agustya stresses that investment decisions should be flexible and follow real-world usage, warning against rigid early plans that become obsolete [292-303].
POLICY CONTEXT (KNOWLEDGE BASE)
Tied to IGF and inclusive design recommendations that advocate for inclusion from the start rather than as an afterthought [S47][S48][S66].
Unexpected Differences
Philosophical stance on building solutions while the plane is in flight versus establishing standards first
Speakers: Speaker 1, Nirmal Bhansali
Rwanda’s scaling hub builds AI solutions and datasets on the go, emphasizing rapid ecosystem building (Speaker 1)
Nirmal calls for embedding accessibility standards and procurement incentives before large‑scale deployment (Nirmal Bhansali)
Speaker 1’s ‘build the plane as we fly it’ approach [135-148] contrasts sharply with Nirmal’s insistence on pre-defined accessibility standards and anchor-buyer mechanisms [31-34], an unexpected clash between a fast-iteration mindset and a standards-first policy stance.
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects ongoing discourse where technical standards often precede process and safety standards, highlighting tension between rapid deployment and normative frameworks [S44][S45][S56].
Overall Assessment

The panel shows strong consensus that inclusive AI is essential, but the speakers diverge on the mechanisms to achieve it—particularly around procurement models, organizational forms (non‑profit vs for‑profit), and the timing of inclusion in product design. These disagreements are moderate in intensity and revolve around policy and business‑process choices rather than the core value of inclusion.

Moderate disagreement; the differing views highlight the need for coordinated policy frameworks that can accommodate both agile innovation and standards‑based procurement, and for business models that balance ROI pressures with early inclusive design.

Partial Agreements
All three agree that local language support is essential for inclusive AI, but Nirmal stresses the principle, Speaker 1 focuses on creating new language datasets, and Arghya leverages existing multilingual interfaces to deliver information [22-24][152-155][91-98].
Speakers: Nirmal Bhansali, Speaker 1, Arghya Bhattacharya
Language is foundational for enabling inclusion (Nirmal Bhansali)
Building Kinyarwanda datasets to enable AI in a low‑resource language (Speaker 1)
Multilingual WhatsApp chatbot provides case‑status information in any language (Arghya Bhattacharya)
Each speaker highlights the need for capacity development to make AI effective, yet they focus on different domains—government technical expertise, judicial training, and field‑worker resilience—showing agreement on the goal but divergence in target audiences and methods [26-27][378-383][165-174].
Speakers: Nirmal Bhansali, Arghya Bhattacharya, Archana Joshi
Building institutional capacity within governments is crucial for AI adoption (Nirmal Bhansali)
Adalat AI Academy trains judges and builds capacity to use AI tools (Arghya Bhattacharya)
Designing AI for field workers that works offline builds capacity in humanitarian contexts (Archana Joshi)
Takeaways
Key takeaways
Inclusive AI requires a three‑pillar framework: participatory design, real‑world access (low‑bandwidth, multilingual, offline‑first), and aligned investment/procurement incentives.
Language and local context are foundational; AI must support regional languages (e.g., Kinyarwanda, Hindi) and operate in low‑resource environments.
Many AI projects stall at the pilot stage due to systemic constraints such as funding, slow public procurement, and lack of ecosystem support.
The “purple economy” (assistive‑tech market) represents a $150 billion business opportunity, making inclusion a commercial imperative rather than charity.
Non‑profit models and agile, innovation‑friendly public procurement can reduce friction and accelerate adoption, especially in the justice sector.
Participatory design (“nothing about us without us”) and hiring diverse teams lead to products that work for everyone and drive broader innovation.
Concrete use cases (Shishumapin, Ray‑Ban glasses, YesSense, multilingual legal transcription, AI‑enabled refugee aid) illustrate how inclusive design, low‑resource readiness, and government support create impact.
Resolutions and action items
Release the inclusive‑AI report online within the next few days (as announced by Nirmal Bhansali).
Governments to act as anchor buyers and embed accessibility standards in public procurement specifications.
Adopt agile, innovation‑friendly procurement processes to avoid the three‑year lag of traditional ICT procurement.
Encourage NGOs and non‑profits to serve as intermediaries for AI deployments in courts and other public services.
Scale the Rwanda AI Scaling Hub model to identify, adapt, and rapidly deploy proven AI solutions in local contexts.
Integrate inclusive design training (e.g., Adalat AI Academy) into official curricula for judges and other public‑sector users.
Leverage government data initiatives such as India’s AI Kosh to lower the cost of diverse, multilingual datasets.
Unresolved issues
How to sustainably fund and maintain low‑bandwidth, offline‑first AI solutions for remote or disaster‑affected areas.
The high cost and scarcity of diverse, multilingual training data; concrete mechanisms to reduce these costs remain unclear.
Ensuring that AI products move beyond pilot projects at scale without compromising data privacy or security.
Balancing phased language roll‑outs (e.g., English first, then Hindi) with the need for immediate inclusivity; no consensus reached.
Long‑term governance structures for continuous ecosystem building (innovation hubs, standards bodies) were discussed but not finalized.
Suggested compromises
Adopt a phased, agile procurement approach that combines rapid small‑step development with periodic reviews, rather than waiting for lengthy traditional RFP cycles.
Use non‑profit entities to align incentives and reduce procurement friction while still delivering commercial‑grade AI solutions.
In product road‑maps, allow flexibility to pivot investment toward emerging user‑driven use cases (e.g., shifting Ray‑Ban glasses focus from photo capture to music/audio) to avoid the sunk‑cost fallacy.
Thought Provoking Comments
The market of assistive tech products for people with disabilities – often seen as a charitable cause – is actually a $150 billion business opportunity. It’s not charity, it’s a simple business proposition.
Reframes disability‑focused technology from a moral imperative to a sizable commercial market, challenging the common perception that such products are only for philanthropy.
Shifted the conversation from purely social good to economic viability, prompting other panelists to discuss how profit motives can drive inclusive design and influencing the later discussion on investment and anchor‑buyer strategies.
Speaker: Nirmal Bhansali
A lot of AI products are stuck in the pilot stage because the surrounding system – last‑mile diffusion, funding, limited support – prevents scaling.
Identifies systemic bottlenecks beyond technology, highlighting why many promising pilots never become real‑world solutions.
Led to deeper exploration of institutional capacity and procurement challenges, setting up Arghya’s and the Rwanda speaker’s remarks about scaling mechanisms and non‑profit pathways.
Speaker: Nirmal Bhansali
At least 33 % of the world – 2.6 billion people – still don’t have internet. AI tools must work in low‑bandwidth or offline environments, not just on high‑speed smartphones.
Brings a hard data point that grounds the inclusion debate in concrete infrastructure realities, emphasizing design for low‑resource contexts.
Prompted Archana to cite the refugee‑crisis use case where connectivity is intermittent, and reinforced the panel’s focus on designing for “real‑world conditions.”
Speaker: Nirmal Bhansali
Governments can act as anchor buyers and embed standards that reward accessibility and open standards, shaping market incentives for inclusive AI.
Proposes a concrete policy lever—government procurement—to align market forces with inclusion, moving the discussion from theory to actionable policy.
Spurred the Rwanda speaker to describe Rwanda’s AI Scaling Hub procurement model and Arghya’s discussion of non‑profit procurement advantages.
Speaker: Nirmal Bhansali
Justice in district courts is really a logistics problem, not a legal problem.
Reframes the core challenge of access to justice, shifting focus from substantive law to operational inefficiencies that AI can address.
Opened the floor to talk about AI tools that streamline case information (WhatsApp chatbot) and transcription, influencing the later discussion on pain‑killer vs. vitamin solutions.
Speaker: Arghya Bhattacharya
We should build painkillers before vitamins – solve the biggest pain points (e.g., handwritten notes) first, then add extra features.
Provides a strategic product‑development framework that prioritizes high‑impact, immediate needs over nice‑to‑have features.
Guided the conversation toward pragmatic design choices, resonating with Archana’s ROI vs. inclusion debate and Agustya’s emphasis on core accessibility.
Speaker: Arghya Bhattacharya
Being a non‑profit helped us align incentives with courts, removed data‑privacy concerns, and made it easier to get into procurement processes.
Highlights an unconventional organizational model that can overcome procurement and trust barriers, challenging the assumption that only for‑profit firms can scale AI in public institutions.
Inspired the Rwanda speaker’s mention of agile, innovation‑friendly procurement and reinforced the theme of creative institutional pathways.
Speaker: Arghya Bhattacharya
We are building the plane as we fly it – developing AI models and datasets for Kinyarwanda while simultaneously deploying solutions.
Captures the iterative, resource‑constrained reality of low‑resource language AI development, challenging the notion that perfect data must exist before deployment.
Provided a vivid metaphor that resonated with the audience, leading to follow‑up questions about low‑resource language challenges and influencing the discussion on agile scaling hubs.
Speaker: Speaker 1 (Rwanda AI Scaling Hub)
Positioning inclusion as a CSR initiative ties it to limited CSR budgets and often results in sub‑par products; it should be framed as core business value.
Challenges a common corporate framing strategy, arguing that CSR positioning undermines the economic case for inclusion.
Shifted the boardroom narrative from “nice‑to‑have” to “must‑have,” reinforcing the earlier point about ROI and prompting the panel to discuss how inclusion drives revenue.
Speaker: Archana Joshi
In AI, you spend $1 on the model but $3 on diverse data; without affordable, high‑quality data, inclusion stalls.
Quantifies the hidden cost of inclusion, exposing a practical barrier that many executives overlook.
Deepened the conversation on investment, leading to mentions of government data initiatives (AI Kosh) and reinforcing the need for policy‑level data support.
Speaker: Archana Joshi
Accessible design is good design; universal design benefits everyone. Nothing about us without us – involve people with lived experience from day one.
Synthesizes a core design philosophy that ties ethical inclusion to universal product excellence, and stresses participatory design.
Echoed Nirmal’s participatory design pillar, reinforced by Arghya’s courtroom immersion practice, and set the tone for the final design‑focused segment.
Speaker: Agustya Mehta
Many innovations (flatbed scanner, OCR, text‑to‑speech) originated from accessibility needs; innovation is seeded by accessibility.
Provides historical evidence that accessibility drives broader technological progress, challenging the view that inclusion is a niche add‑on.
Strengthened the argument that investing in inclusive AI yields spill‑over benefits, influencing the panel’s concluding remarks on sustainable, inclusive growth.
Speaker: Agustya Mehta
Overall Assessment

The discussion was steered by a series of pivotal insights that repeatedly reframed inclusion from a peripheral concern to a central business and policy driver. Nirmal’s framing of the ‘purple economy’, the pilot‑stage bottleneck, and the low‑connectivity reality set the agenda, prompting panelists to surface concrete strategies—Arghya’s logistics‑first view of justice, the painkiller‑vs‑vitamin product lens, and the non‑profit procurement model; the Rwanda speaker’s ‘building the plane as we fly it’ metaphor illustrated agile scaling in low‑resource settings; Archana’s critique of CSR framing and data‑cost analysis exposed hidden economic barriers; and Agustya’s universal‑design mantra tied all these threads together, showing that inclusive design fuels broader innovation. Each of these comments acted as a turning point, opening new sub‑topics (policy, procurement, data, language, boardroom strategy) and deepening the conversation, ultimately shaping the panel’s consensus that inclusive AI is both a moral imperative and a scalable, profitable market opportunity.

Follow-up Questions
What are the safest ways to provide citizens access to judicial information via AI?
Rutuja indicated she would return to Arghya on the safest methods for accessing judicial information, highlighting a need to identify secure, reliable channels that avoid providing legal advice.
Speaker: Rutuja Pol
What changes are needed in procurement rules to make AI adoption in courts faster and more usable?
Rutuja asked Arghya about how existing procurement rules affect court deployments and what should change, pointing to a gap in policy that hampers timely AI integration.
Speaker: Rutuja Pol, Arghya Bhattacharya
How can AI models be safely used for legal summarization for different stakeholders (citizens, judges, lawyers)?
Arghya mentioned steering away from legal summarization due to safety concerns, indicating a need for research on safe, context‑specific summarization.
Speaker: Arghya Bhattacharya
Why do many AI products remain stuck in the pilot stage and fail to scale?
Nirmal highlighted that numerous AI solutions never move beyond pilots, suggesting investigation into systemic barriers such as last‑mile diffusion, funding, and support.
Speaker: Nirmal Bhansali
How can comprehensive Kinyarwanda language datasets (text and voice) be developed for AI applications?
The Rwandan speaker discussed ongoing work to build Kinyarwanda datasets, indicating a research need for full‑stack language resources for low‑resource languages.
Speaker: Speaker 1 (Olivier)
What is the impact of inclusive design on business ROI and market incentives?
Archana described the tension between ROI and inclusion, suggesting a need to study how inclusive AI affects financial performance and incentives.
Speaker: Archana Joshi
How effective are AI‑powered advisory solutions for smallholder farmers with low connectivity and language barriers?
Olivier gave an example of an AI advisory tool for farmers speaking only Kinyarwanda and with shaky connectivity, indicating a research gap on adoption and outcomes.
Speaker: Speaker 1 (Olivier)
What are the best practices for scaling AI‑driven accessibility hardware (e.g., Ray‑Ban glasses) across diverse user groups?
Agustya described the evolving product roadmap and unexpected usage patterns, pointing to a need for research on scaling and user adoption of AI‑enabled devices.
Speaker: Agustya Mehta
What are the best practices for embedding participatory design (‘nothing about us without us’) in AI product development?
Both speakers emphasized participatory design as essential, indicating a need for concrete guidelines and frameworks.
Speaker: Nirmal Bhansali, Agustya Mehta
How does the AI Kosh government data repository affect the cost and feasibility of building inclusive AI systems?
Archana mentioned AI Kosh as a source of diverse datasets that could lower inclusion costs, suggesting research on its actual impact.
Speaker: Archana Joshi
How can non‑profit models be leveraged to align incentives and facilitate AI adoption in the public sector?
Arghya highlighted that being a non‑profit helped with trust and procurement, indicating a need to explore nonprofit structures as vehicles for public‑sector AI.
Speaker: Arghya Bhattacharya
What is the impact of AI training academies (e.g., Adalat AI Academy) on changing judicial workflows and technology adoption?
Arghya described the Academy’s role in training judges and uncovered gaps (e.g., browser updates), suggesting research on training effectiveness.
Speaker: Arghya Bhattacharya
What metrics should be used to evaluate AI’s role as a force multiplier for accessibility?
Rutuja asked Agustya how AI devices drive accessibility‑first innovation, prompting the need for evaluation frameworks.
Speaker: Rutuja Pol, Agustya Mehta
Will ecosystems choose to build systems that ensure AI expansion is durable, equitable, and sustainable after the summit?
Nirmal posed a rhetorical but open question about long‑term ecosystem commitment, indicating a need for longitudinal study of post‑summit adoption.
Speaker: Nirmal Bhansali

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote: ‘I’ to the Power of AI – An 8-Year-Old on Aspiring India Impacting the World


Session at a glanceSummary, keypoints, and speakers overview

Summary

The session opened with Renvi framing the discussion around the need for sovereign, inclusive, and impactful artificial intelligence as a global imperative [1-3]. She defined AI sovereignty as the ability of nations to control their own data, infrastructure, and talent, contrasting the United States’ market-driven model, China’s centralized scaling, Europe’s trust-based regulation, the Middle East’s infrastructure hubs, and India’s emerging focus on data, infrastructure, and talent sovereignty [8-13]. Renvi argued that India is actively pursuing this sovereignty by building a national AI ecosystem that emphasizes responsible, democratized AI for all stakeholders [15-17]. She highlighted the affordability of AI compute under the India AI Mission, noting that access costs less than two cents per minute, which she presented as a key driver of democratization [21-23]. Inclusion is further reinforced by encouraging startups, researchers, and diverse cultural, linguistic, and gender groups to participate in AI development [24-26]. To illustrate impact, Renvi described her personal experience of translating a children’s AI book into 22 Indian languages using the Sarvam AI model, thereby expanding readership and supporting the National Education Policy 2020 [30-41]. This translation effort not only broke language barriers but also generated royalty income and contributed to India’s GDP, demonstrating how young innovators can create economic value through sovereign AI tools [42-44]. She emphasized that sharing knowledge across borders and establishing a multilateral AI council are essential for responsible and inclusive AI governance [45-48]. Renvi also asserted that Generation Alpha, including herself, will be active agents shaping AI’s future rather than passive recipients [49-52]. The talk concluded with a call for collaborative learning and empowerment through AI, positioning India as ready to lead in this cooperative framework [46-47]. 
Following Renvi’s remarks, Speaker 2 thanked her and introduced the next panel focused on the next generation of tech entrepreneurs [58-60]. The panel will feature leaders from Glean, Credo AI, and Origin Bio, with moderation by Anirudh Suri of the India Internet Fund [61-64]. The transition underscored the continuity of the conference’s theme: advancing AI innovation through inclusive, sovereign, and collaborative efforts worldwide [59-63].


Key points


Major discussion points


AI sovereignty and digital independence – Renvi frames AI as moving from “large global AI to empowered, scalable, sovereign AI” and contrasts how different regions (US, China, Europe, Middle East) pursue AI, while India focuses on “data sovereignty, infrastructure sovereignty, and talent sovereignty” to boost its economy [3-5][8-13].


Democratization and inclusive AI – The speaker stresses that AI must be affordable (under 2 cents per minute) and accessible to a wide range of stakeholders, including startups, researchers, diverse cultures, languages, disabilities, and gender, positioning inclusion as a core pillar of India’s AI strategy [15-26].


Impactful, home-grown AI use case – Renvi shares a personal example: using the Indian sovereign model Sarvam AI to translate a children’s AI book into 22 Indian languages, thereby supporting the National Education Policy, expanding market reach, and contributing to India’s GDP [30-43].


Call for multilateral cooperation and shared learning – The talk concludes with a rallying message that nations should “learn from each other and share our learnings” through bodies like the GP AI Council, emphasizing that the next generation (Gen Alpha) will be active agents of change [45-47].


Overall purpose / goal


The discussion aims to advocate for a self-reliant, inclusive, and impact-driven AI ecosystem in India, showcasing how sovereign AI can democratize access, spur economic growth, and empower the younger generation, while urging global collaboration to shape responsible AI governance.


Overall tone


The tone is optimistic and forward-looking, beginning with a strategic, almost policy-level overview of AI sovereignty, shifting to an enthusiastic description of inclusive, affordable AI, moving into a personal, inspirational narrative about tangible impact, and ending with a hopeful, rallying call to action for shared learning and generational participation. The progression moves from analytical to personal to motivational, maintaining an upbeat and confident voice throughout.


Speakers

Renvi


– Areas of expertise: AI sovereignty, inclusive AI, AI democratization, digital independence, AI policy


– Role: Speaker / presenter (delivered keynote remarks)


– Title:


Speaker 2


– Areas of expertise: Event moderation, panel facilitation


– Role: Moderator/Chair of the session introducing the next panel [S1]


– Title:


Additional speakers:


Arvind Jain – Founder and Chief Executive Officer, Glean


Navina Singh – Founder and Chief Executive Officer, Credo AI


Malhar Abide – Chief Technology Officer, Origin Bio


Ranirudh Suri – Managing Partner, India Internet Fund (moderator of the panel)


Full session report
Comprehensive analysis and detailed insights

Renvi opened the session with the motto “sharing is learning” and introduced three inter-linked pillars that should guide the next wave of AI development: (1) independent or sovereign AI, (2) inclusive and democratized AI, and (3) AI that is freely accessible and delivers tangible impact [1-8].


She then mapped the global AI sovereignty landscape, contrasting how different regions pursue AI autonomy. The United States leads in model development and drives innovation through a market-driven approach [9-10]; China follows a centrally controlled, rapid-scaling strategy backed by strong international governance [11-12]; Europe prioritises trust and compliance, having introduced the world’s first comprehensive AI law [13-14]; the Middle East focuses on building critical AI infrastructure hubs [13-14]; and India is “digging into” three forms of sovereignty (data, infrastructure, and talent) to accelerate its economy [9-14].


Renvi highlighted that India’s AI Mission is deliberately affordable and inclusive. She noted that compute power under the mission costs “less than 2 cents per minute,” and that the platform already hosts 7,500 data sets and 273 models [15-23]. This low-cost environment is intended to open the AI ecosystem to startups, researchers, and developers while embracing cultural, linguistic, disability and gender diversity [15-23].


To illustrate the impact of a sovereign AI stack, Renvi described her own experience after completing an India AI Mission certification. Using the full-stack model Sarvam AI, she translated a children’s book she wrote at age six into 22 Indian languages, including Punjabi, Tamil and Hindi. The book, already available on Amazon, has been acknowledged by UN Secretary-General António Guterres and India’s Ministry of Education. The translation expands readership, aligns with the National Education Policy 2020’s call for AI-enabled learning from Grade 3 onward, and contributes to GDP through increased sales and royalties [24-44].


She added a cautionary note that “there is no assurance that any country will get it all correct and do it truly” [45-46].


Renvi called for multilateral cooperation through the forthcoming GP AI Council, urging nations to learn from each other and share learnings in order to build a responsible, inclusive AI framework that respects human connection [47-48].


Turning to her generation, she asserted that Generation Alpha “is born with AI around us” and pledged to be active contributors rather than passive consumers [49-53].


She concluded with the traditional benediction “Sarvajan Hitai, Sarvajan Sukhai.” [54-55]


Speaker 2 then thanked Renvi and introduced the next panel on the “next generation of techies,” featuring leaders from Glean, Credo AI and Origin Bio, with moderation by Ranirudh Suri of the India Internet Fund [56-62].


Session transcript
Complete transcript of the session
Renvi

Sharing is learning with the rest of the world. One, an AI that is independent. From large global AI to empowered, scalable, sovereign AI. Sovereignty. The generation sitting right in front of me grew up taking it for only political and geographical individuality. Fast forward to now, the world has a completely new landscape for its definition. I’m growing up knowing it’s to be more around something I may like to call digital independence. And achieving AI sovereignty has become a global imperative. And then I’m seeing an emergence of very AI models which are not just differentiating from the rest of the world by scale, computer parameters, but by the very approach different nations are building them with. While US leads the global AI models and the technology sector drives innovation, China likes to keep its control centralized with rapid scaling and strong international governance.

While Europe likes to build it more with trust and compliance with the world’s first comprehensive AI law, Middle East positions itself by building AI hubs in the infrastructure layer contributing critical nodes in the AI boom. Well, India is digging into sovereignty. Data sovereignty, infrastructure sovereignty. And most importantly, talent sovereignty. And I’m glad. That is what my country needs to boost its economy. Two, an AI that is inclusive. From the artificial general intelligence race to responsible, democratized AI inclusion. The democratization of AI with inclusion, which I touched upon in my keynote at the EIFGO Global Summit in Geneva last year, has become a core focus area for not just India, but even for the United Nations and the rest of the world.

I’m seeing how India is leading a shift from the artificial general intelligence race to the AI. Two, responsible, democratized AI inclusion. The democratization of the AI course as a key enabler for India’s digital public infrastructure 7500 data sets and 273 models have already been deployed as natural resources to build AI solutions across sectors. Allow me to share my two cents on the affordability of AI compute power under the India AI Mission. Well, to your surprise, it is less than 2 cents per minute. How’s that for democratization? Inclusion of different Indian startups, researchers and developers. Social inclusion of different cultures, languages, disabilities and even gender equality. Overall inclusion of human capital, innovation, social empowerment and the list goes on.

Third, AI is free and AI that is impactful. From safe, innovative, actionable AI to impactful AI. Let’s move to impact and let’s do it a bit differently here. How about I share my own use case of an AI model just released by India. Thanks to my recently completed certification course from the India AI Mission, I observed how every single bit of content was exemplified with an India specific use case impacting lives, businesses and industries. So here’s my back story. When I was six, I written a book on AI. Are you born with AI? This had been made available globally on Amazon and even had been acknowledged by His Excellency, Secretary General of the United Nations, Sir Antonio Guterres and the Ministry of Education, Government of India.

Thanks to the full stack AI sovereign model now in place, Sarvam AI, I’m able to translate my book into 22 different Indian languages, boosting the sales of my book and contributing to India’s GDP. Here’s a sneak peek into this. So you can see here that I’ve translated it into Punjabi, Tamil, Hindi, and then 19 more languages, but obviously I can’t fit on the slide. Impact? One, it helps me live my dream to drive A-L-O-C to all my friends out there breaking language barriers. Two, it helps me support the National Education Policy 2020 of the Government of India by introducing A-L-O-C from Grade 3 onwards. Democratization checked. Three, it helps to have a wider reach as an author, boosting the sales and the royalty I get from the book.

Business impact and GDP contribution checked. So, if a Gen Alpha can contribute to AI literacy countrywide by first writing a book on artificial intelligence, then using AI tools to make illustrations to make it relevant for young minds, and then further use Indian AI tools to translate it into multiple Indian languages, boosting the sales of his book and the royalty, then, to contribute to India’s GDP at age 8, I am confident that each and every one of you can leave your impact with relevant Indian AI models. Amalgamating, be you geopolitically driven or an inclusive AI-impact fabric, and there is no assurance that any country will get it all correct and do it truly. My simple yet important message here is that we can all learn from each other and share our learnings to make this world more empowered with AI.

And that is exactly what India is all set to do once the GP AI Council members convene and define the multilateral cooperation for responsible and inclusive AI, keeping in mind the value of a human connection. Also, me and my generation are part of this AI revolution too. We understand and observe how AI is being shaped up globally. Be it governments, be it tech giants, be it start-ups or even scientists. We are not just at the receiving end. Do not forget we are born with AI around us and we will contribute and be the true agents of change of what you all build today. I stand for I, Generation Alpha. I stand for India. I stand for impact.

And the world will witness all three when they have been raised to the power of AI. Sarvajan Hitai, Sarvajan Sukhai. Thank you.

Speaker 2

Thank you. Thank you. Thank you, Renvi. We have our next panel, which is next generation of techies. May I now invite Mr. Arvind Jain, Founder and Chief Executive Officer, Glean. Ms. Navina Singh, Founder and Chief Executive Officer, Credo AI. Malhar Abide, Chief Technology Officer, Origin Bio. And the panel will be moderated by Mr. Ranirudh Suri, Managing Partner, India Internet Fund. In the meantime…

Related Resources
Knowledge base sources related to the discussion topics (12)
Factual Notes
Claims verified against the Diplo knowledge base (6)
Confirmed (high confidence)

“The United States leads in model development and drives innovation through a market‑driven approach”

The knowledge base notes that the United States leads through global AI models and technology sector innovation, confirming the market-driven leadership claim [S7].

Confirmed (high confidence)

“China follows a centrally‑controlled, rapid‑scaling strategy backed by strong international governance”

S7 describes China’s approach as maintaining centralized control while pursuing rapid scaling and strong international governance, matching the report’s description.

Additional Context (medium confidence)

“India is ‘digging into’ three forms of sovereignty—data, infrastructure and talent—to accelerate its economy”

S22 explains that AI sovereignty is not monolithic and highlights diverse national priorities, aligning with India’s focus on data, infrastructure and talent as distinct sovereignty dimensions.

Confirmed (high confidence)

“Compute power under the India AI Mission costs ‘less than 2 cents per minute’”

S10 reports the compute facility is priced at 65 rupees per hour, which converts to roughly 1.4 cents per minute, confirming the sub‑2‑cent cost claim.

Additional Context (medium confidence)

“India’s AI Mission is deliberately affordable and inclusive, opening the AI ecosystem to startups, researchers, and developers while embracing cultural, linguistic, disability and gender diversity”

S16 and S56 describe India’s AI initiative as supporting multiple large‑language‑model projects, emphasizing open ecosystems, linguistic diversity and broad adoption by startups and researchers, providing additional detail on the mission’s inclusive intent.

Additional Context (low confidence)

“The low‑cost environment of the India AI Mission reflects India’s broader culture of frugal, cost‑effective innovation”

S51 highlights India’s reputation for frugal innovation (e.g., Chandrayaan mission) and suggests this mindset underpins the affordable compute offering mentioned in the report.

External Sources (61)
S1
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S2
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S3
S4
WSIS+20 Open Consultation session with Co-Facilitators — – **Jennifer Chung** – (Role/affiliation not clearly specified)
S5
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S6
The reality of science fiction: Behind the scenes of race and technology — How do you know I’m real? I’m not real. I’m just like you. You don’t exist in this society. If you did, your people woul…
S7
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — This discussion features an 8-year-old prodigy presenting their perspective on global AI development and India’s strateg…
S8
Workshop 2: The Interplay Between Digital Sovereignty and Development — Karen Mulberry: Thank you very much. Hopefully everyone caught our instructions so you understand how we’re going to be …
S9
Host Country Open Stage — This paradoxical statement challenges the typical understanding of digital sovereignty as protectionist or isolationist….
S10
Driving Indias AI Future Growth Innovation and Impact — Minister Jayant Chaudhary outlined the government’s approach to AI democratization, highlighting the India AI mission’s …
S11
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Agarwal explained that while India has strong talent and skills, they faced challenges with compute infrastructure and d…
S12
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — In summary, the speaker underscored the need for a commitment to universal design in technological innovations, a cultur…
S13
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The Bharat GPT consortium exemplifies this approach, bringing together nine academic institutions through a Section 8 no…
S14
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S15
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — An interesting fact is that most of the AI models in the world work in English. But your AI model works in Indian langua…
S16
Open Internet Inclusive AI Unlocking Innovation for All — Canada is looking into. Australia is looking into. The alternative is how do we give all the other AI companies the same…
S17
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S18
Gen AI: Boon or Bane for Creativity? — Krista Kim, a renowned digital artist and the founder of Techism, envisions the convergence of AI, spatial computing or …
S19
Multilateral Intergenerational High-Level Dialogue: Youth Special Track — This argument positions youth as active agents of change rather than passive recipients of innovation. Young people cont…
S20
Responsible AI in India Leadership Ethics & Global Impact part1_2 — And last, enterprises. Like many of yours in this room, that are willing and excited to go first that really look at tra…
S21
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S22
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S23
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — The country has taken measures to ensure the safety and security of the virtual space, as not all actors have good inten…
S24
What policy levers can bridge the AI divide? — – **Affordability**: Internet costs exceeding recommended percentages of income This discussion highlighted that bridgi…
S25
Let’s design the next Global Dialogue on Ai & Metaverses | IGF 2023 Town Hall #25 — Antoine Vergne:Yes, thanks. But before that, I wanted to make a comment on Amy’s question about developers. And I think …
S26
Global AI Policy Framework: International Cooperation and Historical Perspectives — But now I think even the Global North, they are also saying, you know, I think Geneva should be there. We need someone, …
S27
Multi-stakeholder Discussion on issues about Generative AI — Furthermore, Andrade highlights the significance of dialogue and cooperation in the global AI landscape. He particularly…
S28
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — This discussion features an 8-year-old prodigy presenting their perspective on global AI development and India’s strateg…
S29
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S30
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S31
Conversational AI in low income & resource settings | IGF 2023 — One important observation made by the speakers is the need for a people-focused, collaborative, equitable, and sustainab…
S32
The Government’s AI dilemma: how to maximize rewards while minimizing risks? — Highlighting the value of inclusive participation in setting these standards, Aramendia stresses the necessity of engagi…
S33
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The Bharat GPT consortium exemplifies this approach, bringing together nine academic institutions through a Section 8 no…
S34
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — An interesting fact is that most of the AI models in the world work in English. But your AI model works in Indian langua…
S35
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And I want this. The most important thing that I want people to understand is… just because, and I think that the, you…
S36
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S37
Let’s design the next Global Dialogue on Ai & Metaverses | IGF 2023 Town Hall #25 — Antoine Vergne:Yes, thanks. But before that, I wanted to make a comment on Amy’s question about developers. And I think …
S38
Multi-stakeholder Discussion on issues about Generative AI — Furthermore, Andrade highlights the significance of dialogue and cooperation in the global AI landscape. He particularly…
S39
Global AI Policy Framework: International Cooperation and Historical Perspectives — They learned from each other. So definitely globalization is not a new thing. This is happening again and again in diffe…
S40
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — 3. Global collaboration: Li Junhua stressed the importance of cooperation among all stakeholders. Doreen Bogdan-Martin:…
S41
Main Session on Artificial Intelligence | IGF 2023 — Moderator 1 – Maria Paz Canales Lobel:Thank you very much for that. So we’re running out of time. I suppose to summarize…
S42
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Artificial intelligence requires enormous competition. Artificial capacity, which in turn requires unprecedented amounts…
S43
Summit Opening Session — This has to be treated as a shared responsibility, especially amid the massive investment and adoption of artificial int…
S44
Global AI Governance: Reimagining IGF’s Role & Impact — Lara-Castro introduced the concept of AI as a “social technical tool,” emphasising that AI systems arise from society an…
S45
Main Session 2: The governance of artificial intelligence — Ziemann describes the current AI governance landscape as having numerous emerging tools, principles, processes and bodie…
S46
What is it about AI that we need to regulate? — Launch of the Global CyberPeace Indexsession highlighted structural inequalities, with Marlena Wisniak noting that”the d…
S47
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began by comparing two major technology ecosystem models: the U.S. approach, driven by university-industr…
S48
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as anAI racewith a single winner. Officials argue A…
S49
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — India’s deployment of technology as an inclusive, developmental resource was highlighted. Here, the national AI strategy…
S50
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh, Under-Secretary from the Indian Ministry of Electronics and Information Technology, emphasised that oper…
S51
Keynote-Rishi Sunak — Sunak praised India’s distinctive culture of frugal innovation, exemplified by the Chandrayaan moon mission, which achie…
S52
Powering the Technology Revolution / Davos 2025 — Antonio Neri: I wish what you described was available when I was in California, because I can tell you I had an electr…
S53
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: Thank you for convening this and bringing this very, very important subject at FORC, like how do we bala…
S54
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Participant: It’s going to be similar. It’s going to follow the similar playbook that we had for the DPI. And while in D…
S55
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Audience:Hamid Hawja is my name, from Morocco, director of Hebdo magazine. I have two questions. First, I’m just wonderi…
S56
Building the AI-Ready Future From Infrastructure to Skills — The emphasis on open ecosystems, linguistic diversity, human oversight, and broad adoption provides a framework balancin…
S57
Democratizing AI Building Trustworthy Systems for Everyone — Making models available as open source or open-weight to empower ecosystem adaptation while maintaining some proprietary…
S58
WS #208 Democratising Access to AI with Open Source LLMs — Develop shared, open datasets that include diverse cultural and linguistic information
S59
Building Sovereign and Responsible AI Beyond Proof of Concepts — And I think that’s the key thing. But the important thing is that if the trust is lost in terms of the sovereignty, the …
S60
Panel Discussion Data Sovereignty India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications suggest a m…
S61
How Multilingual AI Bridges the Gap to Inclusive Access — And I think that’s something that’s very evident in this conversation. So it’s great to be part of this club. So C -Line…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Renvi
8 arguments, 105 words per minute, 956 words, 542 seconds
Argument 1
Nations adopt distinct AI sovereignty strategies: US drives innovation, China centralizes control, Europe prioritizes trust and compliance, Middle East builds infrastructure hubs, India focuses on data, infrastructure, and talent sovereignty (Renvi)
EXPLANATION
Renvi outlines how different regions pursue AI sovereignty in distinct ways, highlighting US innovation, China’s centralized control, Europe’s emphasis on trust and compliance, the Middle East’s focus on infrastructure hubs, and India’s concentration on data, infrastructure, and talent. This demonstrates a fragmented global landscape of AI strategy.
EVIDENCE
She cites the United States leading global AI models and driving innovation, China maintaining centralized control with rapid scaling and strong governance, Europe building AI with trust and compliance through its comprehensive AI law, the Middle East contributing critical infrastructure nodes, and India emphasizing data, infrastructure, and talent sovereignty [9-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The varied AI sovereignty approaches of different regions are discussed, with emphasis on India’s focus on data, infrastructure and talent, and the need for collaborative yet controlled strategies [S9][S21][S13][S14].
MAJOR DISCUSSION POINT
AI sovereignty strategies
Argument 2
Digital independence emerges as a new definition of sovereignty for the current generation (Renvi)
EXPLANATION
Renvi introduces the concept of digital independence as a redefinition of sovereignty for her generation, moving beyond traditional political and geographic notions toward autonomy in the digital realm. She suggests this shift reflects the evolving global landscape.
EVIDENCE
She reflects on growing up with the idea of digital independence and notes that the world’s definition of sovereignty has changed, contrasting earlier notions of political/geographic individuality with the new digital focus [4-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of digital independence as a re-defined sovereignty for the current generation is highlighted in the 8-year-old keynote, which frames sovereignty beyond geography toward digital autonomy [S7].
MAJOR DISCUSSION POINT
Digital independence as sovereignty
Argument 3
India’s AI Mission offers ultra‑low compute costs (under 2 cents per minute), enabling widespread access for startups, researchers, and developers (Renvi)
EXPLANATION
Renvi states that the India AI Mission provides compute power at a cost of less than two cents per minute, making AI resources highly affordable. This low cost is presented as a key enabler for democratizing AI access across the ecosystem.
EVIDENCE
She explicitly mentions the compute cost being under 2 cents per minute, highlighting the surprising affordability of AI resources in India [22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s AI Mission’s low-cost compute provision is confirmed by reports of 38,000 GPUs and pricing below a dollar per GPU-hour, translating to roughly a few cents per minute for users [S10][S11].
MAJOR DISCUSSION POINT
Affordable AI compute
Argument 4
AI inclusion spans cultures, languages, disabilities, and gender, fostering social empowerment and broader innovation (Renvi)
EXPLANATION
Renvi emphasizes that AI inclusion in India covers a wide range of dimensions, including cultural diversity, multilingualism, accessibility for people with disabilities, and gender equality. This inclusive approach is portrayed as driving social empowerment and expanding innovation.
EVIDENCE
She lists inclusion of Indian startups, researchers, and developers, as well as social inclusion of different cultures, languages, disabilities, and gender equality, noting the broader impact on human capital and innovation [24-26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusion across languages, cultures, disabilities and gender is reflected in Bharat GPT’s multilingual models covering 22 Indian languages, commitments to universal design for disabilities, and gender-gap considerations in digital policy [S13][S15][S12][S17][S16].
MAJOR DISCUSSION POINT
Inclusive AI across society
Argument 5
Using the sovereign Sarvam AI model, a book was translated into 22 Indian languages, boosting sales, supporting the National Education Policy, and contributing to India’s GDP (Renvi)
EXPLANATION
Renvi describes how the Sarvam AI model enabled her to translate her book into 22 Indian languages, which increased sales and royalty income, aligned with the National Education Policy 2020, and contributed to the country’s GDP. This case illustrates the tangible economic and educational impact of sovereign AI tools.
EVIDENCE
She explains that the full-stack Sarvam AI model allowed translation into 22 languages, leading to higher book sales, support for the National Education Policy, and a measurable contribution to India’s GDP, with specific examples of language translation and impact statements [36-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Sarvam AI model’s capability to translate into 22 Indian languages aligns with the Bharat GPT consortium’s multilingual model covering the same language set, supporting educational and economic goals [S13][S15][S14].
MAJOR DISCUSSION POINT
AI‑driven translation impact
Argument 6
Demonstrates how a Gen‑Alpha individual can create, illustrate, and monetize AI‑enhanced content, illustrating tangible economic impact (Renvi)
EXPLANATION
Renvi showcases a scenario where a Generation‑Alpha person writes a book, uses AI tools for illustration, and then employs Indian AI models to translate it, thereby generating sales and royalty income that contribute to GDP. This example underscores the economic potential for young creators using AI.
EVIDENCE
She narrates that a Gen-Alpha individual can contribute to AI literacy, create illustrations, translate the work into multiple Indian languages, boost sales and royalties, and thus add to India’s GDP at a young age [43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 8-year-old prodigy example demonstrates a Gen-Alpha creator using AI for content creation and monetisation, echoing youth-led AI innovation narratives [S7][S19].
MAJOR DISCUSSION POINT
Gen‑Alpha economic impact via AI
Argument 7
Generation Alpha are active agents of change, not merely recipients, and should share learnings to empower the world with AI (Renvi)
EXPLANATION
Renvi argues that Generation Alpha should be seen as proactive contributors to AI development rather than passive consumers. She calls for sharing knowledge and experiences to collectively empower the world with AI technologies.
EVIDENCE
She delivers a message that all generations can learn from each other, emphasizes that her generation observes global AI shaping, and asserts that they are not just recipients but true agents of change [45-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Youth as proactive AI change agents is emphasized in the keynote and multilateral youth dialogue, advocating knowledge sharing across generations [S7][S19].
MAJOR DISCUSSION POINT
Youth as AI change agents
Argument 8
India seeks to lead multilateral cooperation for responsible and inclusive AI through the GP AI Council, emphasizing human connection (Renvi)
EXPLANATION
Renvi notes that India aims to spearhead multilateral cooperation on responsible and inclusive AI by convening the GP AI Council, with a focus on maintaining human connection in AI governance. This reflects India’s ambition to shape global AI policy.
EVIDENCE
She states that India is prepared to act once the GP AI Council members convene to define multilateral cooperation for responsible and inclusive AI, highlighting the importance of human connection [46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s intention to spearhead multilateral responsible AI through the GP AI Council is echoed in discussions of India’s leadership in responsible AI frameworks and global AI discourse [S20][S21][S1].
MAJOR DISCUSSION POINT
Multilateral AI cooperation
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Overall Assessment

The discussion was dominated by Renvi, who presented a series of arguments about AI sovereignty, digital independence, affordable compute, inclusive AI, economic impact of AI‑driven translation, the role of Generation Alpha, and India’s ambition for multilateral AI cooperation. Speaker 2 only offered brief procedural remarks and did not introduce substantive arguments, resulting in virtually no substantive points of agreement between distinct speakers. Consequently, the level of consensus across speakers is minimal, limiting the ability to draw broader collective conclusions about the topics discussed. The implications are that the session primarily reflected Renvi’s perspective, with little demonstrable cross‑speaker validation or debate on the highlighted issues.

Low – only one speaker contributed substantive content, so there is no observable cross‑speaker consensus on the key issues.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains a single substantive contribution from Renvi, who outlines India’s AI strategies, digital independence, affordable compute, inclusion, and multilateral cooperation. Speaker 2 only offers a brief thank‑you and introduces the next panel without presenting any substantive position or counter‑argument. Consequently, there are no observable points of contention or partial agreement between the speakers. The discussion is therefore largely harmonious, with no disagreement that could affect the topics under consideration.

Minimal – the interaction is a transition rather than a debate, implying that the primary focus remains on Renvi’s presentation of India’s AI agenda without challenge.

Takeaways
Key takeaways
AI sovereignty is being pursued differently worldwide: the US focuses on innovation, China on centralized control, Europe on trust and compliance, the Middle East on infrastructure hubs, and India on data, infrastructure, and talent sovereignty, reflecting a shift toward digital independence.
India’s AI Mission provides ultra-low compute costs (under 2 cents per minute), enabling broad, affordable access for startups, researchers, and developers, supporting inclusive and democratized AI.
Inclusion in AI spans languages, cultures, disabilities, and gender, fostering social empowerment and wider innovation.
A practical example: using India’s sovereign Sarvam AI model, Renvi translated a book into 22 Indian languages, boosting sales, supporting the National Education Policy 2020, and contributing to GDP, illustrating the tangible economic impact of AI for a Gen-Alpha individual.
Generation Alpha is positioned as active agents of change, not just recipients, and should share learnings globally.
India aims to lead multilateral cooperation for responsible and inclusive AI through the GP AI Council, emphasizing human connection.
Resolutions and action items
India to convene GP AI Council members to define multilateral cooperation frameworks for responsible and inclusive AI. Encourage sharing of AI learnings and best practices across nations and generations to empower global AI development.
Unresolved issues
How to harmonize differing national AI sovereignty strategies into a cohesive global framework.
Mechanisms for ensuring AI inclusion across all demographic groups at scale.
Specific pathways for operationalizing the GP AI Council’s recommendations into policy and practice.
Balancing rapid AI scaling with trust, compliance, and governance standards across regions.
Suggested compromises
Combine geopolitical AI strategies with an inclusive AI fabric, promoting both national sovereignty goals and shared, responsible AI development. Adopt a collaborative approach where nations share learnings and resources to achieve both innovation and compliance objectives.
Thought Provoking Comments
AI sovereignty has become a global imperative.
Frames AI development as a matter of national independence rather than just technological progress, shifting the conversation from pure innovation to geopolitical strategy.
Sets the stage for the subsequent comparison of how different regions (US, China, Europe, Middle East, India) are pursuing AI, prompting the audience to think about AI through a sovereignty lens rather than a purely commercial one.
Speaker: Renvi
While the US leads in global AI models and its technology sector drives innovation, China keeps its control centralized with rapid scaling and strong international governance; Europe builds AI with trust and compliance; the Middle East contributes critical infrastructure nodes; India is digging into data, infrastructure, and talent sovereignty.
Provides a concise, comparative map of global AI strategies, highlighting distinct national philosophies and exposing the diversity of approaches.
Introduces a new analytical framework that moves the discussion from a single‑country focus to a multi‑regional perspective, encouraging listeners to consider cross‑border collaboration and competition.
Speaker: Renvi
The democratization of AI with inclusion has become a core focus area for not just India, but even for the United Nations and the rest of the world.
Elevates the conversation from technical deployment to ethical and societal dimensions, linking national policy to global governance and human rights concerns.
Broadens the scope of the dialogue, prompting participants to contemplate policy, equity, and international standards alongside technical capabilities.
Speaker: Renvi
Affordability of AI compute power under the India AI Mission is less than 2 cents per minute.
Offers a concrete, striking metric that quantifies democratization, making the abstract idea of affordable AI tangible.
Triggers a shift from high‑level strategy to practical feasibility, encouraging the audience to envision real‑world adoption scenarios and cost‑effective innovation.
Speaker: Renvi
Using the full‑stack AI sovereign model Sarvam AI, I translated my book into 22 Indian languages, boosting sales and contributing to India’s GDP.
Provides a personal, relatable case study that demonstrates how sovereign AI tools can generate economic and social impact at an individual level.
Transforms the discussion from macro‑policy to micro‑implementation, illustrating the direct benefits of AI sovereignty and democratization, and inspiring others to envision similar use‑cases.
Speaker: Renvi
We are not just at the receiving end. We are born with AI around us and we will be the true agents of change of what you all build today.
Reframes the generational narrative, positioning Generation Alpha as co‑creators rather than passive consumers, and challenges the audience to involve younger voices in AI development.
Creates a turning point toward a forward‑looking, inclusive tone, urging stakeholders to consider youth participation and long‑term cultural shifts in AI governance.
Speaker: Renvi
My simple yet important message here is that we can all learn from each other and share our learnings to make this world more empowered with AI.
Summarizes the talk with a collaborative call‑to‑action, emphasizing knowledge exchange across borders and sectors.
Concludes the segment on a unifying note, setting up the transition to the next panel and reinforcing the theme of multilateral cooperation.
Speaker: Renvi
Overall Assessment

Renvi’s remarks introduced a multi‑dimensional view of AI—combining sovereignty, inclusivity, affordability, and personal impact—that reshaped the discussion from a generic technology overview to a nuanced debate about geopolitics, policy, economics, and generational agency. Each key comment acted as a pivot point, expanding the conversation’s scope, grounding abstract ideas in concrete data, and ultimately steering the audience toward a collaborative, globally‑mindful outlook before handing over to the next panel.

Follow-up Questions
How can different nations’ approaches to AI sovereignty (US scaling, China centralization, Europe trust/compliance, Middle East infrastructure, India data/talent sovereignty) be compared and evaluated for effectiveness?
Understanding the varied models can guide global policy and help countries adopt best practices for AI development and governance.
Speaker: Renvi
What is the detailed cost structure and scalability of AI compute power under the India AI Mission (noted as less than 2 cents per minute)?
Assessing affordability is crucial for democratizing AI access and ensuring sustainable growth of AI initiatives.
Speaker: Renvi
What is the measurable economic impact of AI-driven multilingual translation on book sales, author royalties, and contribution to India’s GDP?
Quantifying this impact can validate the broader economic benefits of sovereign AI tools and inform future investment.
Speaker: Renvi
What frameworks and mechanisms will the GP AI Council develop for multilateral cooperation on responsible and inclusive AI?
Defining governance structures is essential to ensure AI is developed ethically and inclusively across borders.
Speaker: Renvi
How can India effectively build talent sovereignty through education, skill development, and retention of AI professionals?
Talent is a critical pillar of AI sovereignty; research is needed on strategies that nurture and retain skilled AI workforce.
Speaker: Renvi
What methods can ensure AI inclusion across diverse cultures, languages, disabilities, and gender, and how can their outcomes be measured?
Inclusive AI promotes equitable access; studying implementation approaches and impact metrics will guide policy and product design.
Speaker: Renvi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion Inclusion Innovation & the Future of AI


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel opened by framing AI’s growing role in daily life and the need to balance excellence with inclusion in policy and practice [4-9][14-16]. Dean argued that AI should first be governed through existing legal frameworks such as liability and product regulations, rather than creating a single new AI law, and that the presumption should be that current law is sufficient unless proven otherwise [24-32][41-44]. He identified “tail events” – low-probability but high-impact risks – as the area where proactive governance is justified, and he has advocated for transparency legislation to address such threats [34-40]. Dean also emphasized that AI compute infrastructure, especially data centers powering frontier models, ought to be treated as critical national infrastructure comparable to ports or railroads [111-114].


Gabriela stressed that AI development requires a whole ecosystem of government-backed investments, incentives, institutions and infrastructure, noting historic U.S. examples such as DARPA and the Internet that were publicly funded [48-53][54-58]. She described AI technologies as natural monopolies that create market distortions, and argued that public-private partnerships are needed to repair the “diffusion machine” that concentration breaks and to ensure broader economic inclusion [133-146].


Ivana (representing Wipro) explained that AI governance must go beyond compliance, embedding privacy, security, and resilience into products and adopting a techno-legal approach that translates law into technical tools [66-74][75-84]. She highlighted the importance of continuous monitoring of AI systems in production, designing trust mechanisms for agentic AI, and involving employees in governance to shift from pure risk-management to “AI for good” [80-87][94-107].


When asked whether inclusion is an ethical imperative or a competitive strategy, Gabriela replied that the two are inseparable, asserting that inclusive policies boost market competitiveness by preventing concentration and fostering a level-playing field [133-146]. Dean identified a blind spot in current discourse: the tendency to dismiss frontier models as unnecessary, despite massive public investment and their potential to create capabilities beyond today’s imagination, especially for the Global South [156-169].


He reiterated that building AI readiness will require new institutions and infrastructure, while managing both everyday harms and future catastrophic risks, and warned that concentration of AI power is a key political-economic challenge [200-214]. The moderator concluded that the discussion highlighted trade-offs, potential solutions, and the urgency of preparing national AI capabilities for competitive advantage and inclusive growth [216-217].


Keypoints

Major discussion points


Defining AI inclusion beyond data representation – inclusion is framed as access to compute, standards, policy frameworks and regulatory clarity, not just equitable datasets [4-13].


Governance strategy: use existing law, intervene for tail-risk events, and treat AI infrastructure as critical – Dean argues that AI should first be governed through current liability and product regulations, with proactive rules only for low-probability, high-impact “tail” events, and that AI data-centers should be classified as critical national infrastructure [24-33][34-42][111-124].


Public-private ecosystem and market concentration – Gabriela stresses that government-funded research (e.g., DARPA, Internet) is essential, that AI markets behave as natural monopolies/oligopolies requiring policy to curb distortions, and that inclusive policies must coexist with competitiveness [48-55][140-151].


Organizational AI governance as a strategic capability – Ivana outlines the shift from compliance to a broader “trust stack” that embeds privacy, security, and ethical design, requires continuous monitoring, and involves employees in the governance loop [65-84][94-107].


Blind spots and future challenges – Panelists identify (i) the over-focus on frontier models while cheaper models can suffice, (ii) the lack of education and skill pipelines to prepare societies for AI, and (iii) the absence of global consensus on “red-line” prohibitions [156-168][171-182][189-198].


Overall purpose / goal of the discussion


The panel was convened to explore how AI can be made inclusive and beneficial while preserving national competitiveness. Participants examined policy trade-offs, governance mechanisms, ecosystem investments, and practical implementation steps needed to build AI readiness at both governmental and organizational levels, and to surface gaps that must be addressed for responsible, equitable AI deployment worldwide.


Overall tone and its evolution


The conversation began with an upbeat, collaborative tone (“such a pleasure to be here…”) and a forward-looking optimism about AI’s benefits. As the dialogue progressed, speakers introduced more cautionary notes, highlighting regulatory gaps, tail-risk threats, market monopolies, and the need for rigorous governance, shifting the tone toward a balanced, problem-solving stance. By the closing remarks, the tone became reflective yet hopeful, acknowledging significant challenges (education gaps, lack of global red-lines) while reaffirming confidence that coordinated policy and ecosystem action can steer AI toward inclusive, competitive outcomes.


Speakers

Ivana Bartoletti – AI governance, privacy, security, and responsible AI implementation; Virtual panelist (panelist) [S1]


Gabriela – AI policy, public-private partnerships, inclusion and market dynamics; (no specific title provided)


Dean – AI policy and governance expert; identified as Dean Xue Lan, specialist in governance and policy [S7]


Speaker 1 – Moderator/host of the panel discussion [S10]


Additional speakers: None


Full session report: comprehensive analysis and detailed insights

The session opened with the moderator welcoming the panel and framing the week-long focus on “who truly benefits from artificial intelligence and under what rules” [1-4]. She emphasized that AI has moved from a niche enterprise tool to a pervasive part of daily life (work, entertainment, healthcare, hiring and many other domains) [5-9], and argued that “inclusion in AI” goes far beyond equitable data-sets to include access to compute, common standards, supportive policy frameworks and clear cross-border regulations [10-13]. She also presented the discussion as a trade-off between “excellence” and “inclusion” [14-16].


Dean’s opening remarks


Dean argued that AI should first be governed through the existing web of liability, product-safety and other statutes rather than a stand-alone AI law [24-33]. He urged governments to map current legal tools, such as the United States’ liability doctrine, to AI use-cases, presuming existing law is sufficient unless a clear threat model demonstrates otherwise [40-44]. He identified “tail events” (low-probability, high-impact scenarios such as pandemics) as domains where proactive governance, including transparency legislation, is justified [34-40]. Dean said the data centers that power frontier AI are akin to ports or railroads and should be regarded as critical national infrastructure; he also referenced the U.S. policy to subsidise AI data center development in the Global South [111-124]. He noted his prior role in the Trump administration’s White House Office of Science and Technology Policy, where he helped shape the AI Action Plan and the AI Export Program [125-130]. Finally, Dean warned that dismissing frontier models as unnecessary would be a serious oversight, pointing out that the United States is allocating roughly $1 trillion this year to AI development, which will enable capabilities we cannot yet name and could offer particular opportunities for the Global South [156-169].


Gabriela’s response


Gabriela broadened the conversation to the ecosystem needed for responsible AI development. She cited historic government-funded programmes such as DARPA and the early Internet as seeds for breakthrough technologies and called for similar public investment today to nurture an AI ecosystem of incentives, institutions and infrastructure [48-55]. Describing AI technologies as “natural monopolies” that tend toward oligopolistic concentration, she warned that this breaks the “diffusion machine” that spreads innovation [133-146]. To counteract market distortions she advocated public-private partnerships, open-research models and policies that promote economic inclusion, reduce concentration and ensure a level playing field [140-151]. Gabriela highlighted India’s digital identification system as an example of large-scale public investment, noting how the government financed a registry capable of handling 100 million people per month [150-155]. She repeatedly linked inclusion to both ethical duty and competitive advantage, arguing that inclusive policies boost market competitiveness by preventing concentration and fostering a robust diffusion of AI benefits [131-138][140-146]. She also stressed the urgent need to overhaul school curricula, reduce teachers’ administrative burdens and invest in teacher training so the future workforce can effectively engage with AI tools [171-183].


Ivana’s contribution


Ivana positioned AI governance as a strategic organisational capability rather than a mere compliance checklist. She explained that governance must embed privacy, security, legal safeguards and resilience into AI products from design through deployment, requiring investment in privacy-enhancing technologies and a “techno-legal” translation of law into technical tools [65-74][75-84]. She highlighted that many early AI-governance initiatives were led by privacy professionals because a large share of AI harms are privacy-related [70-73]. Continuous post-deployment monitoring, mechanisms for human override of agentic AI, and protection against model drift and hallucinations were presented as essential components of a “trust stack” [80-87]. Ivana also referenced a recent World Economic Forum article she authored on designing trust for agentic AI [115-118]. While acknowledging AI’s benefits, she warned against naïvely ignoring risks such as disinformation, deep-fakes and the reinforcement of existing inequalities, urging a shift from pure risk-management to an “AI-for-good” mindset that engineers fairness and inclusivity into systems [94-107][102-107].


Moderator follow-up and “mindset” framing


The moderator returned to the inclusion question, asking whether it should be framed primarily as an ethical duty or a competitive strategy. She reiterated that inclusion requires building the necessary mindset, skill-sets and tool-sets, emphasizing AI literacy among teachers, students and employees [165-167][84-88][126-130]. Gabriela echoed the need to upgrade education systems and up-skill teachers, noting that no major education system has yet been refreshed to teach AI concepts or equip teachers with the required tools [171-182].


Blind-spot round


– Dean warned that overlooking frontier models is a serious blind spot, stressing the scale of U.S. investment and the unknown capabilities these models will unlock [156-169].


– Gabriela identified chronic under-investment in education and skills development as a fundamental barrier to AI readiness [171-182].


– Ivana pointed out the lack of a global consensus on “red-lines” for AI: clear ethical boundaries that all nations agree not to cross, leaving a gap in international governance [189-198].


Areas of agreement


Both the moderator and Ivana agreed that inclusion must go beyond data representation to include compute access, standards and regulatory clarity [9-13][104-106]. Dean and the moderator concurred that AI compute facilities should be classified as critical infrastructure and that governments should partner with the private sector to develop them [111-124]. Gabriela and the moderator shared the view that AI policy should be built as an ecosystem of investments, incentives and institutions rather than relying solely on regulation [48-52][15].


Points of disagreement


Dean maintained that existing legal frameworks are generally sufficient and that the burden of proof lies with regulators proposing new rules [24-33][40-44], whereas Gabriela argued that the rapid evolution of AI demands a broader ecosystem of public investment and possibly new regulatory tools to prevent market distortions [48-55][53-55]. Dean’s focus on proactive governance for tail-risk events contrasted with Ivana’s call for a fairness-centric, techno-legal approach that embeds ethical design throughout the AI lifecycle [34-38][102-107].


Key take-aways


1. Existing law is presumed adequate for most AI applications; new rules are needed only for demonstrated tail-event gaps [24-33][40-44].


2. Proactive governance is required for low-probability, high-impact AI risks [34-38].


3. AI governance is a strategic capability integrating privacy, security and techno-legal tools [65-74][75-84].


4. Inclusion is both an ethical imperative and a competitive advantage [131-138].


5. Public-private partnerships and government investment are essential to curb market concentration and nurture open research [48-55][140-151].


6. AI compute facilities should be treated as critical national infrastructure [111-124].


7. AI’s natural-monopoly tendencies require policy interventions to prevent concentration [133-146][208-214].


8. Urgent reform of education systems and up-skilling of teachers and workers are needed for AI readiness [171-183].


9. Blind spots include under-estimating frontier models and the absence of global “red-lines” [156-169][189-198].


Closing


The moderator summarized the trade-offs discussed, the potential solutions offered, and the imperative to build AI readiness for national competitiveness [215-217]. Dean closed by reminding the audience of the massive institutional and infrastructural challenges ahead, the need to manage both everyday harms and future catastrophic risks, and the importance of preventing concentration of AI power through coordinated policy and competitive dynamics [200-214].


Overall, the panel expressed optimism tempered by acknowledgement of significant challenges, leaving the audience with a clear roadmap: treat compute as critical infrastructure, leverage existing legal tools while targeting tail-risk gaps, invest in public-private ecosystems, embed fairness and trust into AI systems, and urgently reform education to prepare the next generation for an AI-augmented world.


Session transcript: complete transcript of the session
Speaker 1

And it’s such a pleasure to be here with such lovely panelists and an audience who’s possibly going to skip some of the lunchtime to join us today in our discussions. Let me get started by really talking about, you know, we are towards the end of the week. It’s been a fantastic week, lots of conversations. And one thing which I reflect back on most of the conversations has been what is the most defining question of our time, which is who all is artificial intelligence really benefiting and with what rules? If I look at it, AI’s enterprise infrastructure, AI’s public sector capability, AI’s even geopolitical leverage is what we’ve seen across all these days. But more importantly, AI has become a part and parcel of our daily lives.

It stretches to everything, from making our work life easier to making sure that we get our entertainment as and when and how we require it. And more importantly, from healthcare to hiring to anything you can possibly imagine. When we really focus on inclusion in AI, one thing which has kind of stayed as a thought for the last five days is that inclusion in AI is way beyond equitable representation in data sets. It’s, you know, it’s everything. It’s about access to compute. It’s about standards. It’s about having a right policy framework, which encourages everyone, everywhere. And more important, it’s also getting clarity on regulations, which are there across countries, to see how it can really be beneficial.

Now, to take the discussion ahead, today’s conversation is going to be really about trade-offs: excellence and inclusion. It’s been interesting on how to navigate both these terminologies whenever you think of any policy or a framework. So I’m going to start with my first question to Dean. So Dean, you know, you’ve been working at the frontier of AI policy. You’ve been at the institutional design through the Foundation for American Innovation. There is a lot of growing debate between self-regulation and innovation-first approaches. Where should policymakers really draw the line without really undermining national competitiveness?

Dean

So I think it’s a, first of all, thanks for being here. And thank you for having me. It’s an honor to be here. The way I think about this is that, you know, we will govern AI through a very large intersecting web of different things, right? It’s not just going to be one day one bill is going to get passed and that’s going to be the AI bill and then AI is regulated, right? AI is currently regulated today. It’s regulated by many different things. It’s regulated in the United States by things like liability doctrine and a lot of existing product regulations and things like that. So I think step number one for government is let’s take the existing bodies of law, you know, many of which, just as in India and the United States, we’re quite proud of.

Many countries, you know, are very proud of their regulatory and legal traditions. We have a common law tradition in the United States that we are proud of. So let’s take those things and let’s figure out how to apply them to AI. And then, you know, the companies, I think, thus far, the major AI labs have been, I think, responsible stewards when it comes to the major risks. Now, I think the area where you might need proactive governance first is, at least in my view, really this domain of tail events, potential events that could be very serious, have very serious consequences, that are relatively unlikely. So, you know, a pandemic is an example of a tail event.

And I think AI might have some tail, you know, sort of catastrophic type risks associated with it. And so this is an area where some proactive governance, I think, is needed. And I’ve written supportively about transparency laws in the United States along those lines. So I think that’s where when we have a clear and demonstrated threat model and we have a, you know, clear evidence that existing law is not sufficient. I think one area, one aspect of AI governance that I often push back on and that I often dispute is there’s this kind of assumption baked in whenever we talk about AI regulation that the existing law is insufficient and that the current status quo is that AI is unregulated in some way.

And I think that should actually be, we should have the opposite presumption. We should presume that existing law is sufficient and that there is some sort of good solution. And then, yeah, the burden of proof should be on the person who wants the regulation to show why existing law doesn’t work.

Speaker 1

Thank you, Dean. That’s very interesting that we go with an assumption. And with that, Gabriela, let me move on to you. So should, you know, how can governments foster open innovation, assuming to whatever Dean said, while minimizing the risk of market distortions?

Gabriela

Well, I think that it’s a very nice segue because I completely agree with Dean that there is a very broad portfolio of policy interventions that has not only to do with regulations. Regulation is looking at the way the technology is developed. But we need to think about this as an ecosystem that needs to be nurtured, that needs investments, that needs incentives, that needs institutions, and that needs infrastructure. And therefore it’s not only the technological conversation about what do we do with AI, but what kind of an economy we want that is really productive, that delivers for people with AI, and for that you need government intervention. And let me tell you, what is very interesting is we usually tend to think that the private sector is an innovative force and the government is a brake.

In the U.S. that was not the case. The U.S. was the place where the massive investment in innovation, in DARPA, in the creation of the Internet, all the foundational issues that we are seeing now, was financed at some point by basic research that was paid for by the government of the U.S. And many countries fill that space, and that’s why it’s so important that we invest in research, because it cannot be that the research is being done only by the private sector. And then it’s also true that when the government gets into the research, it’s open research, because it needs to bring everybody around the table, and then it needs to be shared, which is not always the case when you have a private sector innovation. So I will contest this way of framing the issues, in terms of the government only creating market distortions, because at the end it’s about how the government can be effective to address the market distortions that we see many times emerge. In this case, I like to see the AI technologies as natural monopolies: somebody invented something, somebody laid the whole network to operate it, and then it was a monopoly, and now it’s an oligopoly. At the end it’s very concentrated, so there are market distortions now that need to be addressed by government policies. Again, there is a wide range of things that needs to be done to ensure that the main distortion that can occur nationally and globally, which is that this is a story of a lucky few, is prevented.

Speaker 1

Thanks, Gabriela. And you know, it’s interesting you mention that, because at least in India, whenever we speak about public-private partnerships, it’s all about how we are moving from a culture of competition to cooperation, to really working together so that the markets stay healthy. With that, we move over to you, Amanda. Eva, sorry. So, Eva, let’s talk about the global AI governance strategy at Wipro. Many organizations are developing responsible AI frameworks. How do we move beyond policy statements to measurable accountability, specifically when we have to do that at scale?

Ivana Bartoletti

Thank you very much. It’s great to be here, and thanks to all of you for joining. So I have, as I often say, the best job in the world, which is basically to translate a lot of the things that have been discussed over the last few days into practice. We’ve heard democratisation, we’ve heard inclusion, we’ve heard how important it is that AI is inclusive. And inclusivity is not just about access, as was said; it is also about making sure that many get the opportunity to participate in the design of this technology, and in the decisions around what we are producing and who is going to benefit from it.

In a lot of our work, in a lot of organisations, something quite dramatic happened over the last few years when generative AI came about. Before then, AI was very much for engineers and scientists to work with, for people who knew about machine learning. Then generative AI came and everybody got access to it. Do you remember how companies started to scramble? Who gets access? Do we let our employees access these systems? Do we create our own private instance? How do we navigate wanting people to play with these tools against the need to be safe and secure as an enterprise? Then things evolved, and as the debate around governance started, a lot of organisations set up governance boards and ethics boards and all of it. I took on the challenge of AI governance from a privacy standpoint, and many people in organisations did the same, not only because a lot of AI harms are actually privacy harms, but also because privacy professionals knew about risk management. And then we realised that governance of AI is much more than that: much more than risk management, much more than compliance. We realised, and I think this summit shows it really clearly, that AI governance is really a strategic capability that an organisation must have to create long-term value. What does that mean?

It means that you have to do two things. First, you have to look at what you want to deploy or develop, and that is where you need to embed privacy, security, legal protections, and resilience into the products that you’re working on. That is not an easy one. It requires knowledge. It requires investment in privacy-enhancing and security-enhancing technologies. It requires what India, for example, is promoting, which is a techno-legal approach: it’s not just about the law, but about how you translate the law into technical tools. So you have to do all of that, and then you have to look at what happens once the product is in production.

So how do you monitor it once it’s out in production? How do you make sure that if, for example, you’re using AI to hire and fire, as sometimes happens, you have tools to pull the plug if something goes wrong? Now we are in the realm of agentic AI. If you’re interested in this, I’ve just published an article on the World Economic Forum on a subject I’m really fascinated by, which is: what is design for trust in agentic AI? Governance means, for example, that when you design these agents, you give people, according to security standards but also according to their own preferences, the right to intervene when they don’t want the machine to make a decision in an autonomous way.

And then you make sure that you protect against cascading hallucinations, model drift, all of that. So governance, to me, is very much about the capability that organisations have to think laterally about AI, which means impact, design choices, the trust stack that enables people and employees to trust the product. And one element which to me is very important is to make sure that companies bring their employees with them. That is a crucial part of governance, because work is going to change; people are going to change the way they work. And importantly, the people who are going to know best how to use AI are the people working in the company.

This is why I’ve seen successful companies developing a lot of use cases based on their activity and asking their employees: how should we innovate here? This is a fundamental part of governance, I believe, because it brings people with us. So it is a very encompassing approach to governance. I think we are evolving and changing how we see it, but it has certainly become very clear over the last few years, and especially with things like this summit talking about impact, that it is way beyond compliance.

Speaker 1

Thanks, Eva. That makes me curious enough to ask you a very quick question: do you feel we are still underestimating the risks? Because you spoke about AI trust.

Ivana Bartoletti

No, I think, let’s be honest here. Over the last few years we’ve seen amazing benefits coming from AI, right? Beautiful stuff, fantastic. Every day there is a piece of news that makes us hopeful that we can improve our well-being and feel better in the world we live in. But at the same time we’ve seen the risks too, and we’ve got to be honest that looking at the successes without looking at the risks is very naive. We can’t, because we’re not going to be able to deploy AI successfully if we don’t look at the risks. We’ve seen disinformation. We see deepfakes. We have seen AI codifying existing inequalities into decision-making around people’s futures, rights, and livelihoods.

That’s not okay. So we’re not underestimating the risks. But we can’t approach governance purely as risk management and control. We have to shift our approach, do AI for good, and change the way we look at this. We have to engineer fairness into the systems that we create. We have to engineer inclusivity into the systems that we create. And, of course, we have to manage the risk. But the mindset has to really shift.

Speaker 1

Thanks. And that gets me back to Dean. So my question to you is: inclusion at the national level often intersects with compute access and research infrastructure. You spoke about public-private partnerships and about trust in emerging technologies like artificial intelligence, and maybe even quantum going ahead. Should governments treat compute as critical infrastructure?

Dean

Yeah, I think they should. The data centers that power frontier AI systems are going to be, like ports or railroads, the critical infrastructure of the future. I believe that’s true. Prior to my current role, I worked in the Trump administration in the White House Office of Science and Technology Policy. In particular, I was one of the people who shaped the administration’s AI action plan and AI export program, which my former boss, Michael Kratsios, was just here talking about and announcing some next steps on. I was really excited to see that. One of the key messages of that, I feel, has maybe been a communications failure on our part.

You know, the United States government has publicly said, the president has come out and made it a flagship of his AI policy, that we intend to subsidize the development of AI data centers in the global south. That is a policy of the United States under this administration. And we don’t have an interest in exercising control over the technology in the way that I think the prior administration did in some ways. We don’t want to control other countries’ use in the same way the prior administration did. So I do think you should think of it as critical infrastructure, and I think you should think of the United States as a partner in the construction of it.

And I think that owning infrastructure of this kind is an asset that states and regions can use for years to come.

Speaker 1

Thank you. So, you know, it’s been interesting, because whenever we speak about AI at scale, about taking AI to every single person across the planet, there are always three vectors we look at: mindset, skill sets, and tool sets. You just spoke about tool sets, which is extremely relevant. And that takes my question to you, Gabriela. When we talk about mindset, should inclusion be framed primarily as an ethical imperative, a competitive strategy, or both? What’s your take on that?

Gabriela

I really like the way this question is framed, my dear, because I’m sure that people think that going for inclusive policies might hinder competitiveness. Who from the public believes that? Can we have a show of hands? That being inclusive might hinder competitiveness? That investing in competitiveness might go against inclusiveness? I’m an economist, and I think in this area we really need to think about economic inclusiveness. Because if we just think about the social policies that might be needed when some people are left behind, and therefore we need to invest in communities, or in infrastructure, or in kids who are in deep need of education, those things are very important.

But more importantly, we need to consider how we foster market economies that are inclusive, and that’s the core issue here. And I can tell you, because I have been looking at the question of inequalities. Actually, I’m now co-chairing the task force on inequality-related financial disclosures. And what we have seen is that when you have market concentration, productivity flattens. We saw it in an OECD report that we did some years ago: when you have concentration at the top like the one we are seeing now, with companies having the whole concentration of compute capacity, the capacity to sort out and attract skills, and the financial means to invest,

what happens is that the diffusion machine, that very important element that trickles innovative developments down to a broader set of users and benefits, is broken. The diffusion machine is broken, and therefore we need to see how we ensure that diffusion is faster. To do that, of course, I agree with Ivana: the question is how we create the capacities of the people and economies that are lagging behind. But we also need to see how we diminish market dominance. And I know there are many other considerations: geopolitics, competition matters, trade secrets matter, all these things matter. But for me, competitiveness and inclusiveness have to do with creating the highest well-being for people. That’s the outcome, and that’s where ethics, competitiveness, all of these narratives converge. Because at the end, what are we looking at?

In many countries, 70 percent of wealth, 60 percent, 50 percent, is owned by just the top 10 percent of income groups, and that’s not sustainable. I arrive in Europe or in Mexico and I ask, where do I put my children? Because I need good schools. And they tell me: choose the right neighborhood. That’s not possible. And therefore I feel that there needs to be this set of policies. Who is there to ensure a level playing field? Who is the one that needs to use the tax system, or the incentive system, or the investment system, to ensure that people are not left behind, or to address anti-competitive practices? I pay my taxes so that governments deliver on their promises. So I think this is super important. And I feel, for example, that what India has managed, this question of the digital registry, is remarkable.

I was with Mr. Murthy when he presented his plan so many years ago. I could never believe that you were going to be doing registration for 100 million people every month; it was just like, you’re crazy, that will never happen, who finances it, the government? And now you have all of India with digital identification. It’s just amazing. And then you add the financial layer on top of it.

Speaker 1

Thanks, Gabriela. With this vision that the world of tomorrow with AI would, certainly and hopefully, be a better world than it is today, I have a common question for all the panelists. What do you see as the most significant blind spot in the recent AI discourse, keeping in mind all the conversations you’ve had this week and even prior to that? Maybe, Dean, we can go with you first.

Dean

Yeah. So in terms of blind spots, maybe the most important thing I could say here concerns a notion I’ve heard repeated a lot in the conversations I’ve had this week: that the frontier models, the best AI systems, are not necessary, that you can find good-enough models that are cheaper to run. In some cases, I think that will be true. But I would point to a very significant blind spot there. I believe that what we are doing is building systems that are going to be smarter than humans at all cognitive labor. That is a very serious goal. The United States is currently spending, like, it’s not a joke, right?

That’s not a joke. That’s not hype. That’s not crypto. We’re spending a trillion dollars this year on that. That’s the plan. We’re going to do it. It’s going to happen, right? And given the capabilities of those systems and the way that will change how the world works, I think ambitious people will be able to do an unbelievably broad range of things. This could really be an incredible opportunity for countries in the global south, and really everyone in the world, to participate in building the future together. And rejecting frontier models out of some belief that this preserves sovereignty, because there are existing use cases you can think of that can be done with cheaper models, misses that. Think of frontier AI as being useful for stuff that we don’t even have words for today, right? Concepts that you will invent, that we will all invent together. That’s the future that we’re building. It’s an easy thing to miss, and I think missing it is basically missing the ball game.

Speaker 1

That’s very interesting. Thank you for sharing that. Gabriela, what about you?

Gabriela

I would say education, education, education. That is really what keeps me awake at night. Why aren’t we massively upgrading education pedagogy? Why aren’t we changing the way we do school? Why don’t we invest in our teachers, for them to understand how these technologies can help improve student outcomes and at the same time make their lives easier? I see a lot of teachers complaining about all the administrative work they have to do, which doesn’t leave any space for them to invest in quality work with their students. And I’m not seeing that happening. And we need that pipeline. If the future that Dean is projecting is going to arrive, we need people to be very well equipped.

And where do we get that equipment? I’m fine with investing in the workers already in the market; that’s very important, and we need to upgrade their skills too. But the school system needs to be upgraded, and actually I haven’t seen that really happening anywhere. This is a challenge for North, South, East, and West, and I invite everyone to confront it.

Speaker 1

Okay, I love the fact that you brought in education and skilling, because building AI readiness has become so essential to ensuring national competitiveness, no matter which market we are talking about. And just to share an instance: India was one of the few countries where AI was introduced as a school subject way back in 2019, even before the COVID era, so students could learn AI as they would learn biology or physics. But yes, that’s a major challenge we’re trying to work on. Ivana, your take?

Ivana Bartoletti

I was very impressed yesterday when your prime minister spoke, and I was very impressed by one thing that he said: develop here and serve humanity. To me that spoke to something that has been very strong here, and also to something that has been missing so far. He said something very important: that AI needs to be used for inclusion, for economic well-being. Inclusion, as we said, as access, but also as participation for many; as reduction of the gaps between areas of society and geographies across India; inclusion also as creating models that respect your languages and your dialects and the ethical norms that bind this country together, because the AI that we have now often does not reflect the diversity of the world. Following on from this, one thing that has been good has been to see many leaders coming from all over the world.

One thing I’ve always thought is that we haven’t aligned on what the red lines of AI are. Are there things that we as a society, or as a world, are never going to do, or don’t want to do, regardless? We’ve seen appeals to this over recent years. We’ve had massive debates around the ethics of AI, whether in the US, in Europe, or elsewhere, everyone in different ways. But this is about something which is far more than technology, because AI is far more than technology: AI’s power is geopolitics, is earth, cables, sea, so much more.

I think one of the things that we are probably overlooking is how, and whether, we as a world will be able to come together, have some red lines, and say: well, actually, we’re not going to go there.

Speaker 1

Thank you. I just want to take a moment to thank the panelists, and maybe I can ask you, Dean, to sum it up.

Dean

Well, I think there are a lot of different things. Unfortunately, the subject of AI governance is so difficult because it’s so capacious, right? It’s such an enormous topic. But look, I think we have a very real infrastructure-development challenge ahead of us. We have a huge complex of new types of institutions, and of old institutions that are going to change and evolve in various ways. There is all sorts of interlocking work to do on things like that, which are going to be critical for the governance of AI, both for everyday types of harms and for catastrophic things that feel futuristic but that I think are going to be real parts of our lives in the pretty near future.

And then, you know, another thing I would double click on is this need for competitiveness, which I very much agree with. One of the things that I think is exciting about AI is that the price per token of models does drop quite quickly, and so there are a lot of good competitive dynamics here. There are also centralizing tendencies, and I think working together to figure out how to counter those tendencies is going to be extremely important. The concentration of power in AI is, in the long term, going to be, I think, one of the most important parts of the political economy of this topic.

So yeah, I think that’s how I see it.

Speaker 1

Thank you. That’s fantastic. We spoke about trade -offs, we spoke about potential solutions, and we spoke about building AI readiness for national competitiveness. Thank you so much to all the panelists, and it was lovely having a conversation.

Gabriela

Thanks to the moderator. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (21)
Factual Notes: Claims verified against the Diplo knowledge base (9)
Confirmed (high)

“The session opened with the moderator framing the focus on who truly benefits from artificial intelligence and under what rules.”

The knowledge base notes that the main session on AI needs to consider who is using and who is benefiting from it [S13].

Confirmed (high)

“Dean argued that AI should first be governed through the existing web of liability, product‑safety and other statutes rather than a stand‑alone AI law.”

Discussion summaries of US AI governance under the Trump administration highlight a liability-based approach instead of new AI-specific regulation [S108].

Confirmed (high)

“Dean urged governments to map current legal tools—such as the United States’ liability doctrine—to AI use‑cases, presuming existing law is sufficient unless a clear threat model demonstrates otherwise.”

Panel discussions emphasize that existing legal frameworks should be presumed sufficient until proven otherwise, placing the burden of proof on advocates of new rules [S22].

Confirmed (medium)

“Dean identified “tail events” (low‑probability, high‑impact scenarios such as pandemics) as domains where proactive governance, including transparency legislation, is justified.”

Workshop notes refer to low-probability, high-risk scenarios as a focus for risk-mitigation strategies [S113].

Confirmed (high)

“Dean said data‑centres that power frontier AI are akin to ports or railroads and should be regarded as critical national infrastructure.”

A speaker explicitly compared AI-powering data centres to ports and railroads, calling them future critical infrastructure [S114].

Confirmed (medium)

“Dean noted his prior role in the Trump administration’s White House Office of Science and Technology Policy.”

The same source that mentions the data-centre analogy also confirms his previous work in the Trump administration’s White House [S114].

Additional Context (medium)

“Dean suggested that existing regulations should be used as a foundation and complemented rather than replaced with entirely new AI statutes.”

Commentary from WS #162 advises countries to complement existing regulations instead of creating wholly new AI laws, adding nuance to Dean’s stance [S37].

Additional Context (low)

“Historical precedent shows that legal principles can adapt to new technologies without needing separate legislation.”

Analysis of past technology regulation (e.g., the internet) argues that existing frameworks can be extended to cover emerging tech, supporting the view that AI may be governed by current law [S60].

Additional Context (medium)

“Transparency legislation is important for managing tail‑event risks.”

Experts stress that treating algorithms as black boxes limits transparency and can perpetuate disparities, underscoring the relevance of transparency measures [S105].

External Sources (117)
S1
Global AI Governance: Reimagining IGF’s Role & Impact — – **Ivana Bartoletti** – Virtual panelist (specific role/title not mentioned in transcript) Elizabeth Orembo: Thanks, I…
S2
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Moderator:you very much, Björn. And you said it, the key of us being together here is to learn from each other, which me…
S3
Lightning Talk #245 Advancing Equality and Inclusion in AI — – **Ivana Bartoletti**: Was scheduled to speak but was unable to attend due to unfortunate circumstances. Role/expertise…
S4
DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023 — Audience:OK, that’s a nice clarification. Hello, everyone. I’m Alice Lenna from Brazil. I’m also a consultant for GRI, t…
S5
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-diplomacy-and-conflict-management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S6
How AI Is Transforming Diplomacy and Conflict Management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S7
Laying the foundations for AI governance — – **Lan Xue**: Dean (Dean Xue Lan), expertise in governance and policy Robert Trager: Good. We can finally end this her…
S8
Legal Notice: — Chief of International Law Studies. He has previously served as Dean of the George C. Marshall Center in Germany and Gen…
S9
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — It’s wonderful. At NTU Singapore, we’re the newest members of ICAIN, but it’s fantastic that the… And I’ve only been a…
S10
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S11
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S12
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S13
Main Session on Artificial Intelligence | IGF 2023 — Needs to consider who is using, who is benefiting from it, and who has the risk
S14
WS #31 Cybersecurity in AI: balancing innovation and risks — Charbel Shbir: Hello. Yes, it is. Hello, my name is Charbel Shbir. I’m president of Lebanese ISOC. Regarding your q…
S15
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — It is crucial to strike the right balance between regulation and innovation to ensure fairness and responsible consumpti…
S16
AI Governance Dialogue: Steering the future of AI — #### Pillar 1: Inclusion Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance…
S17
Open Forum #71 Advancing Rights-Respecting AI Governance and Digital Inclusion through G7 and G20 — Alison Gilwald: and Melinda Ngo. Thank you very much. I’m going to, of course, leave Melinda to speak to the specifics o…
S18
WS #254 The Human Rights Impact of Underrepresented Languages in AI — Gustavo Fonseca Ribeiro: Yeah, of course. So what can we do at the international level, and international organizatio…
S19
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Anne Flanagan: Hello, apologies that I’m not there in person today. I’m in transit at the moment, hence my picture on yo…
S20
Bridging the AI innovation gap — LJ Rich: to invite our opening keynote. It’s a pleasure to invite to the stage the director of the Telecommunications St…
S21
Policy Network on Artificial Intelligence | IGF 2023 — It is believed that allowing academics and technical groups the space to explore and experiment is crucial for advancing…
S22
Panel Discussion Inclusion Innovation & the Future of AI — Existing legal frameworks should be presumed sufficient until proven otherwise, with burden of proof on those advocating…
S23
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Legal measures are acknowledged as an important consideration in the context of AI. However, it is argued that relying s…
S24
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S25
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S26
Main Session | Policy Network on Artificial Intelligence — Brando Benifei: Thank you. Thank you very much. First of all, I’m really happy to be able to talk in this very impor…
S27
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S28
Science as a Growth Engine: Navigating the Funding and Translation Challenge — So I would just say that it’s something which the private sector can play a part, because as you say, you cross borders….
S29
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Cohen emphasised that sandboxes “require significant governance resources, clear eligibility criteria, testing framework…
S30
Nepal Engagement Session — The conversation highlighted two major technological breakthroughs. First, the integration of Bhashini (India’s language…
S31
Education meets AI — In conclusion, the integration of AI and digital tools in education is reshaping the job market and requires individuals…
S32
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Ivana Bartoletti: Yeah. So thank you. Excellent questions. So I wanted to just start with a provocation. I mean, yo…
S33
Why science metters in global AI governance — Bengio advocated for high-level principles that avoid technical details since “the details are going to change,” while o…
S34
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S35
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S36
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Lack of infrastructure, skills, compute access, and data access hinder policy effectiveness
S37
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S38
Dynamic Coalition Collaborative Session — Legal and regulatory | Cybersecurity | Development The speaker outlines a comprehensive framework for AI governance tha…
S39
Swiss AI Initiatives and Policy Implementation Discussion — This comment challenges the prevailing ‘checkbox compliance’ approach to AI governance by proposing a fundamental reorie…
S40
Networking Session #74 Digital Innovations Forum- Solutions for the Offline People — Balancing government-funded projects with maintaining market competitiveness
S41
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — An audience member emphasizes the importance of research and continuous stakeholder engagement in policy formulation. Th…
S42
Main Session | Dynamic Coalitions — June Paris: Can you hear me? Yes. Okay. Please, go ahead, we’re looking forward to hearing you talking about bridging di…
S43
AI ethics shifts from principles to governance frameworks — AI now influences decisions in healthcare, finance, hiring, and public administration, pushing AI ethics into thecentre …
S44
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S45
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — The speakers demonstrated remarkable consensus across several key areas: the need to balance governance with innovation,…
S46
Smart Regulation Rightsizing Governance for the AI Revolution — Bella Wilkinson from Chatham House provided a realistic assessment of the current geopolitical landscape, arguing that g…
S47
AI and the future of digital global supply chains (UNCTAD) — However, the adoption of AI in trade faces major barriers. These include the lack of expertise, high costs, absence of g…
S48
Technology Regulation and AI Governance Panel Discussion — Competition Policy and Market Structure | Legal and regulatory | Economic — Most restrictions to competition actually come…
S49
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Competition policy and advocacy play an important role, especially in developing countries, where competition authoritie…
S50
Building fair markets in the algorithmic age (The Dialogue) — Furthermore, the analysis highlights another unintended consequence of AI in the competition arena. It suggests that dif…
S51
Panel Discussion Inclusion Innovation & the Future of AI — Ball advocates for minimal new regulation, preferring existing legal frameworks with burden of proof on those wanting ne…
S52
WS #205 Contextualising Fairness: AI Governance in Asia — Tejaswita Kharel: in global conversations around AI bias at the moment? Every speaker strictly has five minutes and we…
S53
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fairness, accountability, and transparency must be evaluated in a relevant way Importance of hearing various perspectiv…
S54
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S55
Dedicated stakeholder session (in accordance with agreed modalities for the participation of stakeholders of 22 April 2022) — Diplo Foundation: Mr. Chair, distinguished delegates, colleagues, my name is Vladimir Adunovic. I represent Diplo Fou…
S56
Open Forum #45 Advancing Cyber Resilience of Critical Infrastructure — Ms. Timea Suto: Thanks, Marie. I’ll try to be brief. Really, for business protecting critical infrastructure today, it i…
S57
WS #103 Aligning strategies, protecting critical infrastructure — Ms Robyn Greene argues that policies must consider the broader technological landscape and its impacts on critical infra…
S58
INTERNATIONAL CIIP HANDBOOK 2008 / 2009 — Critical infrastructures extend across many sectors of the economy and key government services.’ In the first section…
S59
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — These key comments fundamentally shaped the discussion by establishing three critical paradigm shifts: (1) from standard…
S60
Do we really need specialised AI regulation? — History demonstrates the resilience of legal principles in adapting to new technologies. For example, when the internet …
S61
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — “First, people must be at the center of AI strategy, as we heard all along today”[107]. “Investment in skills, lifelong …
S62
YCIG & DTC: Future of Education and Work with advancing tech & internet — Pajaro points out that the rapid advancement of AI and other technologies is changing the skills required in the modern …
S63
Artificial intelligence (AI) and cyber diplomacy — The conversation expanded to highlight the universal need for digital literacy and capacity building in AI, urging gover…
S64
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Future skills requirements emphasise working with technology rather than coding, with increasing importance placed on ps…
S65
Vers un indice de vulnérabilité numérique (OIF) — Another noteworthy observation is the shift in focus from a punitive approach to a preventive approach in terms of regul…
S66
Informal Stakeholder Consultation Session — Moving from Reactive Regulation to a Proactive Vision: Called for moving beyond reactive regulation that only limits harm…
S67
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — Overall, the analysis highlights the contrasting perspectives and approaches to regulation, specifically the comparison …
S68
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — The discussion showed remarkable consensus on identifying problems (infrastructure gaps, skills shortages, data availabi…
S69
The Global Power Shift India’s Rise in AI & Semiconductors — So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resources…
S70
The Foundation of AI Democratizing Compute Data Infrastructure — High level of consensus across diverse stakeholders (academic, government, civil society, private sector, international …
S71
WS #462 Bridging the Compute Divide a Global Alliance for AI — Ivy describes Stargate as a $500 billion infrastructure project over four years that requires different types of partner…
S72
Critical infrastructure — AI plays a pivotal role in safeguarding critical infrastructure systems. AI can strengthen the security of critical infr…
S73
AI as critical infrastructure for continuity in public services — The discussion revealed relatively low levels of direct disagreement, with most speakers focusing on different aspects o…
S74
OpenAI’s push to establish AI as critical infrastructure — In a recent interview, Chris Lehane, the newly appointed vice president of public works at OpenAI, underscores AI’s role …
S75
Session — The need for inclusion of diverse views, not just representation
S76
Open Forum #71 Advancing Rights-Respecting AI Governance and Digital Inclusion through G7 and G20 — Gilwald contends that current digital inclusion challenges are primarily demand-side issues rather than infrastructure p…
S77
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Need to develop concrete public interest frameworks covering models, talent, and data sharing beyond just compute
S78
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S79
AI Governance Dialogue: Steering the future of AI — Doreen Bogdan-Martin: Thank you. And we now have a chance together to reflect on AI governance with someone who has a un…
S80
Panel Discussion Inclusion Innovation & the Future of AI — However, Ball acknowledged that proactive governance may be necessary for addressing “tail events” – low-probability, hi…
S81
Panel 3 – Legal and Regulatory Tools to Reduce Risks and Strengthen Resilience  — Submarine cables should be classified and treated as critical infrastructure
S82
Dynamic Coalition Collaborative Session — Legal and regulatory | Cybersecurity | Development The speaker outlines a comprehensive framework for AI governance tha…
S83
Building Sovereign and Responsible AI Beyond Proof of Concepts — Theresa describes emerging UK regulations targeting high‑risk AI, including transparency, explainability and third‑party…
S84
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Alan Paic: Yes, it was not about further countries joining. Well, I can also mention that. So we do have a membership pro…
S85
Networking Session #74 Digital Innovations Forum- Solutions for the Offline People — Balancing government-funded projects with maintaining market competitiveness
S86
Main Session | Dynamic Coalitions — June Paris: Can you hear me? Yes. Okay. Please, go ahead, we’re looking forward to hearing you talking about bridging di…
S87
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.” [27] “AI may shape the balance of power, but it is the governance of AI t…
S88
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — AI is a general-purpose technology that holds the potential to increase productivity and build impactful solutions acros…
S89
Global Enterprises Show How to Scale Responsible AI — The conversation delved deeply into the practical challenges of implementing AI governance at scale, with Gurnani provid…
S90
AI ethics shifts from principles to governance frameworks — AI now influences decisions in healthcare, finance, hiring, and public administration, pushing AI ethics into the centre …
S91
From Technical Safety to Societal Impact Rethinking AI Governanc — It applies in the business world as well. You have 51% of the board control or equity in a company. Basically control t…
S92
Smart Regulation Rightsizing Governance for the AI Revolution — Bella Wilkinson from Chatham House provided a realistic assessment of the current geopolitical landscape, arguing that g…
S93
Shaping the Future AI Strategies for Jobs and Economic Development — Infrastructure challenges including energy, cooling, and water consumption are critical blind spots that need immediate …
S94
AI and the future of digital global supply chains (UNCTAD) — However, the adoption of AI in trade faces major barriers. These include the lack of expertise, high costs, absence of g…
S95
From principles to practice: Governing advanced AI in action — Lack of consensus on what constitutes ‘intolerable risks’ and appropriate risk thresholds globally
S96
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Rosemary Kayess:Hello, thank you for the invitation to speak today. Article 27 of the Universal Declaration of Human Rig…
S97
Panel Discussion: 01 — We are expecting our other guests to join us very soon as Ms. Devjani Khosh, Distinguished Fellow Niti Aayog is going to…
S98
(Interactive Dialogue 2) Summit of the Future – General Assembly, 79th session — Juan Manuel Santos: Distinguished co-chairs, excellencies, ladies and gentlemen, like my fellow elder, Ellen Johnson, …
S99
When Code and Creativity Collide: AI’s Transformation of Music and Creative Expression — Juliet Mann argues that artificial intelligence is advancing at an unprecedented pace compared to previous technologies….
S100
The Global Economic Outlook — Georgieva emphasizes the importance of making artificial intelligence accessible to all, not just a privileged few. She …
S101
How AI Is Transforming India’s Workforce for Global Competitivene — Sue Daley from Tech UK shared how the UK government has created an AI skills partnership aimed at training over one mill…
S102
Thinking through Augmentation — Additionally, Christy emphasizes the necessity of involving workers in the AI transformation process. She believes that …
S103
Media and Education for All: Bridging Female Academic Leaders and Society towards Impactful Results — Accessibility as Universal Design | Balancing Principles with Accessibility — Anita Lamprecht: Thank you very much…
S104
AI for equality: Bridging the innovation gap — These key comments transformed what could have been a superficial discussion about women and technology into a sophistic…
S105
Internet Governance Forum 2024 — During WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy, Monica Lopez pointed out that treating algor…
S106
Keynote-Jeet Adani — She rises to stabilize, she rises to anchor a world searching for balance and she rises to build systems that are inclus…
S107
Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress — Louise Hooper: Thanks. Good morning, everybody. So the first thing that I’d like to talk about is what AI systems are an…
S108
Discussion Summary: US AI Governance Strategy Under the Trump Administration — Ball advocated for a liability-based approach rather than comprehensive preemptive regulation, suggesting policymakers s…
S109
Rethinking AI regulation: Are new laws really necessary? — Specialised AI regulation may not be necessary, as existing laws already cover many aspects of AI-related concerns. Jova…
S110
Keynotes — O’Flaherty acknowledges that the regulatory work is not finished and that current regulatory models will likely be insuf…
S111
Keynote-Alexandr Wang — “We publish model cards and evaluation benchmarks and data so you can see how they work, their intended use, and how we …
S112
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — The innovate side really comes down to areas like skilling, which I know when Minister Chaudhry joins us, we will get i…
S113
Panel 2 – Anticipating and Mitigating Risks Along the Global Subsea Network  — I think Andy yesterday from the UK government mentioned that that report and that committee was coming at it from a very…
S114
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-inclusion-innovation-the-future-of-ai — Yeah, I think they should. The data centers that power frontier AI systems are going to be a part, you know, like ports …
S115
Building Scalable AI Through Global South Partnerships — And this particular event gave us that opportunity. I think we were very clear that what we wanted to do was to let peop…
S116
WS #100 Integrating the Global South in Global AI Governance — Data and Infrastructure Challenges | Focus of inclusion efforts — Idlebi suggests that initiatives are needed to encourage…
S117
WS #208 Democratising Access to AI with Open Source LLMs — Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and th…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
3 arguments · 158 words per minute · 938 words · 355 seconds
Argument 1
AI must be evaluated on who it benefits and under what regulatory rules.
EXPLANATION
Speaker 1 frames the central question of the summit as determining the beneficiaries of artificial intelligence and establishing appropriate governance frameworks. This sets the agenda for discussing equity, policy, and impact.
EVIDENCE
Speaker 1 states that the most defining question of our time is who AI really benefits and with what rules, framing the need to consider beneficiaries and regulatory frameworks. [4]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The IGF Main Session on Artificial Intelligence stresses that AI governance must consider who is using, who benefits, and who bears the risks, directly supporting this framing [S13].
MAJOR DISCUSSION POINT
Defining AI beneficiaries and governance
Argument 2
Inclusion in AI goes beyond data representation to include compute access, standards, policy frameworks, and regulatory clarity.
EXPLANATION
Speaker 1 argues that equitable AI requires more than diverse datasets; it also needs universal access to computational resources, common standards, supportive policies, and clear cross‑border regulations. This broader view of inclusion links technical and regulatory dimensions.
EVIDENCE
Speaker 1 explains that inclusion in AI extends beyond equitable data sets to include access to compute, standards, policy frameworks, and regulatory clarity across countries. [9-13]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The IGF inclusion pillar highlights a broad definition of AI inclusion that covers standards, policy frameworks and cross-border regulations beyond data sets [S16]; discussions on under-represented languages underline the need for linguistic and compute inclusion [S18]; the Nepal session shows language-AI platforms improving access, illustrating inclusion beyond data [S30]; and the Democratizing AI dialogue notes challenges of sharing compute resources, reinforcing the broader inclusion view [S27].
MAJOR DISCUSSION POINT
Broad definition of AI inclusion
AGREED WITH
Ivana Bartoletti
Argument 3
Policy must balance the trade‑offs between excellence and inclusion in AI development.
EXPLANATION
Speaker 1 highlights that achieving high performance (excellence) can conflict with equitable access (inclusion), and that policymakers need to navigate these competing priorities. This sets up the panel’s focus on navigating trade‑offs.
EVIDENCE
Speaker 1 notes that the conversation will focus on trade-offs between excellence and inclusion, highlighting the challenge of balancing high performance with equitable access in policy design. [15]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The consumer-protection forum stresses striking the right balance between regulation (excellence) and innovation (inclusion) for fairness [S15]; the IGF Policy Network notes the importance of balancing regulation with fostering innovation [S21]; and the cybersecurity-in-AI session discusses balancing innovation and risk, echoing the trade-off theme [S14].
MAJOR DISCUSSION POINT
Balancing excellence and inclusion
Dean
3 arguments · 169 words per minute · 1310 words · 464 seconds
Argument 1
Existing legal frameworks should be presumed sufficient for AI, with regulators bearing the burden of proof to show inadequacy.
EXPLANATION
Dean contends that many AI risks can be addressed through current liability, product, and other regulations, and that new AI‑specific laws should only be introduced when clear gaps are demonstrated. He flips the usual presumption, placing the onus on proponents of regulation.
EVIDENCE
Dean argues that existing bodies of law, such as liability doctrines and product regulations, should be applied to AI, and that the presumption should be that current law is sufficient, placing the burden of proof on those seeking new regulation. [24-33] He also states that we should presume existing law is sufficient and that the burden of proof should be on the person who wants regulation to show why existing law doesn’t work. [40-44]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Panel Discussion on Inclusion, Innovation & the Future of AI explicitly states that existing legal frameworks should be presumed sufficient until proven otherwise, placing the burden of proof on those seeking new rules [S22]; the cybersecurity-in-AI session also highlights reliance on existing liability doctrines for AI software [S14].
MAJOR DISCUSSION POINT
Presumption of adequacy of existing law
DISAGREED WITH
Gabriela
Argument 2
Proactive governance is required for low‑probability, high‑impact tail events associated with AI.
EXPLANATION
Dean identifies catastrophic scenarios, likening AI risks to pandemics, and argues that such tail events justify anticipatory regulatory measures, including transparency laws, before clear threats materialize.
EVIDENCE
Dean identifies tail events (low-probability, high-impact scenarios like pandemics) as areas where proactive AI governance is needed, and he has advocated for transparency laws to address such risks. [34-38]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same panel discussion calls for anticipatory governance of low-probability, high-impact tail events and mentions support for transparency laws [S22]; the ‘From principles to practice’ session discusses managing global-scale AI challenges, aligning with proactive tail-risk governance [S24].
MAJOR DISCUSSION POINT
Governance for AI tail risks
DISAGREED WITH
Ivana Bartoletti
Argument 3
AI compute infrastructure should be classified as critical national infrastructure.
EXPLANATION
Dean compares data centers powering frontier AI to ports and railroads, arguing they are essential to national security and economic competitiveness, and should be treated as critical infrastructure with public‑private partnership support.
EVIDENCE
Dean states that data centers powering frontier AI systems are comparable to ports or railroads and should be classified as critical infrastructure, citing his role in shaping the U.S. AI action plan and export program. [111-115]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel participants argue that AI compute facilities should be treated as critical national infrastructure, akin to ports or railways [S22]; the Sovereign AI for India briefing identifies compute as a strategic bottleneck, underscoring its critical status [S25]; and the Democratizing AI dialogue notes foundational compute resources as a major challenge, supporting the critical-infrastructure framing [S27].
MAJOR DISCUSSION POINT
AI compute as critical infrastructure
AGREED WITH
Speaker 1
DISAGREED WITH
Speaker 1, Gabriela
Gabriela
3 arguments · 143 words per minute · 1299 words · 544 seconds
Argument 1
AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation.
EXPLANATION
Gabriela emphasizes that effective AI governance requires coordinated funding, incentive mechanisms, institutional support, and robust infrastructure, forming a holistic ecosystem that nurtures innovation.
EVIDENCE
Gabriela says that AI policy must be an ecosystem comprising investments, incentives, institutions, and infrastructure, not merely regulation, to nurture the technology. [48-52]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Policy Research Roadmap emphasizes an ecosystem approach combining investments, incentives, institutions and infrastructure for evidence-based policy [S19]; the ‘Science as a Growth Engine’ discussion highlights the role of government funding and institutional support in translating research into impact, reinforcing the ecosystem view [S28]; AI sandboxes are cited as requiring governance resources, clear eligibility and institutional frameworks, part of such an ecosystem [S29]; the panel also presents a counterpoint from Dean favoring minimal new regulation, illustrating the debate [S22].
MAJOR DISCUSSION POINT
Ecosystem approach to AI policy
AGREED WITH
Speaker 1
DISAGREED WITH
Dean
Argument 2
Government has historically driven major technological breakthroughs and should continue to fund AI research to prevent market distortions.
EXPLANATION
Gabriela points to DARPA and the creation of the Internet as examples of public‑sector innovation, arguing that similar public investment is needed to avoid reliance on private‑sector monopolies and to ensure open, inclusive AI development.
EVIDENCE
Gabriela notes that the U.S. government funded DARPA and the Internet, arguing that similar public investment is essential to avoid market distortions and support AI as a natural monopoly. [53-55]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ‘Science as a Growth Engine’ session notes that government funding of basic research has historically avoided market distortions and spurred breakthroughs, supporting continued public investment in AI [S28]; the Sovereign AI for India briefing references historic public-sector successes like DARPA and the Internet as models for AI investment [S25]; and the panel discussion contrasts Dean’s minimal-regulation stance with calls for comprehensive government intervention (Gabriela) [S22].
MAJOR DISCUSSION POINT
Public sector role in AI innovation
Argument 3
Education systems must be overhauled to provide AI literacy and skills for teachers and students.
EXPLANATION
Gabriela calls for modernizing curricula, investing in teacher training, and reducing administrative burdens so educators can integrate AI tools effectively, arguing that without such reforms the AI workforce pipeline will be insufficient.
EVIDENCE
Gabriela criticizes the lack of modernization in education, calling for upgraded pedagogy, teacher training, and skill development to prepare people for AI-driven futures. [171-183]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ‘Education meets AI’ report stresses the need for AI literacy, digital skills, and curriculum reform to keep pace with AI-driven job markets [S31]; the Policy Network notes that old educational systems need to change to support AI competencies [S21]; and the Nepal engagement session demonstrates how AI language platforms can empower local officials, illustrating the benefits of AI-enabled education [S30].
MAJOR DISCUSSION POINT
AI education reform
AGREED WITH
Speaker 1, Ivana Bartoletti
Ivana Bartoletti
3 arguments · 143 words per minute · 1426 words · 594 seconds
Argument 1
AI governance should embed privacy, security, and legal safeguards from design through deployment, using a techno‑legal approach.
EXPLANATION
Ivana stresses that products need built‑in privacy and security controls, and that translating legal requirements into technical tools—exemplified by India’s techno‑legal model—is essential for responsible AI.
EVIDENCE
Ivana explains that effective AI governance requires embedding privacy, security, and legal protections into products from the design stage and continuing monitoring in production, using a techno-legal approach as exemplified by India’s initiatives. [70-76]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ‘Why science matters in global AI governance’ briefing discusses techno-legal approaches that embed privacy, security and legal requirements into AI products from design to production [S33]; the cybersecurity ethics session argues that legal measures alone are insufficient, underscoring the need for integrated technical safeguards [S23]; and Ivana’s parliamentary remarks highlight the role of lawmakers in shaping such techno-legal frameworks [S32].
MAJOR DISCUSSION POINT
Techno‑legal embedding of safeguards
Argument 2
Trust in agentic AI requires mechanisms that let users intervene in autonomous decisions to prevent cascading harms.
EXPLANATION
Ivana argues that users should have the ability to stop or override AI agents when outcomes are undesirable, addressing risks like hallucinations and model drift, thereby building a trust stack.
EVIDENCE
Ivana describes the need for trust in agentic AI by giving users the right to intervene in autonomous decisions, preventing cascading hallucinations and model drift, as outlined in her recent World Economic Forum article. [80-83]
MAJOR DISCUSSION POINT
User intervention for trustworthy AI
Argument 3
Governance should shift from pure risk management to engineering fairness and inclusivity while still addressing AI hazards.
EXPLANATION
Ivana acknowledges AI’s benefits and risks, urging a move beyond risk‑control frameworks toward proactive design of fair and inclusive systems, integrating ethical considerations into AI development.
EVIDENCE
Ivana acknowledges both the benefits and the risks of AI, arguing that governance should move beyond pure risk management toward engineering fairness and inclusivity while still managing hazards. [94-107]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same ‘Why science matters in global AI governance’ source calls for moving beyond risk-control frameworks toward proactive engineering of fairness and inclusivity in AI systems [S33]; the cybersecurity ethics discussion also emphasizes fairness and ethical dimensions alongside risk management [S23].
MAJOR DISCUSSION POINT
Fairness and inclusivity in AI governance
DISAGREED WITH
Dean
Agreements
Agreement Points
Inclusion in AI must be understood broadly, covering not only data representation but also access to compute, standards, policy frameworks and the need to embed fairness and inclusivity throughout AI systems.
Speakers: Speaker 1, Ivana Bartoletti
Inclusion in AI goes beyond data representation to include compute access, standards, policy frameworks, and regulatory clarity. Fairness and inclusivity in AI governance
Both Speaker 1 and Ivana stress that AI inclusion is more than equitable datasets; it requires universal compute access, standards, supportive policies and the engineering of fairness and inclusivity into AI products [9-13][104-106].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the inclusive AI agenda highlighted in the AI Governance Dialogue emphasizing unprecedented participation and inclusion-by-design, and with calls to address infrastructure, data and skill gaps as barriers to inclusive policy [S53][S54][S59][S68].
AI compute infrastructure should be treated as critical national infrastructure and receive public‑private partnership support.
Speakers: Dean, Speaker 1
AI compute infrastructure should be classified as critical national infrastructure.
Dean argues that data centres powering frontier AI are akin to ports or railways and must be regarded as critical infrastructure, a view echoed by Speaker 1’s question about treating compute as critical infrastructure for national inclusion [111-114][108-110].
POLICY CONTEXT (KNOWLEDGE BASE)
Several policy discussions treat AI compute as critical infrastructure, citing the need for public-private partnerships and security considerations, e.g., the International CIIP Handbook definition of critical sectors [S58], the OpenAI positioning of AI as critical infrastructure [S74], and large-scale compute alliance initiatives [S71][S69][S72].
Education systems and workforce skills must be upgraded to provide AI literacy for teachers, students and employees.
Speakers: Gabriela, Speaker 1, Ivana Bartoletti
Education systems must be overhauled to provide AI literacy and skills for teachers and students.
Gabriela calls for modernising curricula, teacher training and reducing administrative burdens; Speaker 1 highlights the need for mindsets, skill sets and tool sets; Ivana stresses bringing employees along and upskilling them, all pointing to a shared demand for AI-focused education and capacity building [171-183][126-130][84-88].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of AI literacy and upskilling is reflected in the AI Impact Summit 2026 emphasis on skills investment [S61], and multiple analyses urging curriculum adaptation and digital literacy programs [S62][S63][S64].
Effective AI policy requires an ecosystem of investments, incentives, institutions and infrastructure rather than relying solely on regulation.
Speakers: Gabriela, Speaker 1
AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation.
Gabriela describes AI policy as an ecosystem of funding, incentives and institutions, while Speaker 1 frames the policy challenge as balancing excellence and inclusion, both moving beyond a narrow regulatory focus [48-52][15].
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions contrast minimal new regulation with comprehensive government-driven investment and institutional frameworks, supporting an ecosystem approach [S51][S69][S70][S68].
Similar Viewpoints
Both see a strong role for government partnership in building and governing AI infrastructure, whether as critical national assets or as part of a broader innovation ecosystem [111-114][48-52].
Speakers: Dean, Gabriela
AI compute infrastructure should be classified as critical national infrastructure. AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation.
Both stress that AI governance must go beyond compliance and data‑set equity to embed fairness, inclusivity and broader systemic access [9-13][104-106].
Speakers: Speaker 1, Ivana Bartoletti
Inclusion in AI goes beyond data representation to include compute access, standards, policy frameworks, and regulatory clarity. Fairness and inclusivity in AI governance
Both highlight the necessity of up‑skilling people—teachers, employees and the broader workforce—to ensure responsible AI use and governance [171-183][84-88].
Speakers: Gabriela, Ivana Bartoletti
Education systems must be overhauled to provide AI literacy and skills for teachers and students. Governance should shift from pure risk management to engineering fairness and inclusivity while still addressing AI hazards.
Unexpected Consensus
Government as a partner for AI compute infrastructure
Speakers: Dean, Gabriela
AI compute infrastructure should be classified as critical national infrastructure. AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation.
Dean frames compute centres as critical infrastructure needing public partnership, while Gabriela, typically focused on broader ecosystem issues, also stresses direct government investment to avoid market distortions, an alignment not explicitly anticipated given their different primary emphases [111-114][48-52].
POLICY CONTEXT (KNOWLEDGE BASE)
Public-private partnership models for compute infrastructure are advocated in the Genesis Project and the Global Alliance for AI, positioning governments as key partners [S69][S71][S51].
Recognition of AI’s transformative power and the need for proactive governance
Speakers: Dean, Ivana Bartoletti
AI compute infrastructure should be classified as critical national infrastructure. Fairness and inclusivity in AI governance
Dean warns that AI systems will become smarter than humans across all cognitive labour, while Ivana stresses trust in agentic AI and the necessity of embedding safeguards; both converge on the view that AI's future impact is profound and requires forward-looking governance, a point not overtly shared earlier in the discussion [159-161][80-83].
POLICY CONTEXT (KNOWLEDGE BASE)
Speakers repeatedly stress AI’s transformative potential and the need for proactive, forward-looking governance, as in the AI Governance Dialogue opening remarks and OpenAI’s New Deal analogy [S54][S66][S74][S61].
Overall Assessment

The panel shows strong convergence on four themes: a broad, systemic view of inclusion; the classification of AI compute as critical infrastructure; the urgent need to overhaul education and up‑skill the workforce; and the requirement for an ecosystem‑based policy approach that blends public investment with regulatory clarity.

High consensus across speakers, indicating a shared understanding that AI governance cannot rely solely on narrow regulation but must integrate infrastructure, education, and inclusive design. This consensus suggests that future policy initiatives should prioritize public‑private partnerships, critical‑infrastructure designation for compute resources, and large‑scale capacity‑building programmes to achieve equitable AI benefits.

Differences
Different Viewpoints
Presumption of adequacy of existing legal frameworks versus the need for new, broader AI policy interventions
Speakers: Dean, Gabriela
Existing legal frameworks should be presumed sufficient for AI, with regulators bearing the burden of proof to show inadequacy. AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation.
Dean argues that current liability, product, and other statutes are enough for AI governance and that any new regulation must first prove a gap in existing law [24-33][40-44]. Gabriela counters that effective AI governance requires a holistic ecosystem of public investment, incentives and possibly new regulatory tools, indicating that existing frameworks are insufficient on their own [48-52][53-55].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates at the Inclusion Innovation panel show one side favoring existing frameworks with burden of proof [S51], while others argue for new policy tools, echoing broader discussions on whether existing competition law suffices [S48][S60].
Focus on risk‑centric, tail‑event proactive regulation versus a shift toward fairness and inclusivity beyond pure risk management
Speakers: Dean, Ivana Bartoletti
Proactive governance is required for low‑probability, high‑impact tail events associated with AI. Governance should shift from pure risk management to engineering fairness and inclusivity while still addressing AI hazards.
Dean emphasizes anticipatory regulation for catastrophic, low-probability AI scenarios (e.g., pandemics) and supports transparency laws as a pre-emptive measure [34-38]. Ivana argues that AI governance should move beyond a narrow risk-control lens to embed fairness and inclusivity throughout design and deployment, suggesting a broader ethical-technical approach [102-107].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent literature contrasts risk-based, tail-event regulation with rights-based, fairness-oriented approaches, highlighting a shift toward preventive, inclusive governance [S65][S66][S67][S68][S50].
Role of compute infrastructure – critical national infrastructure versus a component of broader inclusion without explicit critical‑infrastructure status
Speakers: Dean, Speaker 1, Gabriela
AI compute infrastructure should be classified as critical national infrastructure. Inclusion in AI includes access to compute as one of many elements needed for equitable AI. AI policy must be an ecosystem that includes investments and infrastructure, not limited to regulation.
Dean asserts that data centers powering frontier AI are akin to ports or railroads and should be treated as critical infrastructure, warranting public-private partnership and subsidies [111-115]. Speaker 1 and Gabriela treat compute access as part of a broader inclusion and ecosystem agenda, without explicitly framing it as critical infrastructure, focusing instead on investment and policy support [9-13][48-52].
POLICY CONTEXT (KNOWLEDGE BASE)
While some sources label AI compute as critical infrastructure requiring protection [S56][S57][S58][S72][S73][S74], other discussions frame it as one element of broader inclusion without explicit critical status [S53][S59].
Unexpected Differences
Government’s role in preventing market distortions versus reliance on existing competition law
Speakers: Dean, Gabriela
Existing legal frameworks should be presumed sufficient for AI, with the burden of proof on regulators. Government has historically driven major technological breakthroughs and should continue to fund AI research to prevent market distortions.
Dean’s stance that existing law already handles competition and that new regulation should only be introduced with clear evidence of a gap [24-33][40-44] contrasts sharply with Gabriela’s claim that active government investment and policy are needed to avoid monopolistic market distortions and to nurture innovation [53-55]. This divergence is unexpected given both speakers operate within policy circles but adopt opposite views on the necessity of new governmental intervention.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between government intervention to avoid market distortions and reliance on competition law is discussed in competition policy analyses and panel debates [S48][S49][S51][S68].
Treating AI compute as a strategic critical infrastructure versus viewing it as one element of broader inclusion without explicit critical‑infrastructure framing
Speakers: Dean, Speaker 1
AI compute infrastructure should be classified as critical national infrastructure. Inclusion in AI includes access to compute as part of a wider set of inclusion measures.
Dean explicitly categorizes AI data centers as future-critical infrastructure comparable to ports or railroads [111-115], while Speaker 1 mentions compute access only as one facet of inclusion without assigning it critical-infrastructure status [9-13]. The mismatch in framing (strategic asset versus inclusion component) is not anticipated given the shared focus on inclusion.
POLICY CONTEXT (KNOWLEDGE BASE)
Similar to the previous point, the strategic framing of AI compute as critical infrastructure is supported by policy handbooks and industry statements, while inclusion-by-design narratives treat it as part of a wider ecosystem [S58][S59][S71][S74].
Overall Assessment

The panel displayed moderate disagreement centered on the adequacy of existing legal regimes versus the need for new, ecosystem‑wide policy measures, and on the emphasis of risk‑centric regulation versus fairness‑centric governance. While all participants agreed on the overarching goal of inclusive, trustworthy AI, they diverged on the mechanisms—legal presumption, proactive tail‑risk rules, public investment, techno‑legal embedding, and the strategic classification of compute resources.

The level of disagreement is moderate but consequential: differing assumptions about legal sufficiency and the role of government could shape whether AI governance leans toward incremental adaptation of current law or toward a more transformative, investment‑driven framework. These divergences will affect policy design, allocation of resources, and the speed at which inclusive AI ecosystems can be built.

Partial Agreements
All three speakers share the goal of making AI inclusive and equitable, but differ on the primary means: Speaker 1 emphasizes a broad definition of inclusion covering technical and regulatory dimensions [9-13]; Gabriela stresses an ecosystem of public investment and institutional support [48-52]; Ivana focuses on embedding fairness and inclusivity through techno‑legal design and governance practices [102-107].
Speakers: Speaker 1, Gabriela, Ivana Bartoletti
Inclusion in AI goes beyond data representation to include compute access, standards, policy frameworks, and regulatory clarity. AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation. Governance should shift from pure risk management to engineering fairness and inclusivity while still addressing AI hazards.
Both agree that law and regulation play a role in AI governance, yet Dean leans on the sufficiency of current statutes, whereas Ivana advocates for translating legal requirements into technical safeguards, indicating a shared recognition of legal relevance but divergent implementation pathways [24-33][40-44] vs [70-76].
Speakers: Dean, Ivana Bartoletti
Existing legal frameworks should be presumed sufficient for AI, with regulators bearing the burden of proof to show inadequacy. AI governance should embed privacy, security, and legal safeguards from design through deployment, using a techno‑legal approach.
Takeaways
Key takeaways
– Existing legal frameworks are generally sufficient for AI; regulators must bear the burden of proof to show otherwise (Dean).
– Proactive governance is needed for low‑probability, high‑impact tail events such as catastrophic AI failures (Dean).
– AI governance should be a strategic capability that embeds privacy, security, and techno‑legal tools rather than merely a compliance checklist (Ivana).
– Inclusion must be pursued both as an ethical imperative and as a driver of economic competitiveness; it extends beyond data representation to access to compute, standards, and policy frameworks (Gabriela).
– Public‑private partnerships and government investment are essential to nurture the AI ecosystem, address market distortions, and support open research (Gabriela).
– AI compute facilities (data centers) should be classified as critical national infrastructure, with governments acting as partners in their development (Dean).
– AI technologies exhibit natural‑monopoly tendencies; policy must curb concentration to preserve competition while leveraging falling token costs that foster competitive dynamics (Dean, Gabriela).
– Education systems and teacher training are major blind spots; curricula need to be upgraded to prepare students and workers for AI‑augmented futures (Gabriela).
– Trust in agentic AI requires design mechanisms for human intervention, safeguards against hallucinations, and continuous monitoring in production (Ivana).
– Key blind spots identified: under‑estimating the strategic importance of frontier models and the lack of global consensus on AI "red lines" or ethical boundaries (Dean, Ivana).
Resolutions and action items
– Treat AI compute infrastructure as critical national infrastructure and explore government‑partnered investment, especially in the Global South (Dean).
– Encourage governments to apply existing liability and product regulations to AI, only adding new rules for demonstrated tail‑event risks (Dean).
– Develop AI governance capabilities within organizations that integrate privacy, security, and techno‑legal tools throughout the product lifecycle (Ivana).
– Promote public‑private partnership models to fund research, open innovation, and address market distortions caused by AI concentration (Gabriela).
– Launch initiatives to upgrade school curricula and provide teacher training on AI tools and pedagogy (Gabriela).
– Create mechanisms for human override and monitoring of agentic AI systems to ensure trust and mitigate model drift (Ivana).
Unresolved issues
– How to concretely balance self‑regulation with innovation‑first approaches without harming national competitiveness.
– Specific policy instruments needed to curb AI market concentration and prevent oligopolistic dominance.
– Mechanisms for achieving global consensus on AI "red lines" and ethical boundaries.
– Funding models and governance structures for scaling compute infrastructure in developing regions.
– Detailed implementation plans for integrating AI education and upskilling across diverse education systems.
– Operational guidelines for applying existing legal frameworks to emerging AI use‑cases.
Suggested compromises
– Presume existing law is sufficient for most AI applications, but allow targeted, proactive regulation for identified tail‑event risks (Dean).
– Frame inclusion as both an ethical duty and a competitive advantage, aligning social goals with economic incentives (Gabriela).
– Adopt public‑private partnership approaches that combine government funding with private sector innovation to mitigate market distortions (Gabriela).
– Recognize AI compute as critical infrastructure while avoiding overly restrictive control, positioning governments as partners rather than controllers (Dean).
– Leverage falling token prices to foster competition while implementing policies to counteract centralizing tendencies (Dean).
Thought Provoking Comments
We should presume that existing law is sufficient and place the burden of proof on those who want new regulation to show why current law doesn’t work.
Challenges the common assumption that AI is a regulatory vacuum and flips the default stance, prompting a more evidence‑based approach to new legislation.
Set the regulatory framing for the discussion, leading other panelists to consider how existing legal tools can be leveraged rather than defaulting to new, possibly heavy‑handed regulations.
Speaker: Dean
AI technologies are natural monopolies; we need government intervention to address market concentration and prevent a ‘lucky few’ scenario.
Introduces the economic concept of natural monopoly into AI policy, linking market structure to inclusion and highlighting the risk of oligopolistic control.
Shifted the conversation from pure regulation to ecosystem design, prompting later discussion on competition, diffusion of technology, and the role of public‑private partnerships.
Speaker: Gabriela
AI governance is a strategic capability that goes beyond compliance—it requires embedding privacy, security, and resilience into products, monitoring them in production, and giving people the right to intervene with agentic AI.
Expands the notion of governance from a checklist to an ongoing, technical‑legal practice, introducing concepts like trust stacks and agentic AI control.
Deepened the technical dimension of the debate, influencing subsequent remarks about techno‑legal approaches and the need for measurable accountability at scale.
Speaker: Ivana Bartoletti
Data centers that power frontier AI should be treated as critical infrastructure, like ports or railroads.
Reframes compute resources as essential public assets, moving the policy conversation toward infrastructure investment and sovereignty concerns.
Prompted a discussion on national strategies for compute, linking it to earlier points about inclusion, access, and the role of governments in building AI capacity.
Speaker: Dean
Inclusion should be seen both as an ethical imperative and a competitive strategy; market concentration breaks the ‘diffusion machine’ that spreads innovation broadly.
Synthesizes ethical and economic arguments, highlighting how concentration hampers the spread of AI benefits and calling for policies that boost diffusion.
Guided the panel toward concrete policy levers—tax, incentives, anti‑trust—to ensure equitable AI diffusion, and reinforced the earlier monopoly discussion.
Speaker: Gabriela
Rejecting frontier models is a blind spot; the most powerful future uses will come from capabilities we can’t yet name, and investing in them opens opportunities for the global south.
Challenges the notion that cheaper, smaller models are sufficient, emphasizing the strategic importance of cutting‑edge AI for future innovation and inclusion.
Reoriented the conversation toward long‑term investment in high‑performance AI, influencing later remarks about education, skills, and the need to prepare societies for advanced models.
Speaker: Dean
Our education system has not been upgraded to teach AI concepts or equip teachers; without this pipeline, the promised AI future cannot be realized.
Identifies a systemic blind spot—human capital development—that underpins all other policy discussions about AI readiness.
Added a concrete, actionable focus on curriculum reform and teacher training, linking back to earlier points on inclusion, skills, and national competitiveness.
Speaker: Gabriela
We lack globally agreed ‘red lines’ for AI; without shared boundaries, we risk divergent ethical standards and geopolitical friction.
Raises the geopolitical dimension of AI governance, pointing out the absence of international consensus on unacceptable uses.
Expanded the scope of the debate to include global coordination, influencing the final reflections on political economy and the need for shared norms.
Speaker: Ivana Bartoletti
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a generic talk about AI policy to a nuanced exploration of regulation, market structure, infrastructure, governance practice, and human capital. Dean’s challenge to the presumption of regulatory gaps and his framing of compute as critical infrastructure set a legal‑and‑strategic foundation. Gabriela’s emphasis on natural monopolies and the dual nature of inclusion as ethical and competitive introduced economic depth and highlighted concentration risks. Ivana’s articulation of governance as a strategic, techno‑legal capability and the need for global red lines broadened the conversation to operational and geopolitical layers. Collectively, these comments redirected the dialogue toward concrete policy levers, long‑term investment in frontier models, and the essential role of education, thereby deepening the analysis and outlining a comprehensive roadmap for inclusive, competitive, and responsibly governed AI.

Follow-up Questions
How can governments proactively govern tail events (low‑probability, high‑impact AI risks) that could have catastrophic consequences?
Dean highlighted the need for proactive governance of tail events, indicating that existing law may be insufficient and that specific policy mechanisms are required.
Speaker: Dean
What specific policy tools are needed to address market distortions caused by AI natural monopolies and concentration of power?
Gabriela described AI technologies as natural monopolies leading to market distortions and called for policies to mitigate these effects.
Speaker: Gabriela
How can organizations effectively monitor AI systems in production and intervene when harms arise?
Ivana emphasized the challenge of post‑deployment monitoring and the need for tools that allow timely intervention to prevent or mitigate AI‑related harms.
Speaker: Ivana Bartoletti
What design principles should constitute a "trust stack" for agentic AI, enabling users to intervene or override autonomous decisions?
She referenced her work on designing trust for agentic AI and the importance of giving users control over autonomous systems.
Speaker: Ivana Bartoletti
Should compute infrastructure (e.g., AI data centers) be classified as critical national infrastructure, and what regulatory regime should apply?
Dean argued that AI compute facilities are akin to ports or railroads and should be treated as critical infrastructure, prompting further policy development.
Speaker: Dean
Should inclusion be framed primarily as an ethical imperative, a competitive strategy, or both, and how can policies balance these perspectives?
Gabriela questioned the framing of inclusion and its relationship to competitiveness, suggesting a need for integrated policy approaches.
Speaker: Gabriela
What are the blind spots regarding reliance on frontier AI models versus cheaper, less powerful alternatives?
Dean identified a blind spot in assuming cheaper models are sufficient, stressing the importance of understanding the unique value of frontier models.
Speaker: Dean
How can education systems and teacher training be upgraded to prepare students and educators for AI readiness?
She pointed out the lack of pedagogical reform and teacher support for AI integration, indicating a need for systemic educational research and investment.
Speaker: Gabriela
How can the global community agree on AI "red lines"—unacceptable uses—and enforce them across jurisdictions?
Ivana noted the absence of worldwide consensus on AI red lines, highlighting a gap in international governance research.
Speaker: Ivana Bartoletti
What mechanisms can prevent concentration of power in AI and promote competitive dynamics while avoiding centralizing tendencies?
Dean warned about centralizing tendencies and the concentration of AI power, calling for anti‑trust and competition‑focused research.
Speaker: Dean
How can the diffusion of AI innovations be accelerated to reach lagging economies and communities?
She described the broken diffusion machine caused by market concentration and called for research into ways to speed up equitable diffusion of AI benefits.
Speaker: Gabriela

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Safe and Responsible AI at Scale: Practical Pathways

Session at a glance: summary, keypoints, and speakers overview

Summary

Opening the panel, Shalini Kapoor highlighted that enterprises and governments hold vast amounts of information in fragmented PDFs and digitised documents, creating an “information divide” that limits AI’s ability to provide accurate answers [1-7][8-15][16-18]. She illustrated this with the example of a Nagpur entrepreneur unable to locate a biotechnology subsidy because the relevant government notification remained hidden in a siloed document, which LLMs could not retrieve [7-13][14-15].


Rohit Bardawaj argued that before data can be considered AI-ready, the ecosystem must agree on a clear definition and a shared framework that includes cataloguing, machine-readable metadata, context files and business glossaries [33-46][160-205]. He emphasized that such a framework should be open and federated, avoiding a single data owner and ensuring a data steward orchestrates the ecosystem [181-184][185-194].


Prem Ramaswami described Google’s Data Commons as an open-source platform that transforms diverse datasets into a machine-readable knowledge graph, enabling an AI search layer that can combine global statistics with local queries [55-63][64-66][69-71]. He noted that the system is designed to be bottom-up, allowing users to overlay their own CSV data onto existing public datasets, thereby reducing risk for small businesses making location decisions [277-283][298-302].


Ashish Srivastava added that real-world solutions suffer from data fragmentation and that, to be truly AI-ready, data must be interoperable, contextualised through glossaries, and verified rather than merely declared [92-102][103-108][124-130]. He advocated for reusable policy artifacts (DPIs/DPGs) that can be automatically enforced at the API level, preventing reliance on manual human enforcement [228-236].
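
Automatic enforcement of a policy artifact at the API level can be sketched as a simple pre-request check. The JSON fields below (allowed_purposes, fields_allowed, and so on) are illustrative assumptions, not drawn from any specific DPI/DPG standard the panel cited.

```python
# Minimal sketch: a machine-readable policy artifact checked in code,
# so compliance does not depend on manual human review.
import json

POLICY = json.loads("""
{
  "dataset": "district_health_indicators",
  "allowed_purposes": ["research", "public-health-planning"],
  "fields_allowed": ["district", "year", "immunisation_rate"]
}
""")

def enforce(request: dict, policy: dict) -> tuple[bool, list[str]]:
    """Evaluate an API request against the policy artifact.
    Returns (allowed, granted_fields); disallowed fields are dropped."""
    if request.get("purpose") not in policy["allowed_purposes"]:
        return False, []
    granted = [f for f in request.get("fields", [])
               if f in policy["fields_allowed"]]
    return bool(granted), granted

ok, fields = enforce(
    {"purpose": "research",
     "fields": ["district", "immunisation_rate", "patient_id"]},
    POLICY,
)
# A request for a commercial purpose would be rejected outright.
```

Because the policy lives alongside the data as a reusable artifact, the same check can be applied uniformly across every API that serves the dataset.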


The participants agreed that LLM outputs can be unstable, prompting the need for benchmarks that test consistency across models and repeated queries [80-84][85-88]. A brief debate emerged over whether making alternative data AI-ready is primarily a governance issue or a technical one, with Rohit ultimately framing it as a governance challenge that requires standards and stewardship [162-170][176-180].


Shalini introduced the “data boarding pass” concept, a checklist-based certification that would allow organisations to certify data as AI-ready and facilitate secure, on-demand access [353-360][361-363]. She also referenced a “give-data-give-model” framework that ties incentives, value and exchangeability together to sustain a formal data economy [390-398][399-401].
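
The boarding-pass idea lends itself to a checklist evaluated in code; certification is granted only when every item passes. The checklist items below are invented for illustration, since the session did not enumerate them.

```python
# Hedged sketch of a "data boarding pass": each checklist item is a
# predicate over the dataset's descriptor; all must hold for certification.
CHECKLIST = {
    "catalogued": lambda d: bool(d.get("catalogue_uri")),
    "machine_readable": lambda d: d.get("format") in {"CSV", "JSON", "Parquet"},
    "glossary_attached": lambda d: bool(d.get("glossary")),
    "licence_declared": lambda d: bool(d.get("licence")),
}

def boarding_pass(dataset: dict) -> dict:
    """Evaluate every checklist item; certify only if all pass."""
    results = {name: check(dataset) for name, check in CHECKLIST.items()}
    return {"certified": all(results.values()), "results": results}

verdict = boarding_pass({
    "catalogue_uri": "https://example.org/cat/1",   # hypothetical entry
    "format": "CSV",
    "glossary": {"MSME": "Micro, Small and Medium Enterprise"},
    "licence": "CC-BY-4.0",
})
```

The per-item results make the pass auditable: a failed certification reports exactly which requirements were unmet.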


The panel concluded that while building AI-ready data infrastructures is a long-term journey, collaborative standards, open tools and incentive mechanisms are essential to unlock the massive potential of data for both India and the Global South [408-410].


Keypoints

Major discussion points


The fundamental problem of fragmented, non-AI-ready data - Enterprises and governments hold massive information in PDFs, legacy systems, and siloed databases that lack trust, safety, and interoperability, preventing LLMs from delivering accurate answers. Examples include an entrepreneur in Nagpur unable to find a biotechnology subsidy because the notification is stuck in a document [7-15] and the massive compliance-query load of 3,000 entities handling 5 million new queries per year [23-27].


Need for a shared, institutional framework to make data “AI-ready” - Panelists stress that institutions (e.g., MOSB/NSO) must define standards, create federated governance, and provide catalogues, metadata, context files, and business glossaries so data can be safely reused. Rohit proposes a consensus framework and a “core + aspirational” AI-readiness model [33-46]; later he outlines concrete steps: machine-readable JSON catalogues, metadata, context files, and knowledge-graph glossaries [160-224].


Practical use-cases that illustrate the value of AI-ready data - When data is structured and linked, it can power diverse applications: government-level statistical analysis, MSME location-risk modelling, agricultural decision support, education-domain translation via glossaries, and health-worker tools. Prem describes how Data Commons can de-risk a shop-owner’s location choice by overlaying private sales data with 50 k public datasets [298-302]; Ashish highlights a journey-centric solution that enforces data policies automatically [228-235].


Trust, consistency, and benchmarking challenges - LLMs can return different answers for the same query, raising concerns about reliability. Rohit cites a study where identical prompts produced divergent analyses [75-82]; Shalini notes ongoing work on a benchmark to measure answer stability across LLMs and users [84-88]; Ashish stresses the need for guardrails, human-in-the-loop risk assessment, and verification of public data [125-130][126-130].


Building a sustainable data economy with incentives - The panel proposes mechanisms such as a “data boarding pass” checklist, the G-I-V-E model (Guarantee, Incentive, Value, Exchangeability), and differentiated licensing (free for research, paid for commercial use) to motivate data contribution and ensure long-term funding. Shalini outlines the boarding-pass concept and incentive framework [353-361][391-399]; Rohit clarifies public funding and commercial licensing for NSO data [380-388].


Overall purpose / goal


The discussion aimed to diagnose why large-scale data in India remains “AI-unready,” to propose institutional and technical standards that make data safe, trusted, and interoperable, and to illustrate how such standards can unlock high-impact applications for government, MSMEs, and the broader public sector while laying the groundwork for a formal data economy.


Tone of the discussion


– The conversation opens with a concerned, problem-identifying tone, highlighting data silos and trust gaps.


– It shifts to a collaborative, solution-focused tone as participants outline frameworks, open-source tools, and federated governance.


– Mid-session the tone becomes cautiously critical, emphasizing inconsistencies in LLM outputs and the need for benchmarks and guardrails.


– Toward the end it turns optimistic and promotional, showcasing concrete use-cases, the “data boarding pass,” and calls to action for audience engagement.


Overall, the tone evolves from problem-statement to constructive planning, tempered by realism about technical limits, and concludes with an encouraging call for adoption and partnership.


Speakers

Ashish Srivastava


– Area of Expertise: AI, data interoperability, contextualization, verification, agentic AI, education and health solutions.


– Role/Title: Practitioner; leads the AI Innovation for Inclusion Initiative (A4I) Lab – a collaboration between Microsoft and IIIT Bangalore; former head of a Gen AI company. [S1]


Prem Ramaswami


– Area of Expertise: Data Commons, knowledge graphs, AI-ready data, open-source data platforms, AI-driven search.


– Role/Title: Google – Lead for the Data Commons project (open-source stack, knowledge-graph integration). [S2]


Shalini Kapoor


– Area of Expertise: AI-ready data governance, data economy, policy, trusted and safe AI deployment.


– Role/Title: Chief Strategist, XSTEP Foundation. [S4]


Rohit Bardawaj


– Area of Expertise: AI-readiness frameworks, data standards, governance, metadata and cataloguing.


– Role/Title: Representative of Mosby (statistical agency that calculates GDP at village/taluka level). [transcript]

Speaker 1


– Area of Expertise: (not specified)


– Role/Title: Moderator/host (unspecified). [S7]


Audience


– Area of Expertise: Varied (participants asking questions on data platforms, business models, etc.).


– Role/Title: Audience members / questioners. [S10][S11][S12]


Additional speakers:


(None identified beyond the list above)


Full session report: comprehensive analysis and detailed insights

The panel opened with Shalini Kapoor (Shalini) describing a fundamental bottleneck: enterprises and governments hold vast quantities of information in fragmented PDFs, legacy systems and isolated silos. Because artificial intelligence, and especially large language models (LLMs), “thrives on data” [2], while much of this data is “digitised but stays where it is” [6], AI cannot retrieve the answers users need. She illustrated the problem with a concrete case: an entrepreneur in Nagpur looking for a biotechnology-plant subsidy cannot locate the relevant government notification because it is hidden in a siloed document, and her queries to LLMs and conventional search tools return nothing [7-13][14-15]. The “information divide” is compounded by a lack of trust in sharing data with AI systems [5-6].


The scale of the challenge was underscored by an example of an organisation that serves 3,000 entities and must handle five million new compliance queries each year [23-27]. Such a volume of “new compliances” generated by multiple government bodies creates a massive problem that can only be bridged if the data is made interoperable, useful and AI-ready [28-29].


Rohit Bardawaj (Rohit) then shifted the discussion to the need for a shared institutional definition of AI-readiness. He asked whether the ecosystem already has a “uniform definition” and argued that a consensus framework, comprising a “core + aspirational” model, is essential [33-46]. According to Rohit, AI-ready data must be accompanied by a machine-readable catalogue, rich metadata, a context file and a business glossary; without these artefacts the data cannot be safely and reliably consumed by AI [160-205][184-205]. He further stressed that any framework should be open, federated and avoid a single data owner, with a designated data steward orchestrating the ecosystem [181-184][177-184]. Rohit also described the MCP server, a lightweight connector that lets any LLM plug into a catalogued dataset via a standard URI, analogous to a USB-C socket, enabling seamless integration without leaving the user’s workflow [221-240].
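
The artefacts Rohit lists can be pictured as one machine-readable catalogue entry. The schema below is hypothetical, not an NSO or MoSPI standard; the URI is a placeholder of the kind an MCP-style connector could resolve.

```python
# Illustrative sketch of a catalogue entry carrying the four artefacts
# (catalogue URI, metadata, context file, business glossary) that make
# a dataset consumable by AI under the proposed framework.
import json

catalogue_entry = {
    "uri": "https://example.org/datasets/village-gdp",  # hypothetical stable identifier
    "title": "Village-level GDP estimates",
    "metadata": {"publisher": "NSO", "frequency": "annual", "format": "CSV"},
    "context_file": "Methodology, coverage, and known caveats for the series.",
    "glossary": {"GVA": "Gross Value Added",
                 "taluka": "sub-district administrative unit"},
    "steward": "data-steward@example.org",  # the orchestrating data steward
}

def is_ai_ready(entry: dict) -> bool:
    """Core check: without all four artefacts, the data is not AI-ready."""
    required = ("uri", "metadata", "context_file", "glossary")
    return all(entry.get(k) for k in required)

print(json.dumps(catalogue_entry, indent=2))
```

Keeping the entry in JSON means any LLM tool-connector can fetch and parse it without bespoke integration work, which is the point of the USB-C analogy.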


Prem Ramaswami (Prem) presented Google’s Data Commons as a concrete, open-source realisation of that vision. Data Commons ingests diverse public datasets, converts them into a structured, machine-readable knowledge graph, and layers an AI search engine on top, thereby improving the chance that an LLM can answer a query correctly [55-58][59-60]. The platform is deliberately “federated”-each organisation retains local governance of its data while still contributing to a common graph [61-64]. Prem highlighted a bottom-up use case: a small retailer can upload its own CSV of store-level sales, which then automatically overlays with roughly 50 000 public datasets already in Data Commons, allowing the retailer to model location risk and de-risk decisions that would otherwise be “a costly shot in the dark” [277-283][298-302]. He also noted that AI can be statistically safer than human-only decisions, citing road-traffic-death statistics [144-147].
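The overlay idea can be illustrated in a few lines of standard Python. The districts, sales figures and population numbers below are all made up, and a real deployment would pull the public layer from Data Commons rather than a hard-coded dictionary.

```python
import io
import csv

# The retailer's own CSV (toy figures).
local_csv = io.StringIO(
    "district,monthly_sales\nNagpur,120000\nPune,95000\n"
)
# A public indicator the platform would supply; values are invented.
public_population = {"Nagpur": 2400000, "Pune": 3100000}

overlay = []
for row in csv.DictReader(local_csv):
    pop = public_population[row["district"]]
    overlay.append({
        "district": row["district"],
        "sales_per_1000_people": round(1000 * int(row["monthly_sales"]) / pop, 2),
    })

# Rank locations by penetration instead of guessing in the dark.
overlay.sort(key=lambda r: r["sales_per_1000_people"], reverse=True)
print(overlay)
```

The join key (here the district name) is exactly the kind of shared, standardised identifier the panel keeps returning to.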


Ashish Srivastava (Ashish) added a practitioner’s perspective on why data must be more than just structured. He described how fragmented health and education datasets impede integrated decision-making, and argued that AI-ready data must be interoperable, contextualised (through domain-specific glossaries), and verifiable because many public surveys are merely “declared data” without independent validation [92-102][103-108][124-130]. In his own work, Ashish combines glossaries with LLMs to improve translation of domain-specific terminology [102-106][112-118].
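The glossary-plus-LLM translation Ashish describes can be sketched roughly as follows. The glossary entries, the `toy_llm` stand-in and the placeholder scheme are all hypothetical; a real pipeline would call an actual translation model in place of the toy word map.

```python
import re

# Hypothetical domain glossary: English physics terms -> approved Hindi renderings.
GLOSSARY = {"refraction": "apavartan", "focal length": "naabhi antar"}

def toy_llm(text: str) -> str:
    # Stand-in for a real LLM translator (a toy English->Hindi word map);
    # the __Tn__ placeholders pass through it untouched.
    word_map = {"what": "kya", "is": "hai", "the": "", "of": "ka"}
    return " ".join(filter(None, (word_map.get(w.lower(), w) for w in text.split())))

def glossary_translate(sentence: str, glossary: dict, llm) -> str:
    """Lock domain terms to glossary renderings before general translation."""
    terms = list(glossary.items())
    protected = sentence
    for i, (term, _) in enumerate(terms):            # shield domain terms
        protected = re.sub(term, f"__T{i}__", protected, flags=re.IGNORECASE)
    translated = llm(protected)                      # general translation
    for i, (_, target) in enumerate(terms):          # restore approved terms
        translated = translated.replace(f"__T{i}__", target)
    return translated

print(glossary_translate("What is the focal length of the lens", GLOSSARY, toy_llm))
# kya hai naabhi antar ka lens
```

The point of the design is that the frontier model never sees, and therefore never mangles, the domain-specific vocabulary.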


The panel then turned to the reliability of AI outputs. Rohit recounted a recent paper by two undergraduates that showed identical prompts fed to the same LLM on the same dataset produced two different analyses, underscoring the need for benchmarks [75-82]. Shalini confirmed that her team is developing a benchmark to test answer stability across multiple LLMs and repeated queries, noting that “the same question … asked multiple times … can give different answers” [84-88]. Ashish reinforced this concern, stating that LLMs should be treated as a small component (10-15% of a solution) and that robust guardrails, human-in-the-loop risk assessment and verification are essential to maintain trust [125-130].
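A minimal version of such an answer-stability benchmark can be sketched as follows. The scoring rule (share of answers agreeing with the modal answer) and the sample responses are assumptions for illustration, not the benchmark Shalini’s team is actually building.

```python
from collections import Counter

def normalise(answer: str) -> str:
    """Collapse case and whitespace so trivially identical answers match."""
    return " ".join(answer.lower().split())

def stability_score(answers: list) -> float:
    """Fraction of repeated answers matching the most common (modal) answer."""
    counts = Counter(normalise(a) for a in answers)
    return counts.most_common(1)[0][1] / len(answers)

# Five repeated runs of one question; the figures are invented,
# and in practice each answer would come from a live LLM call.
runs = [
    "Moong dal averaged Rs 92/kg", "moong dal averaged rs 92/kg",
    "Moong dal averaged Rs 92/kg", "Rs 88/kg on average",
    "Moong dal averaged Rs 92/kg",
]
print(stability_score(runs))  # 0.8
```

A production benchmark would need semantic rather than string matching, but even this crude score makes instability visible and comparable across models.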


When asked whether making alternative, secondary data AI-ready is a technical or governance problem, Rohit conducted an audience poll and concluded that it is primarily a governance issue that requires standards, a federated stewardship model and clear policy before any technical solution can succeed [162-170][176-180]. He reiterated the need for a data steward-potentially the National Statistics Office (NSO)-to catalogue datasets in machine-readable JSON, attach metadata and context files, and standardise codes and dimensions [181-221][184-221].
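The metadata-context-glossary chain Rohit outlines can be illustrated with a toy resolver. The `mcp://` URI scheme, the file names and the dataset contents are invented for illustration and do not describe any actual NSO system or the MCP server mentioned later.

```python
# Hypothetical layout: the dataset's four artefact files published at a
# known URI, as a connector serving an LLM might fetch them.
RESOURCES = {
    "mcp://nso/wpi_monthly": {
        "catalog.json":  {"dataset": "wpi_monthly", "dimensions": ["time", "commodity"]},
        "metadata.json": {"frequency": "Q", "time": {"role": "temporal"}},
        "context.json":  {"frequency": "resolve via glossary.json"},
        "glossary.json": {"frequency": {"Q": "quarterly"}},
    }
}

def resolve_code(uri: str, field: str, code: str) -> str:
    """Follow metadata -> context -> glossary to decode a bare code."""
    files = RESOURCES[uri]
    assert files["metadata.json"][field] == code  # metadata holds the opaque code
    assert field in files["context.json"]         # context says where to look it up
    return files["glossary.json"][field][code]    # glossary supplies the meaning

print(resolve_code("mcp://nso/wpi_monthly", "frequency", "Q"))  # quarterly
```

Without the context and glossary files the machine is left holding the bare code “Q”, which is exactly the “I don’t know what frequency means” failure Rohit complains about.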


Shalini highlighted the tension between Retrieval-Augmented Generation (RAG) architectures and pure LLM approaches, emphasizing that data sovereignty and the need to keep sensitive data under local control prevent a single-model solution [310-322].
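The RAG side of that trade-off can be reduced to a few lines: documents never leave the local store, and only the best-matching snippet travels to the model together with the question. The word-overlap scorer and the sample documents below are assumptions for illustration; real systems use embedding-based retrieval.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the k most relevant documents from the local store."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

local_docs = [  # sovereign data: never shipped to the model in full
    "MSME notification: biotechnology plant subsidy for women entrepreneurs",
    "Monthly wholesale price bulletin for pulses",
]
query = "subsidy for a biotechnology plant"
context = retrieve(query, local_docs)
prompt = f"Answer from context only.\nContext: {context[0]}\nQ: {query}"
print(prompt)
```

Only the retrieved snippet crosses the trust boundary, which is why RAG fits the data-sovereignty constraints the panel keeps raising better than fine-tuning a remote model on the whole corpus.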


She introduced the “data boarding pass”, a checklist-based certification that signals a dataset has met AI-readiness criteria (catalogue, metadata, context, glossary). Once certified, the dataset can be instantly onboarded by B2B users, policymakers or researchers [353-363]. Shalini also presented the GIVE framework (Guarantee, Incentive, Value, Exchangeability) as a model for a sustainable data economy, arguing that incentives are needed for data owners to contribute and that value can be monetised while ensuring exchangeability [380-389][391-399]. Rohit clarified that the NSO is publicly funded, so research-use data is free, but commercial use is subject to a policy-driven pricing structure [380-388].


An audience member raised concerns about the business model for maintaining high-quality data platforms. The response highlighted that public funding covers research use, while commercial licences generate revenue, and that the GIVE model provides a “formalised” mechanism for pricing and incentives [372-376][380-388][391-399]. Shalini further noted that without a clear incentive structure, “the data economy is actually running without a formal mechanism” [390-399].


The discussion also revealed points of agreement and divergence. Both Shalini and Rohit agreed that statistical data collection is bottom-up, i.e., gathered at the field level rather than imposed centrally [269-276]. Prem argued that, despite imperfections, AI can be statistically safer than human-only decisions [144-147]; Ashish warned that AI should remain a minor (10-15%) component of any solution, requiring extensive human oversight [125-130]. Technically, Rohit’s checklist-centric approach (catalogues, metadata, context files) differed from Prem’s emphasis on a knowledge-graph-centric, federated stack [184-221][55-64].


In conclusion, the panel converged on five pillars for AI-ready data: (1) a common, detailed definition that includes cleaning, linking, safety, trust, machine-readable catalogues, metadata and context files; (2) a governance-first, federated stewardship model to avoid single-point ownership; (3) the necessity of benchmarks and human-in-the-loop guardrails to ensure trustworthy AI outputs; (4) the importance of domain-specific glossaries or knowledge graphs for contextualisation; and (5) a sustainable data-economy model that aligns incentives, value and exchangeability. Action items include drafting the AI-readiness framework slide deck (Rohit), publishing machine-readable catalogues and glossaries (Rohit), extending Data Commons with contextualisation features (Prem), formalising the data-steward role and commercial licensing policy (NSO/Rohit), developing the answer-stability benchmark (Shalini), and promoting the data-boarding-pass and GIVE mechanisms to catalyse a formal data market (Shalini). The discussion closed with an invitation to visit the exhibition booth for a live demonstration and a reminder that building AI-ready data infrastructures is a long-term journey that must begin now to avoid future “holes in the rails” [408-410][401-406].


Session transcriptComplete transcript of the session
Shalini Kapoor

We have been doing deep work on fragmented data silos. As you all know, AI thrives on data. And today, most of the LLMs, what they have done is, they’ve definitely scraped the internet and they’re doing really well. But the value of the answer an LLM would give depends on what it can fetch from the actual data, which means in enterprises and organizations, there’s a wealth of information. There’s a wealth of information stuck in PDFs, stuck in documents, which people have a fear of giving to AI. So there is a fear, there’s a lack of trust today, and that data stays where it is, digitized. So, for example, there could be an entrepreneur, say in Nagpur, wanting to know about the scheme applicable for the biotechnology plant that she wants to put up in Nagpur.

Now, if you see, the MSME ministry has a scheme for her, for women, for biotechnology. And, you know, a very good subsidy is available. But where is it stuck? It’s stuck in a government notification which came out, which she’s not aware of. And what she is doing is she’s actually going to LLMs and asking that question, and she’s not getting it. She’s also searching it in various places. She doesn’t get it. So that’s the divide, the information divide, which is existing. And the information which is there, stuck in documents or even in digitized form, has to be made AI ready, so that in a safe, trusted manner, and these two are very important, safe and trusted, the data can be linked, made useful and then made available.

Now, this is a long journey. It’s not an easy journey, because the data journey is about how you clean the data, make it ready, link it, make it relevant, make it useful, and then present it in a manner that preserves choice, because we live in the age of choice, right? We don’t want to be locked into anything particular. So that’s the data problem that we have in front of us. The opportunity is humongous. I’ll give you an example: I’m talking to an organization which serves 3,000 entities, and those 3,000 entities actually manage 5 million new compliances in a year.

They have those kinds of queries, 5 million queries on new compliances. Forget existing compliances, because there are new compliances which get generated by the government, by various bodies, and then they have to search. So the problem is humongous, and it can be bridged. It can be bridged, but we have to think about how to make data interoperable, useful and AI ready. So with that background, I’d like to get into our panel and talk to some of the experts that we have today. My first question is to Rohit ji, who is from MoSPI. India generates a vast amount of statistical and administrative data. MoSPI, for all of you, calculates the GDP for India. They have the source of all the data at village and taluka level. So the data is there. But as you think about making data AI ready, what do you think is the responsibility of an institution, and yours is an institution, to make the data trusted, safe and available to all?

Rohit Bhardwaj

Thank you, Shalini ji. Good morning, everyone. So, trusted, safe and AI ready for everyone. I would like all of you to take a step back on this, and let us just understand: do we have a uniform definition of what AI readiness is at this point in time? Do we have one? And I’ll not say that it’s not there in the ecosystem, it’s there in the ecosystem, but do we have an agreement about it? So there are two issues we need to understand when we talk about AI readiness of data. One is that, so let me just go back to a conversation I had today with one of my colleagues over a WhatsApp group, you know, we are all very active there.

So one of my papers has just been accepted at one of the largest conferences, and it’s about AI readiness of data, and he asked me, what’s so great about it? So I asked, why, what is not so great about it? He said, I put Bangla into ChatGPT and it completely understands it. So what’s new that you are doing? The point I’m trying to make is that people are not aware of what it takes to make data AI ready. And then he said, no, but it’s not understanding, and he talked about one of the dialects of this country, and we have a huge number of dialects, Shalini ji, and he asked me, how do I train ChatGPT on this dialect?

I said, it’s not my job, it’s Sam Altman’s job. So the issue here is that we don’t know. And that is the biggest responsibility of our institutions like MoSPI: to make people aware of what AI readiness is all about. And then, if I start talking about AI readiness meaning there should be a context file, there should be semantics, there should be metadata, for many of us it would not make sense. So the first idea is to create an agreed framework. It’s not about my way or the highway; all of us work together, create that framework, and put it up for people to know.

The first thing I would do, and I plan to do it literally, is try to create a slide deck showing what AI can see and what a human can see. If my folder has 10 versions of a budget, 1, 2, 3, 4, 5, 6, and I ask a question from that folder, some answer will come from budget one and some answer will come from budget two, because unlike a human, who is focused on this question, AI is designed to scan the entire thing available. So it’s a big difference between human and AI: I can be focused; AI, once I give a thing to it, will just scan everything it has in its domain. So, not taking much of your time, the starting point should be that we create this framework, we have a shared understanding, we have a core AI-readiness part and an aspirational AI-readiness part, and work on that.

Shalini Kapoor

Yeah, I think that’s very relevant, because you cannot leapfrog into everything. You can have the aspirational part, but the foundation is very, very important, and everybody joining that foundation exercise is really important. I’ll go to you, Prem, and talk about Data Commons, which aims to make public data more accessible and usable. You’re from Google, and you have put all this in open source. You’ve been working on making US Census data available. Tell us some more about your experiments and how Data Commons is ready or prepared to work on this challenge.

Prem Ramaswami

Thank you for having me here on this panel today. I think one of the areas I’ll start with is the importance of coming to that understanding on AI-ready data, while recognising that the field itself is moving quite quickly at the same time. Whatever agreements we come to today, in six months it feels like we’re dealing with a brand new technological landscape. What Data Commons tried to do was say: can we get our data in that machine-readable format, which means structured, which means machine-readable metadata also, and a format whose specification is not stuck behind a 500-page PDF, right? Can we make it in a way that the machine can understand it, interpret it, and then use it?

Our theory behind this is that the idea of a knowledge graph from that data, combined with the large language model, gives you a much better chance of success in answering your question. So at Data Commons, we try to bring multiple data sets globally together in a common knowledge graph and then put an AI search engine on top of it so that you can quickly access that data. You can play with this yourself at datacommons.org. But what we did is we open-sourced the entire stack, because this idea that the data is centralized with one source is the dangerous part, and it shouldn’t be, right? The data should be federated. It should be located at every organization and governed locally by the organizations that are using it.

And so one of the things we’ve done by open-sourcing that stack is allowed, for example, the United Nations Statistical Department to use Data Commons as their back end. And so, you know, the UN SDGs, WHO data, ILO data, and so on and so forth, are all stored in this common interoperable database now, where instead of a data analyst spending 80% of their time renaming column headers, they can actually focus on the data analysis, so that we can get the impact and the outcomes we want to see. Hope that helped answer the question.

Shalini Kapoor

Yes, yes, no, absolutely. I’ll poke you a little bit more to understand: on Data Commons, what’s the vision you have?

Prem Ramaswami

So a very simple vision, right, which is: make data-aware decision-making the easy answer to take. Right now, the majority of the world is flying blind. Whether you’re one of those 74 million MSMEs in India, you can’t afford a bevy of computer scientists and data scientists; you pay a tax to play with any data. If you’re a policymaker thinking about climate change, poverty, education, health, these are holistic problems. It’s no longer “I can go to one ministry, pull one spreadsheet and solve poverty.” I need to endemically understand how education, health outcomes, income and economy, how all of these affect poverty locally, right? And that’s the problem we have today: the world is a multi-dimensional problem. The other problem is our brains are not inherently multi-dimensional. Our brains are great in three dimensions. You add a fourth dimension, which is time, and we’re okay, right? But look at climate change: you add time and it’s greater than our lifetime; we can’t think about it, which is why we’re not solving it. The majority of problems are 50- or 60-dimensional problems. Machines are really good at this, by the way.

And humans are good at using tools that are good at doing things we’re not. And this is where we have to approach AI as a tool we can use. Not as the answer, but as a tool we can use to derive the answer, to supplement our brains in the areas we’re not.

Shalini Kapoor

I’ll poke you a little bit more, but later on.

Rohit Bhardwaj

Shalini ji, I just want to take a second stab on that, just a quick interjection. I’m a statistician, so I’ll be very happy if some of my work can be done by AI, you know, all those large language models. I just read a paper this morning, written by two undergraduates from a Canadian university. And they said, and they proved it, that if you give the same prompt to AI with the same data set, it gives you two types of analysis. So this is something I just wanted to flag: we should not be really gung-ho about things which are still untested. But yes, I would be the first to adopt AI and use it for my work, but it needs to be, as you rightly put it, trustworthy.

Shalini Kapoor

Yeah, I’ll just comment on this, the stability of an answer, that’s what you’re talking about. We are actually working to create a benchmark on this, because we are seeing the same thing. Amul AI was launched this morning by the Prime Minister, and the same applies to Bharat Vistar: if you ask the same question multiple times across LLMs, and also ask one LLM many times via different farmers, in both cases you get different answers. Can we make that a benchmark? That’s what we are working on, because this is a benchmark which is really needed on the ground, right? So that’s what I wanted to comment.

I’ll go to Mr. Ashish. You’re from the industry, and you work with IIIT Bangalore. Tell us more about the research in the data area, plus how institutions can help build it all together.

Ashish Srivastava

Right. So I think my perspective is more as a practitioner, because for the last almost three decades I’ve been a solution builder. So I have seen data not from the data side, but from the solution side, trying to exploit it, trying to use it for solutions. And I’ll come to the institution part of it. But, you know, when I look at the data and the challenges associated with it: for the last 10, 12 years I’ve been working on AI and digital for social problems, like women and child health, where I worked for almost a decade. Now, one of the things I realized is that the world is fast moving to where you don’t manage a transaction.

You manage a journey. Okay, and that is the agentic AI and all those things that we are talking about. Now, when I was working a few years back on women and child data, I realized how fragmented it is. Take the two main data sets: if you look at a child’s health, his anthropometric data and his nutrition data are with Women and Child Development, through their Anganwadi program. The birth data, the immunization data and a lot of other data are with the Health and Family Welfare department. If you have to have integrated decision-making about what needs to be done for that child, you have to look at both data sets, but that burden of orchestration falls on the person who is building the solution; the data does not by itself flow through the workflow. And that is one of the biggest problems we have to solve: we look at data sets in isolation, but we don’t look at how data flows through the process. The second thing is contextualization. We have all read the book, at least some of us, that says raw data is an oxymoron: data always resides in a particular context, with some standardization associated with it, so that you can make some sense out of it.

Now with education, where we have been working recently, we realized that LLMs are becoming increasingly good at translation, at least with the main languages, not with all the dialects. But the moment they hit any domain-specific vocabulary, that’s when they start failing. Even a class 6 physics question, all these frontier models are not able to properly translate. So we came up with a solution of using a glossary combined with the LLM, so that it does a decent job on the overall translation while the contextualization stays transparent to the user. And the third thing which I have faced a lot is that when we talk of public data, a lot of it is declared data and not verified.

Not verifiable data. Especially when a lot of planning depends on surveys, and a lot of survey data is actually declared data: whether you have hypertension or not, yes or no; whether you have this problem, yes or no. What is the verification? No doctor has actually verified that, and you are going to make a decision based on that. So in my opinion, AI-ready data has to solve these three big problems: it has to be interoperable, it has to be contextual, and, the third problem I was mentioning, it should be verifiable, and governable as an extension of that.

Shalini Kapoor

Very relevant, I think you have posed the right challenge. So Prem, I am going to come to you. Let’s just pick one of them, which is contextualization, because I am increasingly seeing that domain information is needed and people are creating these glossaries. Even in agri, when we had to roll out Mahavista, we actually created a glossary of 5,000 terms, in Marathi, because it has to be in Marathi with those terms being used. And I know we did some experiments and we have created a sandbox environment. You have done it for India, so why don’t you explain how contextualization and domain can be added to Google Data Commons and how it can be helpful.

Prem Ramaswami

I think this idea of contextualization and localization is very important. At the end of the day, these are large language models, language being the key word there; they’re not data models. And so, to what Mr. Bhardwaj said earlier, what you want to be able to do is use them to write code to manipulate data, because code is language, but you don’t necessarily want them producing data on their own. And one of the problems you have today is that those large language models are essentially created largely off the web, which has its own biases inherent in it, both language- and locality-wise. And then on top of that, there’s the example you used of the full folder of all the budgets, right?

The example I like to use for this is actually if you ask a large language model about a celebrity that recently had a breakup, they’ll tell you they’re together because it doesn’t know what just happened over the last month, right? It’s very sad. And so this is where you can use, though, the combination of, you know, you called it a glossary, I always call it a knowledge graph. What is that factual basis of information that I can put together? Now, it’s always going to be a subset of the whole, right? I might be able to cover maybe 0.1% of the world’s information with a knowledge graph. But if I can ground it in those facts, can I then utilize the intelligence of the large model to then help me produce some knowledge from those facts or fill in the gaps in those facts?

And so this, I think, is an opportunity that we actually have in the technology to move it forward. This is one of the areas that we’re actively working on as a team. But again, to do that, you first need that glossary of facts, right? This is where having that knowledge graph of statistical data, even if imperfect at this moment, because it is survey collected. It is dependent on the quality of the question asked, the error bar shown, the quality of that metadata, so on and so forth. But it is a starting point from which you can get more information and use that intelligence to potentially even find those outliers or areas that don’t match what you might be hearing on the ground.

So that’s the opportunity I think that we have.

Ashish Srivastava

I absolutely agree with you, but I will say it in more direct terms. Sometimes we feel that LLMs, or in a previous generation, the AI models, are the solution. They are not the solution. They are only one of the inputs to the solution, and they comprise 10%, 15% of what you’re trying to do. It is what the rest of the 85% is doing. Yes, the LLM will give different answers; how are you compensating, with guardrails, human in the loop, risk assessment? These are the tools which are available today. Because at the end of it, it’s a probabilistic model, come what may. And I was talking to a mathematician from MIT, and he explained why it will never become perfect. That fact is grounded in mathematics: it cannot ever become as perfect, as consistent every time, as we want it to be, because then you are taking the main source of its creativity away from it. So what you have to focus on is outside, not inside. That’s all I ever wanted to say.

Prem Ramaswami

I agree with you completely, and I started by saying it’s a tool, right? And we use tools to supplement ourselves, not to replace ourselves; to supplement our knowledge, not to replace our knowledge. So I do agree with you, it’s a tool. But we have to be careful not to throw the baby out with the bathwater here, in the sense that that tool now makes things available to the average person. It upskills the average person in a way that they couldn’t do themselves before.

So if we immediately go to put guardrails, prevent access, things like that, we’re preventing a large part of society. And I’ll say as somebody who worked on Google Search for many years, there were many arguments in Google Search that we, for example, shouldn’t put health information on search. Because the average person isn’t smart enough to be able to deduce information about their own health from Google. But the average person can’t afford a doctor also, right? There are endemic problems in society that prevent you from doing that. So does the answer to that question suffer, or does the answer to that question do less harm and give people a pathway that they can learn from? And so that’s an important question to ask ourselves here as we think about AI, which is, yes, it is imperfect at this moment.

Can we understand? Can we educate? Can we work inside the system that exists? We can’t ignore it either. We can’t say it made one mistake, therefore I will not use it. And I will also call out that the imperfection of us as humans is also very much there, right? There are many times we look at these systems, you know, at a Waymo autonomous vehicle, and we said, look, it had six accidents last year. But there are 30,000 deaths from car accidents in the U.S. a year, right? So statistically speaking, this is still much safer. And so these are the sorts of examples we have to look at: understand where to apply it, how to apply it, and what the overall societal good is from using it.

Shalini Kapoor

Yeah. No, thanks. I think it’s a very relevant discussion that we are having, and there’s always a fight between whether we should have a RAG architecture or just give it all to the LLM to do, because it has more capacity and more, you know, GPU. But either-or is not possible, because you may not want to give the data away; you may want to keep the data, and that’s where sovereignty comes in a lot. And this has been a discussion in the last two days, in most of the panels that I have been in: that you want to keep your data.

Countries want to keep the data with themselves, and they actually don’t want to train, because with the choice of LLMs, you want a lot of choice and you want to use this one here, that one there, everywhere. So I’ll come back to you, Rohit ji. We talked about administrative data and you talked about a framework. My question is: alternate data, secondary data beyond administrative data, how can that also be brought in? And the foundational framework you talked about, if that framework is adopted by industry: one, is it possible? And two, what kind of data economy can it start?

Rohit Bhardwaj

So, this is early morning, let me take an audience poll on it. How many of you think that what Shalini asked is a governance issue? Just raise your hand if you feel it’s a governance issue. Anyone who feels it’s a governance issue? How many of you feel it’s a technological issue? What she asked was: how to make alternative data ready for AI. So how many of you feel it’s a technology issue? There are no prizes for it, there’s no punishment for it, so feel free to raise your hand the way you think. Okay. So I am with that gentleman; I feel it’s a governance issue.

And I’ll also work on it. So what are we talking about? We are talking about data generated from different sources, be it alternative data sources, be it administrative data sources. My co-panelist just talked about getting data from different sources not aligned to each other. So it’s a governance issue which we need to understand first. And, of course, I completely agree with Shalini when she said that we need a federated model, or perhaps Prem said that. We need a federated model; there cannot be one sole owner for the data of this country, or for that matter for any country. But somebody needs to play the role of data steward, somebody needs to orchestrate this data ecosystem, and, being from NSO, I have my own biases, so I’ll say NSO can do it, but of course that’s something for the people to decide. Now let’s understand this: what do we need when we need AI-ready data? First, a cataloguing of it, and I’m just going to take one minute on it. You should have everything catalogued, any industry, any government organization: this is my data set, these are the indicators, these are the definitions, and so on and so forth; I’m not getting that deep into it. You need a catalogue of your data. Second, that catalogue should not be a PDF; that catalogue should be, as she was saying, machine-readable, a JSON file probably. There are many other formats, but let’s talk about a JSON file.

Second point, you should have metadata for it. If you don’t have metadata for it, I mean, the other day I was on another panel with Prem, and I said the thing which irritates me the most is lack of metadata. I don’t know, I’m driving blind. I don’t know what the word frequency means; it may mean hundreds of things. So you should have metadata, and again not in PDF. Whatever I’m talking about, I mean JSON or XML, there are so many ways, but machine readable, let’s put it that way. Third, you should have a context file. So now the machine has read it, but it wants to know: where do I find the meaning of frequency?

So the machine should have a context file where the source is written: you go there and see, you will find the meaning of frequency. So the metadata will not have the meaning of frequency; it will only write frequency means quarterly. The machine now needs to understand what that frequency means. So that’s what she was talking about, and Prem again was talking about it. That brings us to: we need to have a business glossary. He also talked about a knowledge graph, which is just a sophisticated version of a business glossary. That we need to have. So once we have sorted this out, we need to ask: what type of codes are we working with?

The gentleman just beside me talked about two data sources using different codes for the same thing. So then we have to standardise those codes. And lastly, we have to structure our data. Data needs to go into a structured database. It should be defined, and that’s nothing new I’m saying: defined by dimensions, defined by attributes, defined by the role of each field. So “time” means temporal. You can’t just write “time” and expect an LLM to understand what time means; you have to say that time is the temporal dimension. And once you have these ready and available, there are two use cases. And just one last quick point. One is: am I using it for my own use case?
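Declaring each field with its role, in the spirit of an SDMX-style structure definition, might look like this minimal sketch. The field names and roles are illustrative assumptions.

```python
from dataclasses import dataclass

# Minimal sketch of a structure definition: every field declares its role,
# so "time" is explicitly the temporal dimension. Names are illustrative.
@dataclass
class Field:
    name: str
    role: str     # "dimension", "attribute" or "measure"
    concept: str  # e.g. "temporal", "geography", "unit_of_measure"

structure = [
    Field("time", "dimension", "temporal"),
    Field("state", "dimension", "geography"),
    Field("unit", "attribute", "unit_of_measure"),
    Field("cpi_value", "measure", "index"),
]

# An LLM (or any program) can now ask which field carries time.
temporal = [f.name for f in structure if f.concept == "temporal"]
print(temporal)  # ['time']
```

With roles declared, a model never has to guess whether "time" is a timestamp, a duration or a label.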

Am I training my own model on it? Then I can put all of these in one file and feed it to my model. But if I’m expected to create an MCP server for my database, then I have to create separate files and put them at a URI or URL where any model can go; the connector directs the model to that resource, and then things happen. And all of this I’m saying from personal experience, and Shalini-ji knows about it, from when we developed our own MCP server.

Shalini Kapoor

Loving it. The amount of reach-out which has happened to use the datasets: you can actually ask a question like, how has the price of moong dal moved over the last year, or quarter-wise, or month-wise? That capability is there now. And it has happened because the data was always there; they do the calculation of the wholesale price index, the commodity price index, so the data existed. It’s just that now it is AI-ready for people to consume, take and ask questions of, and it is connected to Claude and ChatGPT. Ashish, I’ll go to you, building on what Rohit-ji stopped at, which is the use cases, and because you come from the solution side of it. How do you visualise and imagine solutions and use cases combining, say, administrative data and alternate data? I’m not going into personal data, because there’s a lot of consent involved there, but at least the many secondary sources of data which are available: how do we combine them and make them more powerful?

Ashish Srivastava

I think, as you rightly pointed out, I come from the solution perspective, and with agentic AI coming in, we look at every solution in the form of a journey. We are moving past the mechanism of a point solution, where you ask and it reverts with an answer. Now the use case has to decide, at each part of the journey, what data you need, and that will dictate whether it is additional datasets from outside or a public dataset. The only challenge I see here is: who is accountable for that data? The policies need to live in the solution, at the API level, at the policy-engine level, travelling along with the solution, and they should be enforceable automatically.

If you are thinking that a human being will enforce that policy, it will break, and in no time. So what we are trying to do is create those reusable artifacts, as DPIs or DPGs, it will fall into one of those categories, which allow those policies to be set for a dataset in an easy, reusable way, so that everybody doesn’t have to recreate those kinds of policies from scratch. That’s the way to move forward.
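One way such an automatically enforceable policy could look is a policy object attached to a dataset and checked in code at the API level, rather than by a human. This is purely a sketch; the policy vocabulary and dataset name are illustrative assumptions, not an actual DPG specification.

```python
# Sketch of a data-access policy enforced in code rather than by a human.
# The policy vocabulary and dataset name are illustrative only.
POLICIES = {
    "cpi_microdata": {
        "allowed_purposes": {"research", "policy"},
        "commercial_requires_licence": True,
    },
}

def authorise(dataset: str, purpose: str, has_licence: bool = False) -> bool:
    """Return True only if the dataset's attached policy permits this access."""
    policy = POLICIES.get(dataset)
    if policy is None:
        return False  # fail closed: no policy, no access
    if purpose in policy["allowed_purposes"]:
        return True
    if purpose == "commercial" and policy["commercial_requires_licence"]:
        return has_licence
    return False

print(authorise("cpi_microdata", "research"))                      # True
print(authorise("cpi_microdata", "commercial"))                    # False
print(authorise("cpi_microdata", "commercial", has_licence=True))  # True
```

Because the check runs on every request, the policy travels with the solution and cannot be forgotten the way a manual approval step can.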

Shalini Kapoor

You mentioned your lab. I’m sorry, I cut you short there. Tell us more about your lab. What work are they doing?

Ashish Srivastava

So that’s my current job. Previously I was heading a Gen AI company, by the way, and I will talk separately later about the PDF challenge, which we thought we had solved; we hadn’t fully, but we were on the way. The current lab, which is very exciting, is a collaboration between Microsoft and IIIT Bangalore. A4I stands for AI Innovation for Inclusion Initiative. The idea is not to run pilots, doing a small thing here, a diagnosis there; not that. It should be population scale, and we want to launch it as a DPG so that it can be used widely. So we are working in the school-education area.

We are working with teachers in terms of making their lives easier. We are working on accessibility: how blind children can be taught STEM so that they can hope to become a physicist or a mathematician. Today it’s very difficult even to read a book. And the third one is working with last-mile health workers. Our current solution is a RAG-based AI combination, but we are looking at exactly the problem you mentioned, that it is either this or that; I think there are plenty of answers in between. That is what we are exploring.

Shalini Kapoor

Thank you. Thank you so much. Prem, I’ll again build on the concept we were discussing, the use cases. I just want you to paint a picture: if you have data in knowledge graphs, like you mentioned, and a data commons is present, what more use cases become possible with secondary data? How can India, and not just India but the Global South, benefit from this? And please feel free to use the use cases you have built in your sandbox environment as examples.

Prem Ramaswami

Yeah, I’ll give two very different examples. These might not be exactly where the sandbox is today, but where it could go tomorrow. Right. One is: at the end of the day, the Ministry of Statistics does a lovely job collecting as much information as it can. The whole ministry does; the government does. It’s a top-down data collection.

Shalini Kapoor

I’m sorry, I’ll just interrupt you. I think Rohit-ji will say it’s not top-down; at the field level, it’s actually bottom-up.

Prem Ramaswami

That’s fair, that’s fair.

Shalini Kapoor

He will say that, it’s bottom -up.

Prem Ramaswami

That’s fair, that’s fair. You’re correct, it’s bottom-up. That said, we also have alternate data sources. Sometimes they supplement the official data and further show that, yes, the data collected is correct. At times they disagree. And those disagreements are also interesting to understand, to the point of: where is the survey question flawed, or where is civil society seeing something, or has visibility into something, that we don’t have access to? So the more of these datasets come together, the more these points of friction appear, and again, this is where the human intelligence comes in. Show me the points of friction. I have a haystack full of needles; which needles do I pay attention to? Right? So that’s one example, if I’m at the government or Ministry of Statistics level.

Now let’s go to the completely opposite end. I’m a small-business owner setting up a physical shop. Where should I set it up? Where I set it up depends on mobility traffic, on the demographics and affordability in that area, on all types of things. It’s a large data question. But that MSME owner is often ill-equipped to answer any of those questions and is often taking a shot in the dark. And that shot in the dark is costly if they’re wrong, because they carry the full risk of that decision. Now, with the data commons we’re building, the question becomes: can we reduce that risk for that individual?

Can we help them model, understand and de-risk the decision they’re making, based on the audience they want, the footfall they want, the location they’re choosing? That’s a very specific example. But these are two very opposite examples of how bringing all of this data together, which we often think of as aligned with international organisations or a government ministry, is actually usable on the ground by an individual too.

Speaker 1

Tell us a bit more: if someone wants to put up a Data Commons instance, how can they get started?

Prem Ramaswami

It’s actually quite simple. It’s easy enough that I can do it myself, which means you can. datacommons.org is an open-source platform, and we have a 20-minute guide to get started. You can set the whole thing up on your computer: take your CSV dataset and bring it in. And the thing is, once you bring one dataset in, it overlays with all the datasets already in Data Commons; this creates a sort of network effect between the data. Right? So if I am a chain store in India trying to figure out the next store location, and I bring in all my per-store sales revenue data once, then suddenly I can compare it to, and overlay it with, the 50,000 datasets already in Data Commons.
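Conceptually, the "bring your CSV in and it overlays" step reduces to joining your data with an existing indicator on a shared place key. The sketch below uses only hypothetical data and stand-in numbers; it is not the actual Data Commons import pipeline, just the idea behind the overlay.

```python
import csv
import io

# Hypothetical per-store revenue CSV (the dataset you "bring in").
store_csv = """district,revenue
Pune,120
Jaipur,80
"""

# Stand-in for an indicator already in the commons, keyed by the same place.
# The numbers are illustrative, not real statistics.
population_millions = {"Pune": 7.2, "Jaipur": 4.1}

# Because both datasets share a place key, overlaying is a simple join.
overlay = []
for row in csv.DictReader(io.StringIO(store_csv)):
    district = row["district"]
    overlay.append({
        "district": district,
        "revenue": float(row["revenue"]),
        "population_m": population_millions[district],
    })

for record in overlay:
    print(record)
```

The network effect comes from doing this mapping once: after your data is keyed to the shared schema, every existing dataset becomes a candidate overlay.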

Before, if I wanted to do this as a chain store in India, I would normally have my people come up with maybe 10 or 12 different hypotheses, because I would then have to get those 10 or 12 different datasets and write as many data transforms so that they’re all in the same format. That prevents us from having the level of creativity we want, where we can look across the entire landscape of the problem. And so this is one of the things.

Rohit Bardawaj

Right answer. And it was a matter of trust for NSO also: people were getting different answers for data created by NSO. That made us look toward an MCP server. First, it is open, so it makes our data interoperable with almost all AI systems; I’m not saying all. Otherwise, what would happen? Every LLM has its own API standards, so you would have to create those APIs first and then somehow get the LLM to call them. With this connector, it’s like the USB-C socket for a phone charger, if I may use the parallel: you can plug anything into a USB-C socket.

That’s what MCP is. The data comes, and the LLM comes and plugs into MCP. It allows any LLM to connect. But what you have to do is connect that small tool with your LLM. That’s a one-minute job, and it’s available on our website: go to www.mospi.gov.in, and in the offerings section everything is available. You can do it in one minute; maybe two minutes at the most. Anyone can. But there is still one challenge, which I must tell you: we somehow need to ensure that this becomes a default tool, so the user does not have to add it. Somebody forgets to add it, and then the same situation starts happening again.
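The "one-minute job" of connecting the tool to an LLM client typically amounts to adding a single entry to the client's configuration file. The sketch below builds such an entry; the `mcpServers` key follows the convention used by several MCP clients, but the server name and URL here are placeholders, not MoSPI's actual endpoint.

```python
import json

# Illustrative client-side registration of a remote MCP server.
# Server name and URL are placeholders, not real endpoints.
config = {
    "mcpServers": {
        "nso-data": {
            "url": "https://example.gov.in/mcp",  # placeholder URI
        }
    }
}

# This is what you would paste into the client's config file.
print(json.dumps(config, indent=2))
```

Once registered, the client discovers the server's tools over the protocol, so the user never leaves their chat workflow to fetch the data.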

So right now people have to add it to their tools. But the biggest advantage I see is that people don’t have to come out of their workflow. If I have paid for a costly Claude Pro, I don’t have to leave it and go to some portal to get the data analysis; I can keep using the intelligence of Claude or ChatGPT, I don’t have a preference, with the verified data, as he talked about, the verified data of MoSPI. And the use cases are innumerable on the web now; people have just lapped it up. My favourite involves a Tamil song which talks about a lot of grains.

One of the messages I got, and I’ll share the link, it’s on Twitter, I mean X now, is that somebody created a CPI for all the grains mentioned in that song. CPI, the consumer price index, basically talks about inflation. They took the grains out of the song, wheat and so on, and created a CPI index for them, and named it something like the P index, after the song’s name; pardon me, I’m not very conversant in Tamil, but I’ll share that link. That’s my favourite use case. What I mean to say is that people can use the data the way they like. That’s the bottom line, and that’s the NSO’s idea.

Shalini Kapoor

That is the most interesting use case I will have seen, and I really want to see it; yes, I’ll have a look at it. One more thing I want to tell the audience: building on the use case Rohit-ji mentioned, where someone can just pick up the data, we have created a concept called the data boarding pass. This is for an AI-ready India.

This is a physical copy, but the concept is that once your data is ready, it has to pass a set of checks. Once it passes, then as a B2B player, you could be a policymaker, a researcher or a market player wanting to build on top of it, you can take this data boarding pass and get onboarded for data usage, so that you can pick up the data and start using it in your applications. So, say at a district level, and I’m just painting a scenario, you have a data commons where the knowledge graph and the data have all been combined together, with the right context and everything.
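The boarding-pass gate described here can be sketched as a readiness checklist that the dataset must pass in full before access is granted. The checklist items below are illustrative, drawn from the readiness steps discussed earlier in the session, not an official specification.

```python
# Sketch of a "data boarding pass" gate: the dataset earns the pass only
# when every readiness check passes. Checklist items are illustrative.
CHECKLIST = [
    "catalogued",
    "metadata_machine_readable",
    "codes_standardised",
    "structured",
]

def boarding_pass(dataset_status: dict) -> bool:
    """Issue a boarding pass only if all readiness checks are satisfied."""
    return all(dataset_status.get(item, False) for item in CHECKLIST)

ready = {"catalogued": True, "metadata_machine_readable": True,
         "codes_standardised": True, "structured": True}
not_ready = {"catalogued": True, "metadata_machine_readable": False}

print(boarding_pass(ready))      # True
print(boarding_pass(not_ready))  # False
```

Missing checks default to failed, so a dataset cannot board by omission.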

And now some organisation, say an automobile MSME manufacturer, wants to access it and give information to dealers: where scooters are being sold, where motorcycles are being sold, and what the income of that region has been over a period of time. That can be possible now. Right. So the data boarding pass enables it, makes it possible. And if you want to see physically how this works, visit our booth, at the Step Foundation stand in Hall 3 on the first floor. Do visit; my team will be there to show you the actual generation of the data boarding pass. I think we have covered a lot. We have less time, but I want to take a couple of questions from the audience.

So feel free to ask. We have four minutes, so we can take two or three questions from the audience. I saw that hand first, sorry, and then I saw you, so you’re next. Yeah, please go ahead. Can someone give him a mic, please? Otherwise, I’ll hand over mine.

Audience

Thank you very much. I wanted to ask about the business models of these platforms, because it is obviously extremely important to have high-quality data, but high-quality data is also expensive to collect and to maintain over time. So have you worked on how you can sustain these kinds of platforms over time? Does it have to be publicly paid, or whatever models you may have? And the question is for everybody, I think.

Shalini Kapoor

Go ahead, then I’ll also add.

Rohit Bardawaj

So, just a quick clarification on that. The National Statistics Office of India is fully funded by the Government of India. As we all know, national statistics offices everywhere are publicly funded, through public money. So it’s our job to create data and make it available to the public. At the same time, one quick disclaimer: open data is not free data; somebody has paid for it. So depending on the use, we provide the data. If the use is research and the like, I’m not getting into the details, then it’s free.

But if the use is commercial, then of course there is a system, there is a policy for it, and people have to pay accordingly.

Shalini Kapoor

Yeah. I’ll also answer, because we have done a good amount of work here. I would encourage you to see a paper I’ve put up on our People Plus AI website, which talks about the GIVE model for data. G is guaranteed trust, and we talked about that. I is incentive: why should I bring the data, and what will I get from it? V is value: if the data has no value, nobody is interested. And E is exchangeability: can I share the data? I’ll focus on the I, the incentive. There has to be an incentive for someone to bring the data, and an incentive for someone to use the data, and that value will be monetised. That is the data economy. If you ask me, this data economy is actually running today without a formal mechanism; there is a good amount of money in selling data, buying data, lead generation, a huge number of things happening. This formalises it. But what the price will be, the data economy has to stabilise; that has to happen at the regional level with the private sector. We have been working in that direction, so that the incentive model is clear, but the actual price comes from a discovery mechanism.

Audience

It’s very interesting to hear all this, it’s amazing. One very common scenario we see every day, and it troubles us a little, is a road being made and then dug up again after a few days. It might not feel good to say, but that’s how it is; somewhere it feels like a disconnect in the data, or a decision in the policymaking. So do we have some way of applying these kinds of pieces to, say, the tender ecosystem, so that you don’t have a road made and then dug up for a pipeline within a very short window?

Shalini Kapoor

Yeah, maybe I’ll answer it. See, India has put the whole digital public infrastructure in place; that is the DPI thinking, whether UPI, Aadhaar, DigiLocker or DigiYatra, these were digital rails that were put together. The data infrastructure we talked about today is going to be such a rail. Is it going to be dug up? Are there going to be holes in it? Maybe. No promises. But I think it’s a journey: if we don’t start it now, it’s going to hit us later on. So no promises, but yes. Rohit, do you have anything to add?

Rohit Bardawaj

I just want to add that we need to keep working on these data-sharing platforms and all the philosophies we just talked about, accessibility, sharing, analysis, use of AI, and things will improve slowly but steadily. I’m very sure about it.

Shalini Kapoor

Time is up, and the next session is going to start. So thank you so much for listening in to the AI-ready data session, and please visit the booth to see it actually in action. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (31)
Factual NotesClaims verified against the Diplo knowledge base (3)
Confirmedhigh

“Enterprises and governments hold vast quantities of information in fragmented PDFs, legacy systems and isolated silos.”

The knowledge base explicitly notes that valuable information remains trapped in PDFs, documents and isolated systems across enterprises and government organizations, confirming the claim.

Confirmedhigh

“The ‘information divide’ prevents entrepreneurs and citizens from accessing relevant data such as government notifications.”

The source describes an information divide where entrepreneurs and citizens cannot access relevant data, corroborating the statement.

Additional Contextmedium

“A lack of trust in sharing data with AI systems compounds the information divide.”

The knowledge base highlights the need for a trust infrastructure so users feel comfortable with AI outputs, adding nuance to the claim about trust issues.

External Sources (113)
S1
Safe and Responsible AI at Scale Practical Pathways — Ashish Srivastava brought a practitioner’s perspective, highlighting three critical challenges: data interoperability ac…
S2
Safe and Responsible AI at Scale Practical Pathways — – Ashish Srivastava- Prem Ramaswami
S3
https://dig.watch/event/india-ai-impact-summit-2026/building-scalable-ai-through-global-south-partnerships — Thank you, Sunil. Are we I think we have a change of plans. Thank you so much. And Sunil, if you could please stay on st…
S4
Building Scalable AI Through Global South Partnerships — – Sunil Wadhwani- Shalini Kapoor
S5
Safe and Responsible AI at Scale Practical Pathways — – Shalini Kapoor- Ashish Srivastava
S6
Safe and Responsible AI at Scale Practical Pathways — – Rohit Bardawaj- Audience – Rohit Bardawaj- Prem Ramaswami
S7
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S8
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S9
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S10
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S11
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S12
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S13
Collaborative AI Network – Strengthening Skills Research and Innovation — This comment provides a systematic framework for thinking about data preparation for AI, moving beyond generic discussio…
S14
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S15
https://dig.watch/event/india-ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — And here, India is not waiting for permission. India is not waiting for permission. India is showing that it can be done…
S16
HIGH LEVEL LEADERS SESSION I — Additionally, for people impacted by decisions made from collected data, trust in the institutions collecting the data n…
S17
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — A sort of symbolic nod to open data. It can turn into an unguarded channel through which value, agency and even sovereig…
S18
https://dig.watch/event/india-ai-impact-summit-2026/safe-and-responsible-ai-at-scale-practical-pathways — And some organization. Now wants to know, say, the automobile. MSME manufacturer wants to access it and give information…
S19
From Innovation to Impact_ Bringing AI to the Public — Audience questions and Sharma’s responses highlight specific applications: agricultural models that can analyse visual d…
S20
https://dig.watch/event/india-ai-impact-summit-2026/from-india-to-the-global-south_-advancing-social-impact-with-ai — So good evening. My name is Ashish Pratap Singh. I am the CEO of Prasima AI. My father runs an MSME business in Lucknow….
S21
WS #55 Future of Governance in Africa — While technology is important for advancing governance, it must be accompanied by proper infrastructure and public aware…
S22
Nri Collaborative Session Data Governance for the Public Good Through Local Solutions to Global Challenges — Indigenous data sovereignty and Pacific context Legal and regulatory | Development | Infrastructure Nancy identifies d…
S23
Digital politics in 2017: Unsettled weather, stormy at times, with sunny spells — Policy silos are reducing the effectiveness of digital policy. As the issue of data governance (Trend 5) shows, it is di…
S24
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — When we build LLM, we benchmark them, we evaluate the performance on benchmarks. And we have seen, like, there are only …
S25
Building the Next Wave of AI_ Responsible Frameworks & Standards — “The second most important element in this framework is to ensure these safety benchmarks are co -created with the indus…
S26
UNSC meeting: Artificial intelligence, peace and security — Switzerland:Thank you, Madam President. We are grateful to the Secretary General, Antonio Guterres, for participating in…
S27
Main Session on Artificial Intelligence | IGF 2023 — Seth Center:IAEA is an imperfect analogy for the current technology and the situation we faced for multiple reasons. One…
S28
Multistakeholder Partnerships for Thriving AI Ecosystems — How to address data fragmentation and silos that exist even within individual enterprises
S29
How to make AI governance fit for purpose? — All speakers recognize that AI’s global nature requires international cooperation and coordination, though they may diff…
S30
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Adham Abouzied emphasized the need for comprehensive governance structures that encourage data and intellectual property…
S31
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Audience:Thank you very much. My name is Auke Aukepals, and I work for KPMG in the responsible AI practice. And first of…
S32
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Pellerin Matis: I think government can really learn from the private sector because there is lots of technologies and …
S33
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Given the strategic importance of data for both AI and the digital economy, a collaborative approach involving multiple …
S34
Strategy — – Make better decisions – AI can provide timely analytics and data-driven insights to make better decisions, for example…
S35
Democratizing AI Building Trustworthy Systems for Everyone — “Because if I had to point to anything that’s holding back AI today, it’s not capability, it’s reliability, right?”[62]….
S36
Who Watches the Watchers Building Trust in AI Governance — “and also the multi -turn nature of AI”[9]. “They can still be jailbroken with enough effort or in edge cases, and it’s …
S37
Connecting open code with policymakers to development | IGF 2023 WS #500 — Helani Galpaya:Okay, I mean I’ll go on the data part I think. Sort of the superficial answer is it’s actually very diffi…
S38
AI shows promise in supporting emergency medical decisions — Drexel University researchers studied howAI can aid emergency decisions in pediatric traumaat Children’s National Medica…
S39
Research shows AI complements, not replaces, human work — AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task…
S40
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S41
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — This scenario encapsulates the broader dilemma facing humanity: when AI consistently provides superior performance, what…
S42
Indias AI Leap Policy to Practice with AIP2 — The speakers demonstrated strong consensus on fundamental prerequisites for AI diffusion: skills development, clear gove…
S43
From Technical Safety to Societal Impact Rethinking AI Governanc — Both speakers support government involvement but disagree on scope – Ioannidis wants to keep core technology development…
S44
AI outperforms humans in debate persuasiveness — AI can be morepersuasivethan humans in debates, especially when given access to personal information, a new study finds….
S45
Importance of Professional standards for AI development and testing — Despite coming from different perspectives, both speakers agree that ethics should be flexible and contextual rather tha…
S46
Global AI Policy Framework: International Cooperation and Historical Perspectives — Despite coming from different backgrounds (diplomatic/legal vs academic), both speakers advocate for patience and carefu…
S47
Driving Indias AI Future Growth Innovation and Impact — But there was also a lot of fear around AI about trust factors, about privacy, data, sovereignty, multiple issues about …
S48
Data first in the AI era — This provided a unifying framework for understanding all the various tensions discussed – between convenience and privac…
S49
The Foundation of AI Democratizing Compute Data Infrastructure — And they could be partly technological and partly policy -based or protocol -based. And a combination of this will ensur…
S50
Adoption of agentic AI slowed by data readiness and governance gaps — Agentic AI is emerging as a new stage ofenterprise automation, enabling systems to reason, plan, and act across workflow…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: Yes, thank you so much. My name is Alex Maltzau. And I work as a second national expert in the European AI…
S52
Panel #1 : « La gouvernance du numérique au service de l’inclusion : enjeux, freins, et opportunités » — Au lieu que les États pensent avoir toutes les compétences pour résoudre les problèmes, il faut adopter une approche inv…
S53
https://dig.watch/event/india-ai-impact-summit-2026/safe-and-responsible-ai-at-scale-practical-pathways — Yeah, I’ll give two very. These might not be exactly where the sandbox is today, but where it could go tomorrow. Right. …
S54
Data Governance in the Context of Emerging Technologies: Promoting Human-Centred and Development-Oriented Societies   — In the context of this data-driven economy, the governance of this key asset should be tackled in a multilayered way. On…
S55
Data governance — The fact that free flow of data across national and corporate borders facilitates economic development and contributes t…
S56
Why science metters in global AI governance — Finally, let us be clear. Science informs, but humans decide. Our goal is to make human control a technical reality, not…
S57
Safe and Responsible AI at Scale Practical Pathways — “guardrails human in the loop risk assessment these are the tools which are available today …”[95]. “If we immediately…
S58
Can we test for trust? The verification challenge in AI — **Chris Painter** highlighted the need for standardization of frontier safety policies and dangerous capability evaluati…
S59
Democratizing AI Building Trustworthy Systems for Everyone — The participant points out that trustworthiness depends on system responsiveness, accessibility and reliability at the e…
S60
Elections and the Internet: free, fair and open? | IGF 2023 Town Hall #39 — Data needed for policy making needs to reflect their specific local contexts
S61
Overcoming the fragmentation of the digital governance: what role for the Global Digital Compact and e-trade rules? (South Centre) — The analysis explores ongoing negotiations surrounding global digital governance and highlights the need for increased e…
S62
Closing remarks – Charting the path forward — Al Mesmar emphasizes the importance of unified policy approaches that can adapt to technological changes while maintaini…
S63
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — Data fragmentation within countries hinders effective data integration and utilization for decision-making, which needs …
S64
Safe and Responsible AI at Scale Practical Pathways — -Data Fragmentation and Silos: The discussion highlighted how valuable information remains trapped in PDFs, documents, a…
S65
Nri Collaborative Session Data Governance for the Public Good Through Local Solutions to Global Challenges — Examples include delays in issuing birth certificates in Papua New Guinea due to lack of coordinated data systems, and F…
S66
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Data is extremely siloed and still available in paper format in many situations
S67
AI and Digital in 2023: From a winter of excitement to an autumn of clarity — At the technical level, data needs standards in order to be interoperable. Here, the work of standardisation and technica…
S68
Collaborative AI Network – Strengthening Skills Research and Innovation — This comment provides a systematic framework for thinking about data preparation for AI, moving beyond generic discussio…
S69
From Innovation to Impact: Bringing AI to the Public — Audience questions and Sharma’s responses highlight specific applications: agricultural models that can analyse visual d…
S71
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Pellerin Matis: I think government can really learn from the private sector because there is lots of technologies and …
S72
How Small AI Solutions Are Creating Big Social Change — The fourth one is safety. When we build LLMs, usually we do some safety alignments with reinforcement learning, but thes…
S73
Town Hall: How to Trust Technology — The discussion revolves around the topic of artificial intelligence (AI) and large language models (LLMs). One viewpoint…
S74
Connecting open code with policymakers to development | IGF 2023 WS #500 — Helani Galpaya:Okay, I mean I’ll go on the data part I think. Sort of the superficial answer is it’s actually very diffi…
S75
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Develop marketplace mechanisms for incentivizing data contributors through revenue sharing models
S76
Defending the Cyber Frontlines / Davos 2025 — The discussion began with a serious, concerned tone as panelists outlined cyber threats and challenges. As the conversat…
S77
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S78
Day 0 Event #256 Truth Under Siege: Tools to Counter Digital Censorship — The discussion maintained a serious, concerned tone throughout, reflecting the gravity of the challenges being discussed…
S79
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S80
Comprehensive Report: Cyber Fraud and Human Trafficking – A Global Crisis Requiring Multilateral Response — The tone began as deeply concerning and urgent, with speakers emphasizing the gravity and scale of the problem. However,…
S81
Revamping Decision-Making in Digital Governance and the WSIS Framework — The discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s poi…
S82
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S83
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S84
Transforming Agriculture: AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S85
Regional experiences on the governance of emerging technologies NRI Collaborative Session — The overall tone was collaborative and solution-oriented. Participants shared insights from their regions in a construct…
S86
Afternoon session — The discussion began with a collaborative and appreciative tone as various stakeholders shared their visions and commitm…
S87
Final plenary session and adoption of the interim report — The necessity to monitor red lines while finding agreement outside these lines was highlighted.
S88
Webinar session — The discussion maintained a diplomatic and constructive tone throughout, with participants demonstrating nuanced thinkin…
S89
Pathways to De-escalation — The overall tone was serious and somewhat cautious, reflecting the gravity of cybersecurity challenges. While the speake…
S90
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S91
Using AI to tackle our planet’s most urgent problems — The tone is passionate and advocacy-driven throughout, with the speaker maintaining an urgent, morally-charged perspecti…
S92
Business Engagement Session: Sustainable Leadership in the Digital Age – Shaping the Future of Business — The discussion maintained a consistently collaborative and optimistic tone throughout. It began with academic framing bu…
S93
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S94
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S95
Is the AI bubble about to burst? Five causes and five scenarios — Behind the diminishing returns are conceptual and logical limitations of Large Language Models (LLMs), which cannot be r…
S96
Steering the future of AI — 2. **Persistent memory**: Current LLMs cannot maintain long-term memory across interactions. 3. **Reasoning capabilitie…
S97
How AI Is Transforming Diplomacy and Conflict Management — He argues that relying solely on large language models is problematic because their fluency is not verifiable in interna…
S98
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — The analysis discussed various aspects of language models (LLMs) and artificial intelligence (AI). One key point raised …
S99
Building Inclusive Societies with AI — Arundhati Bhattacharya, Chairperson and CEO of Salesforce India, emphasized that India’s scale demands digital solutions…
S100
WSIS Action Line C7: E-Agriculture — Garba advocated for integrated policy frameworks and emphasised that private sector telecommunications providers require…
S101
WS #97 Interoperability of AI Governance: Scope and Mechanism — Yik Chan Chin: Thank you, Olga. So, I speak on behalf of the PNAI because I’m the co-leader of the subgroup on the inte…
S102
Skilling and Education in AI — “Five second response, I think the one action that we need to take is improve the trust infrastructure and make sure tha…
S103
Law, Tech, Humanity, and Trust — Samit D’Cunha: Thanks, Joelle. That’s a really fair and, I think, necessary question. Maybe I’ll actually answer this qu…
S104
Government notices · Goewermentskennisgewings — There have been concerns raised, however, about the efficacy of the current structures. These stem mainly from ICASA’s…
S105
Operationalizing data free flow with trust | IGF 2023 WS #197 — David Pendle:as we aim to build trust? Thanks Tamim. So I sit on Microsoft’s law enforcement national security team whic…
S106
Laying the foundations for AI governance — Xue explains that there is a shared uncertainty about future risks and problems, with both regulators and companies lack…
S107
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Armando José Manzueta-Peña:Well, thank you, Luca, for the presentation. I’m more than thrilled to be present here and to…
S108
Building Population-Scale Digital Public Infrastructure for AI — These key comments fundamentally shaped the discussion by progressively deepening the analysis from technical implementa…
S109
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Mlindi Mashologu: As the country, South Africa, we assume the G20 presidency and I think it’s important to note our bann…
S110
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — A significant portion of the discussion focused on the need for cross-border collaboration and harmonized policy framewo…
S111
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — Very low level of disagreement. The speakers were largely aligned on goals and strategies, with differences mainly in em…
S112
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Elena Plexida: Thank you, Wolfgang. Thank you very much. Hello everyone. Yes, exactly. As you said, I work for one of th…
S113
High-level AI Standards panel — Paul Gaskell: Thank you, Bilel. So, I mean, as a government, we recognize that digital standards really matter. So we’re…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Shalini Kapoor
9 arguments · 128 words per minute · 2572 words · 1200 seconds
Argument 1
Data trapped in PDFs and lack of trust hampers AI use (Shalini Kapoor)
EXPLANATION
Shalini points out that a large amount of valuable information resides in PDFs and other documents that organisations are reluctant to share with AI systems. This mistrust prevents AI from accessing and leveraging that data effectively.
EVIDENCE
She notes that information is “stuck in PDFs, stuck in documents” and that there is “a fear, there’s lack of trust today” which keeps the data where it is, even though AI thrives on data [5-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 highlights that valuable information remains locked in PDFs and fragmented silos, and a lack of trust prevents organisations from sharing data with AI systems, underscoring the need for interoperable, trusted data.
MAJOR DISCUSSION POINT
AI‑readiness of data & fragmented silos
Argument 2
AI‑ready data must be cleaned, linked, safe, trusted and interoperable (Shalini Kapoor)
EXPLANATION
She emphasizes that for data to be AI‑ready it must undergo cleaning, linking, and be presented in a safe and trusted manner. Interoperability and proper structuring are essential to make the data useful for AI applications.
EVIDENCE
She states that “the data has to be AI ready… safe, trusted manner, the data can be linked, made useful and then made available” and later describes the process of cleaning, linking, making data relevant and useful [17][20-22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 stresses that AI‑ready data must be safe, trusted, cleaned and linked to become useful, while S13 provides a systematic framework that breaks down these preparation steps.
MAJOR DISCUSSION POINT
AI‑readiness of data & fragmented silos
AGREED WITH
Rohit Bardawaj, Prem Ramaswami
Argument 3
Trustworthy, safe and publicly accessible data is a core institutional responsibility (Shalini Kapoor)
EXPLANATION
Shalini asks the panel what responsibility institutions have to ensure data is trustworthy, safe, and openly available. She frames this as a duty of public bodies to make data usable for AI while protecting its integrity.
EVIDENCE
She poses the question to Rohit: “what do you think is the responsibility of institution how and yours is an institution to make the data trusted safe and available to all” [31-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 and S14 discuss the importance of trust, provenance and public accessibility of data as institutional duties for enabling AI.
MAJOR DISCUSSION POINT
Role of institutions & governance frameworks
Argument 4
Example of a 5,000‑term Marathi glossary for agricultural AI use‑cases (Shalini Kapoor)
EXPLANATION
Shalini describes a large domain‑specific glossary created in Marathi to support agricultural AI applications. The glossary contains thousands of localized terms that enable contextual understanding by AI models.
EVIDENCE
She mentions that “we actually created glossary of 5000 terms which is it is in Marathi” and that it was used in agricultural AI experiments [84-86].
MAJOR DISCUSSION POINT
Contextualisation & domain‑specific glossaries
AGREED WITH
Ashish Srivastava, Prem Ramaswami
Argument 5
MSME compliance queries involving millions of annual data points (Shalini Kapoor)
EXPLANATION
Shalini highlights the massive scale of compliance queries faced by micro, small and medium enterprises (MSMEs), noting millions of new compliance questions each year. This illustrates the data volume challenge that AI‑ready solutions must address.
EVIDENCE
She cites an organization handling “3,000 entities” that manage “5 million new compliances in a year” and the associated query load [23-27].
MAJOR DISCUSSION POINT
Practical use cases & applications
Argument 6
“Data boarding pass” concept to onboard and monetize AI‑ready datasets for B2B use (Shalini Kapoor)
EXPLANATION
Shalini introduces a “data boarding pass” framework that certifies datasets as AI‑ready through a checklist, enabling businesses and policymakers to access and monetize the data. It aims to streamline data onboarding and create a market for trusted data assets.
EVIDENCE
She describes the concept as a physical and digital checklist that, once passed, allows B2B players to “pick the data and then start using it in your applications” and gives a concrete scenario involving automobile MSME manufacturers [353-362].
MAJOR DISCUSSION POINT
Practical use cases & applications
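The "data boarding pass" described above is essentially a checklist gate: a dataset is cleared for B2B use only once every AI-readiness check passes. A minimal sketch, assuming illustrative check names (the actual checklist items were not specified in the session):

```python
# Hypothetical "data boarding pass": each named check is a predicate over a
# dataset descriptor; the pass is issued only when all checks succeed.
CHECKS = {
    "has_catalog": lambda d: "catalog" in d,
    "has_metadata": lambda d: "metadata" in d,
    "pii_scrubbed": lambda d: d.get("pii_scrubbed") is True,
    "license_cleared": lambda d: bool(d.get("license")),
}

def boarding_pass(dataset: dict) -> tuple[bool, list[str]]:
    """Return (passed, list of failed check names) for one dataset."""
    failures = [name for name, check in CHECKS.items() if not check(dataset)]
    return (not failures, failures)
```

A B2B consumer would then only "pick the data and start using it" when `boarding_pass(...)` returns `(True, [])`; the failure list tells the provider what still blocks onboarding.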
Argument 7
Sustainable data economy needs incentives, clear value, and exchangeability for contributors (Shalini Kapoor)
EXPLANATION
Shalini outlines a GIVE model (Guaranteed trust, Incentive, Value, Exchangeability) to motivate data providers and users. She argues that a clear incentive structure is essential for a functional data economy.
EVIDENCE
She references a paper on the “GIVE” model, explaining each component (trust, incentive, value, exchangeability) and stresses that incentives must be clear for contributors and users [391-399].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 outlines the GIVE model—guaranteed trust, incentive, value, exchangeability—as essential for a sustainable data economy.
MAJOR DISCUSSION POINT
Business models & sustainability of data platforms
AGREED WITH
Audience, Rohit Bardawaj
Argument 8
Technology alone cannot solve data silos without accompanying governance (Shalini Kapoor)
EXPLANATION
Shalini argues that purely technical solutions are insufficient; effective governance frameworks are required to address data silos. She stresses the need for policies that balance data sovereignty with AI capabilities.
EVIDENCE
She remarks that “you don’t want to give you maybe want to keep the data and the sovereignty comes in” and that “countries want to keep the data with themselves” highlighting the governance dimension [149-152].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S21 notes that technology must be paired with proper governance structures, and S23 warns that policy silos hinder effective data governance.
MAJOR DISCUSSION POINT
Governance vs. technology debate for alternative data
AGREED WITH
Rohit Bardawaj, Prem Ramaswami
Argument 9
Developing benchmarks to measure answer stability across LLMs and users (Shalini Kapoor)
EXPLANATION
Shalini notes ongoing work to create benchmarks that assess whether repeated queries to LLMs produce consistent answers across models and users. Such benchmarks are intended to improve trust in AI outputs.
EVIDENCE
She explains that they are “working to create a benchmark” by testing if the same question yields the same answer across LLMs and multiple users, using examples like Amul AI and Bharat Vistar [84-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 points out variability in LLM answers and the need for benchmarking; S24 and S25 discuss the creation and co‑creation of benchmarks to improve reliability.
MAJOR DISCUSSION POINT
Trust, stability and benchmarking of AI outputs
AGREED WITH
Rohit Bardawaj, Ashish Srivastava
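The benchmarking idea above can be sketched as a stability check: ask each model the same question repeatedly and score how often the modal answer recurs. This is a minimal illustration under assumed interfaces (each model is any callable taking a question string), not the panel's actual benchmark:

```python
from collections import Counter

def stability_score(ask_fn, question: str, runs: int = 5) -> float:
    """Fraction of runs that return the modal answer (1.0 = fully stable)."""
    answers = [ask_fn(question) for _ in range(runs)]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / runs

def benchmark(models: dict, questions: list[str], runs: int = 5) -> dict:
    """Average per-question stability for each model under test."""
    return {
        name: sum(stability_score(fn, q, runs) for q in questions) / len(questions)
        for name, fn in models.items()
    }
```

Extending the same scoring across users (different phrasings of one question) would capture the cross-user consistency the speaker describes.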
Rohit Bardawaj
9 arguments · 185 words per minute · 2308 words · 746 seconds
Argument 1
Institutions need a shared definition and framework for AI readiness (Rohit Bardawaj)
EXPLANATION
Rohit argues that without a common definition of AI readiness, institutions cannot coordinate efforts to prepare data for AI. He calls for a consensus framework that outlines the necessary standards and processes.
EVIDENCE
He questions whether a “uniform definition of what is AI readiness” exists and stresses the need for an “agreed framework” that institutions can adopt [33-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 emphasizes the lack of a uniform AI‑readiness definition and calls for a consensus framework among institutions.
MAJOR DISCUSSION POINT
AI‑readiness of data & fragmented silos
AGREED WITH
Shalini Kapoor, Prem Ramaswami
Argument 2
AI‑ready data requires machine‑readable catalogs, metadata and context files (Rohit Bardawaj)
EXPLANATION
Rohit outlines the technical components needed for AI‑ready data: a machine‑readable catalog (preferably JSON), comprehensive metadata, and a context file that explains domain‑specific terms. These elements enable AI systems to interpret data correctly.
EVIDENCE
He details the need for a “catalog of your data” in JSON, accompanying “metadata” and a “context file” that clarifies meanings such as “frequency” [184-205].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S13 provides a systematic approach that includes machine‑readable catalogs, rich metadata and domain context files as core components of AI‑ready data.
MAJOR DISCUSSION POINT
AI‑readiness of data & fragmented silos
AGREED WITH
Shalini Kapoor, Prem Ramaswami
Argument 3
Structured data with standardized codes and dimensions is essential (Rohit Bardawaj)
EXPLANATION
Rohit emphasizes that data must be standardized, with uniform codes and clearly defined dimensions, to be usable by AI. Without such structure, AI models cannot reliably interpret fields like time or frequency.
EVIDENCE
He discusses standardizing codes, defining dimensions and attributes, and clarifying that “time means temporal” to make data machine-understandable [208-221].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S13 stresses the need for standardized codes, clear dimensions and attributes to make data machine‑understandable for AI.
MAJOR DISCUSSION POINT
AI‑readiness of data & fragmented silos
Argument 4
MOSB should lead creation of an agreed AI‑readiness framework (Rohit Bardawaj)
EXPLANATION
Rohit states that the Ministry of Statistics and Programme Implementation (MOSB) has a key responsibility to spearhead the development of a national AI‑readiness framework. This leadership would help align stakeholders around common standards.
EVIDENCE
He says “the biggest responsibility of our institutions like MOSB to make people aware what AI readiness is all about” [43].
MAJOR DISCUSSION POINT
Role of institutions & governance frameworks
Argument 5
A data steward and federated model are needed to govern alternative data sources (Rohit Bardawaj)
EXPLANATION
Rohit proposes appointing a data steward and adopting a federated data model to manage alternative data sources. This approach ensures that no single entity owns all data and that governance is distributed.
EVIDENCE
He mentions the need for a “federated model” and a “data steward” to orchestrate the ecosystem, noting that “there cannot be one whole sole owner for a data” [181-184].
MAJOR DISCUSSION POINT
Role of institutions & governance frameworks
AGREED WITH
Shalini Kapoor, Prem Ramaswami
Argument 6
Identical prompts can yield different analyses; consistency is a trust issue (Rohit Bardawaj)
EXPLANATION
Rohit highlights research showing that the same prompt given to AI with the same dataset can produce divergent analyses, undermining trust in AI outputs. He warns against being overly enthusiastic before the technology is reliably tested.
EVIDENCE
He references a paper where “the same prompt to AI with the same data set, it gives you two types of analysis” [80-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 reports that the same prompt can produce divergent analyses across LLMs, highlighting a trust problem that benchmarking (S24) aims to address.
MAJOR DISCUSSION POINT
Trust, stability and benchmarking of AI outputs
AGREED WITH
Shalini Kapoor, Ashish Srivastava
Argument 7
Integrating alternative data is primarily a governance challenge, not just a technical one (Rohit Bardawaj)
EXPLANATION
Rohit asserts that the main obstacle to incorporating alternative data lies in governance rather than technology. He encourages the audience to view it as a policy and stewardship issue.
EVIDENCE
He conducts a poll asking whether the issue is “governance” or “technology” and concludes it is a governance issue [160-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S21 and S23 argue that governance, not technology, is the main barrier to incorporating alternative data sources.
MAJOR DISCUSSION POINT
Governance vs. technology debate for alternative data
AGREED WITH
Shalini Kapoor, Prem Ramaswami
Argument 8
Federated stewardship and policy frameworks are essential before technical solutions (Rohit Bardawaj)
EXPLANATION
Rohit reiterates that before deploying technical tools, a federated stewardship model and clear policy frameworks must be established to manage data responsibly. Governance sets the foundation for any technical implementation.
EVIDENCE
He emphasizes that “we need a federated model” and that “we need to understand first” the governance aspects before technical work [177-184].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S21 stresses the need for federated stewardship models and clear policy frameworks as prerequisites for any technical implementation.
MAJOR DISCUSSION POINT
Governance vs. technology debate for alternative data
Argument 9
NSO is publicly funded; commercial use may be charged under a policy (Rohit Bardawaj)
EXPLANATION
Rohit explains that the National Statistical Office (NSO) receives public funding, making its data free for research but potentially chargeable for commercial applications. A policy governs the pricing for commercial use.
EVIDENCE
He states that “NSO is fully funded by the Government” and that “if the use is commercial, then there is a policy and people have to pay accordingly” [380-388].
MAJOR DISCUSSION POINT
Business models & sustainability of data platforms
AGREED WITH
Shalini Kapoor, Audience
Prem Ramaswami
7 arguments · 188 words per minute · 2119 words · 672 seconds
Argument 1
Data Commons offers an open‑source, federated stack that lets organisations govern data locally (Prem Ramaswami)
EXPLANATION
Prem describes Data Commons as an open‑source platform that aggregates data globally while allowing each organization to retain local governance. This federated approach prevents centralised control and supports diverse data owners.
EVIDENCE
He notes that Data Commons “open-sourced the entire stack” and is used by entities like the United Nations Statistics Division, enabling local governance of data [55-64].
MAJOR DISCUSSION POINT
Knowledge graphs, Data Commons and open‑source solutions
AGREED WITH
Shalini Kapoor, Rohit Bardawaj
Argument 2
Data Commons aggregates global datasets into a knowledge graph with an AI search layer (Prem Ramaswami)
EXPLANATION
Prem explains that Data Commons combines multiple datasets into a common knowledge graph and places an AI‑powered search engine on top, allowing rapid data discovery and analysis.
EVIDENCE
He states that Data Commons “bring multiple data sets globally together in a common knowledge graph and then put an AI search engine on top of it” [59-60].
MAJOR DISCUSSION POINT
Knowledge graphs, Data Commons and open‑source solutions
Argument 3
Open‑sourcing prevents single‑point ownership and enables local governance (Prem Ramaswami)
EXPLANATION
Prem argues that by open‑sourcing the stack, no single entity can monopolise the data, and organisations can manage their own data locally. This decentralisation enhances trust and adaptability.
EVIDENCE
He mentions that open-sourcing “prevents single-point ownership” and that the UN uses Data Commons as a backend, illustrating distributed governance [61-64].
MAJOR DISCUSSION POINT
Knowledge graphs, Data Commons and open‑source solutions
Argument 4
Knowledge graphs can ground LLMs, fill gaps and improve answer accuracy (Prem Ramaswami)
EXPLANATION
Prem proposes that a knowledge graph provides factual grounding for large language models, allowing them to fill missing information and generate more accurate responses. The graph acts as a factual substrate for AI reasoning.
EVIDENCE
He explains that a knowledge graph can be used “to ground it in those facts” and then leverage LLM intelligence to fill gaps, improving answer quality [112-118].
MAJOR DISCUSSION POINT
Knowledge graphs, Data Commons and open‑source solutions
AGREED WITH
Shalini Kapoor, Ashish Srivastava
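The grounding pattern Prem describes can be sketched simply: retrieve verified facts from the knowledge graph and place them in the prompt, so the model reasons over known statistics rather than recalled ones. The graph contents and the `call_llm` stand-in below are hypothetical, not Data Commons' actual API:

```python
# Minimal grounding sketch: a toy knowledge graph maps entities to verified
# facts; those facts are injected into the prompt before generation so the
# LLM's fluency fills the gaps while the numbers stay anchored.
KG = {
    "Pune": {"population": "7.2 million", "literacy_rate": "86%"},  # illustrative figures
}

def grounded_prompt(entity: str, question: str) -> str:
    facts = KG.get(entity, {})
    fact_lines = "\n".join(f"- {p}: {o}" for p, o in sorted(facts.items()))
    return (
        f"Known facts about {entity}:\n{fact_lines}\n\n"
        f"Using only the facts above where they apply, answer: {question}"
    )

def answer(entity: str, question: str, call_llm) -> str:
    """call_llm is any hypothetical text-generation callable."""
    return call_llm(grounded_prompt(entity, question))
```

In a real deployment the dict lookup would be replaced by a query to the knowledge graph's API, but the division of labour is the same: the graph supplies facts, the model supplies language and inference.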
Argument 5
Overlaying a company’s own data with the commons creates network effects for richer analysis (Prem Ramaswami)
EXPLANATION
Prem illustrates that when an organisation uploads its own dataset to Data Commons, it automatically integrates with thousands of existing datasets, creating synergistic insights and reducing the need for multiple data transformations.
EVIDENCE
He describes a scenario where a chain store adds its sales data and instantly “overlays with the 50,000 data sets” already in Data Commons, enabling broader analysis [304-317].
MAJOR DISCUSSION POINT
Knowledge graphs, Data Commons and open‑source solutions
Argument 6
Imperfect AI can still be statistically safer than human‑only decisions (Prem Ramaswami)
EXPLANATION
Prem compares AI‑driven systems to human performance, noting that despite imperfections, AI can reduce overall risk compared to human‑only approaches, as illustrated by accident statistics.
EVIDENCE
He cites that “the 30,000 deaths from car accidents” in the U.S. are higher than AI-related accidents, suggesting AI is statistically safer [144-147].
MAJOR DISCUSSION POINT
Trust, stability and benchmarking of AI outputs
Argument 7
Decision support for small business location planning using Data Commons (Prem Ramaswami)
EXPLANATION
Prem provides an example where a small business owner can use Data Commons to evaluate factors such as mobility, traffic, demographics, and income to decide where to open a new shop, thereby de‑risking the investment decision.
EVIDENCE
He narrates a scenario of an MSME owner needing data on “mobility, traffic, demographics, affordability” and how Data Commons can provide that insight [289-301].
MAJOR DISCUSSION POINT
Practical use cases & applications
Ashish Srivastava
4 arguments · 151 words per minute · 1240 words · 491 seconds
Argument 1
LLMs struggle with domain vocabularies; glossaries/knowledge graphs improve performance (Ashish Srivastava)
EXPLANATION
Ashish observes that large language models perform well on general language but falter on domain‑specific terminology. Introducing glossaries or knowledge graphs helps bridge this gap and improves translation and understanding.
EVIDENCE
He explains that “LLMs are becoming increasingly good… but the moment they hit any domain-specific vocabulary, that’s when they start failing” and that they solved it by “using a glossary combined with the LLM” [102-106].
MAJOR DISCUSSION POINT
Contextualisation & domain‑specific glossaries
AGREED WITH
Shalini Kapoor, Prem Ramaswami
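One common way to combine a glossary with an LLM, consistent with what Ashish describes, is to pin domain terms to their approved renderings: replace them with placeholders before the model runs, then restore the glossary translation afterwards. The glossary entry and the `translate` stand-in below are illustrative assumptions, not the panel's actual system:

```python
# Sketch of the glossary-plus-LLM pattern: protected terms bypass the model's
# free translation so domain vocabulary is rendered consistently.
GLOSSARY = {"drip irrigation": "thibak sinchan"}  # hypothetical Marathi gloss

def translate_with_glossary(text: str, translate) -> str:
    """translate is any hypothetical LLM translation callable."""
    placeholders = {}
    for i, (term, gloss) in enumerate(GLOSSARY.items()):
        token = f"__TERM{i}__"
        if term in text:
            text = text.replace(term, token)
            placeholders[token] = gloss
    out = translate(text)  # model translates the surrounding language freely
    for token, gloss in placeholders.items():
        out = out.replace(token, gloss)
    return out
```

At the scale Shalini mentions (a 5,000-term glossary), the naive substitution loop would be replaced by tokenised matching, but the principle is identical: the model handles general language, the glossary handles the domain.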
Argument 2
AI should be a tool that augments human solutions, not the sole answer (Ashish Srivastava)
EXPLANATION
Ashish stresses that AI models constitute only a small portion of a solution and must be combined with human oversight, guardrails, and risk assessment. AI is a supplement, not a replacement for human judgment.
EVIDENCE
He notes that “LLMs or AI models are the solution… they are only one of the inputs to the solution” and that guardrails and human-in-the-loop are necessary [125-130].
MAJOR DISCUSSION POINT
Contextualisation & domain‑specific glossaries
Argument 3
Human‑in‑the‑loop guardrails and risk assessment are required for reliable AI (Ashish Srivastava)
EXPLANATION
Ashish argues that to ensure trustworthy AI outputs, systems must incorporate human oversight, guardrails, and systematic risk assessments. This mitigates the variability and potential errors of AI models.
EVIDENCE
He references the need for “guardrails, human in the loop, risk assessment” as essential tools for reliable AI [125-130].
MAJOR DISCUSSION POINT
Trust, stability and benchmarking of AI outputs
AGREED WITH
Shalini Kapoor, Rohit Bardawaj
Argument 4
Education, health and inclusion solutions powered by AI and data (Ashish Srivastava)
EXPLANATION
Ashish describes projects that leverage AI for social sectors such as women and child health, education, and inclusion of marginalized groups. These initiatives aim to improve outcomes by providing data‑driven decision support.
EVIDENCE
He mentions a decade of work on “AI for social problems or digital, like women in child health” and later discusses work on education, health, and inclusion through AI [95-98][250-257].
MAJOR DISCUSSION POINT
Practical use cases & applications
Audience
1 argument · 172 words per minute · 200 words · 69 seconds
Argument 1
High‑quality data collection is costly; platforms must explore public‑private pricing mechanisms (Audience)
EXPLANATION
An audience member raises concerns about the sustainability of data platforms, noting that collecting and maintaining high‑quality data is expensive and may require mixed public‑private financing models.
EVIDENCE
The audience asks about “business models of these platforms” and whether they need “publicly paid or whatever models” given the cost of high-quality data [372-376].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 discusses the necessity of clear incentive models and sustainable financing for high‑quality data, while S15 highlights public infrastructure investments that can complement private funding.
MAJOR DISCUSSION POINT
Business models & sustainability of data platforms
AGREED WITH
Shalini Kapoor, Rohit Bardawaj
Speaker 1
1 argument · 136 words per minute · 23 words · 10 seconds
Argument 1
Request for guidance on launching a Data Commons instance (Speaker 1)
EXPLANATION
Speaker 1 asks the panel for practical advice on how an organization can set up its own Data Commons instance, indicating interest in adopting the discussed technology.
EVIDENCE
The speaker asks, “Tell us a bit more about, like if suppose someone wants to put up a Data Commons instance, how can they get started?” [303].
MAJOR DISCUSSION POINT
Business models & sustainability of data platforms
Agreements
Agreement Points
A shared understanding and technical framework for AI‑ready data is essential, including cleaning, linking, safe and trusted handling, machine‑readable catalogs, metadata and context files.
Speakers: Shalini Kapoor, Rohit Bardawaj, Prem Ramaswami
AI‑ready data must be cleaned, linked, safe, trusted and interoperable (Shalini Kapoor)
Institutions need a shared definition and framework for AI readiness (Rohit Bardawaj)
AI‑ready data requires machine‑readable catalogs, metadata and context files (Rohit Bardawaj)
Data Commons offers an open‑source, federated stack that lets organisations govern data locally (Prem Ramaswami)
Knowledge graphs can ground LLMs, fill gaps and improve answer accuracy (Prem Ramaswami)
All three speakers stress that data must be prepared through cleaning, linking and standardisation, and that a common, machine-readable definition (catalogues, metadata, context files) is needed to make data trustworthy and usable by AI systems [17][20-22][184-205][55-58][112-118].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the ‘Data first in the AI era’ framework that calls for machine-readable catalogs, metadata and safe handling to enable trustworthy AI [S48]; similar recommendations appear in the Foundation of AI Democratizing Compute Data Infrastructure report emphasizing democratized data infrastructure [S49]; and the adoption barriers identified for agentic AI stress data readiness and governance gaps [S50]; multilayered data governance guidance also supports technical standards for AI-ready data [S54].
Governance, not just technology, is the primary hurdle for integrating fragmented and alternative data sources; a federated stewardship model and clear policy frameworks are required.
Speakers: Shalini Kapoor, Rohit Bardawaj, Prem Ramaswami
Technology alone cannot solve data silos without accompanying governance (Shalini Kapoor) Integrating alternative data is primarily a governance challenge, not just a technical one (Rohit Bardawaj) A data steward and federated model are needed to govern alternative data sources (Rohit Bardawaj) Data Commons offers an open‑source, federated stack that lets organisations govern data locally (Prem Ramaswami)
The panel agrees that policy and governance structures (federated model, data steward) must precede technical solutions for data interoperability [149-152][160-170][177-184][55-64].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for federated stewardship mirrors India’s AI Leap policy which prioritises clear governance frameworks for AI diffusion [S42]; the debate on the scope of government involvement in AI governance underscores the centrality of policy coordination [S43]; multilayered governance approaches for emerging technologies highlight governance as the key challenge over technology [S54]; and analyses of digital governance fragmentation call for unified policy to integrate fragmented data sources [S61][S62][S63].
Trustworthiness of AI outputs requires benchmarking, consistency checks and human‑in‑the‑loop guardrails.
Speakers: Shalini Kapoor, Rohit Bardawaj, Ashish Srivastava
Developing benchmarks to measure answer stability across LLMs and users (Shalini Kapoor) Identical prompts can yield different analyses; consistency is a trust issue (Rohit Bardawaj) Human‑in‑the‑loop guardrails and risk assessment are required for reliable AI (Ashish Srivastava)
All three highlight the need for systematic evaluation (benchmarks) and safeguards to ensure reliable AI results [84-88][80-82][125-130].
POLICY CONTEXT (KNOWLEDGE BASE)
Professional standards for AI development stress benchmarking and human-in-the-loop oversight as essential safeguards [S45]; safe and responsible AI pathways describe guardrails and risk-assessment tools that operationalise these principles [S57]; calls for standardisation of trust verification highlight the need for consistent safety policies across the industry [S58]; and broader trustworthiness frameworks emphasise reliability, accessibility and human oversight [S59][S56].
Domain‑specific glossaries or knowledge graphs are needed to contextualise data and improve LLM performance.
Speakers: Shalini Kapoor, Ashish Srivastava, Prem Ramaswami
Example of a 5,000‑term Marathi glossary for agricultural AI use‑cases (Shalini Kapoor) LLMs struggle with domain vocabularies; glossaries/knowledge graphs improve performance (Ashish Srivastava) Knowledge graphs can ground LLMs, fill gaps and improve answer accuracy (Prem Ramaswami)
The speakers concur that adding structured domain knowledge (glossaries, knowledge graphs) bridges the gap between raw data and LLM understanding [84-86][102-106][112-118].
A sustainable data economy requires clear incentives, value creation and appropriate pricing models for public and commercial use.
Speakers: Shalini Kapoor, Audience, Rohit Bardawaj
Sustainable data economy needs incentives, clear value, and exchangeability for contributors (Shalini Kapoor) High‑quality data collection is costly; platforms must explore public‑private pricing mechanisms (Audience) NSO is publicly funded; commercial use may be charged under a policy (Rohit Bardawaj)
All three acknowledge that financing high-quality data and defining incentive structures (GIVE model, public-private mix, commercial licensing) are essential for a viable data market [391-399][372-376][380-388].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses of AI-driven economic growth argue that targeted incentives and pricing models are needed to harness AI for development and reduce disparities [S40]; the role of free data flows in fostering economic development underscores the importance of pricing mechanisms for public and commercial data use [S55]; and the ‘Data first’ framework stresses collective benefit and value creation in a data-centric economy [S48].
Similar Viewpoints
Both stress that institutions must define clear standards and processes to make data trustworthy and usable by AI [17][20-22][33-46].
Speakers: Shalini Kapoor, Rohit Bardawaj
AI‑ready data must be cleaned, linked, safe, trusted and interoperable (Shalini Kapoor) Institutions need a shared definition and framework for AI readiness (Rohit Bardawaj)
Both view AI as an augmenting tool that, despite imperfections, can improve decision‑making when combined with human oversight [125-130][144-147].
Speakers: Prem Ramaswami, Ashish Srivastava
AI should be a tool that augments human solutions, not the sole answer (Ashish Srivastava) Imperfect AI can still be statistically safer than human‑only decisions (Prem Ramaswami)
Both highlight the variability of AI outputs and the necessity of safeguards to maintain trust [80-82][125-130].
Speakers: Rohit Bardawaj, Ashish Srivastava
Identical prompts can yield different analyses; consistency is a trust issue (Rohit Bardawaj) Human‑in‑the‑loop guardrails and risk assessment are required for reliable AI (Ashish Srivastava)
Both advocate a federated, decentralized approach to data stewardship before technical deployment [181-184][55-64].
Speakers: Prem Ramaswami, Rohit Bardawaj
A data steward and federated model are needed to govern alternative data sources (Rohit Bardawaj) Data Commons offers an open‑source, federated stack that lets organisations govern data locally (Prem Ramaswami)
Unexpected Consensus
Both a statistician (Rohit) and a solution‑builder (Ashish) agree that AI output variability is a critical trust issue requiring guardrails, despite their different professional backgrounds.
Speakers: Rohit Bardawaj, Ashish Srivastava
Identical prompts can yield different analyses; consistency is a trust issue (Rohit Bardawaj) Human‑in‑the‑loop guardrails and risk assessment are required for reliable AI (Ashish Srivastava)
The convergence of a data-centric researcher and a practitioner on the need for human oversight and consistency checks was not anticipated given their distinct roles [80-82][125-130].
Overall Assessment

The panel shows strong consensus on five pillars: (1) a common, technically detailed definition of AI‑ready data; (2) governance‑first, federated stewardship of data; (3) the necessity of benchmarks and human guardrails for trustworthy AI; (4) the role of domain‑specific glossaries/knowledge graphs; and (5) the need for incentive‑based data economy models.

High consensus across technical, policy and economic dimensions, indicating that future work should prioritize coordinated standards, federated governance structures, and sustainable financing mechanisms to unlock AI‑driven development.

Differences
Different Viewpoints
Characterisation of data collection approach (top‑down vs bottom‑up)
Speakers: Prem Ramaswami, Shalini Kapoor
Prem states that the data collection by the Ministry of Statistics is bottom-up [269-271] Shalini initially describes the data collection as top-down before being corrected [272-276]
Prem describes the government data pipeline as originating from the field (bottom-up), whereas Shalini first frames it as a top-down process, indicating a mismatch in how the flow of statistical data is perceived [269-276].
POLICY CONTEXT (KNOWLEDGE BASE)
A French panel on digital governance advocated an inverse, community-driven (bottom-up) approach to policy design, contrasting with top-down models [S52]; the Ministry of Statistics example illustrates challenges of a top-down data gathering strategy [S53]; and multilayered data-governance literature discusses balancing top-down standards with bottom-up participation [S54].
Extent to which AI can replace or supplement human decision‑making
Speakers: Prem Ramaswami, Ashish Srivastava
Prem argues that, despite imperfections, AI can be statistically safer than human-only decisions and can be used to de-risk choices [144-147] Ashish stresses that AI models constitute only 10-15% of a solution, requiring guardrails, human-in-the-loop and risk assessment, and should not be treated as the sole answer [125-130]
Prem sees AI as a tool that can, in many cases, outperform human judgment, while Ashish cautions that AI should remain a minor component of solutions, emphasizing the need for extensive human oversight and guardrails [144-147][125-130].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies in emergency medicine show AI can support but not replace clinicians, highlighting augmentation rather than substitution [S38]; broader workplace research confirms AI complements human work rather than displaces it [S39]; philosophical discussions on AI’s existential challenge to human expertise provide context on concerns about replacement [S41]; and evidence that AI can outperform humans in specific tasks (e.g., debate persuasiveness) adds nuance to the replacement debate [S44].
Preferred technical architecture for making data AI‑ready
Speakers: Rohit Bardawaj, Prem Ramaswami
Rohit outlines a technical stack centred on machine-readable catalogs, rich metadata and context files, plus standardised codes and dimensions [184-221] Prem promotes an open-source, federated knowledge-graph stack with an AI search layer that aggregates global datasets while allowing local governance [55-64][59-60]
Rohit focuses on cataloguing, metadata and context files as the core of AI-readiness, whereas Prem advocates a knowledge-graph-based, federated platform as the primary solution, reflecting divergent technical priorities [184-221][55-64].
POLICY CONTEXT (KNOWLEDGE BASE)
Recommendations for democratized compute and data infrastructure aim to avoid new dependencies and support interoperable AI-ready architectures [S49]; and the identified governance and data-readiness gaps for agentic AI stress the need for scalable, standards-based technical solutions [S50].
Unexpected Differences
Differing views on AI’s capacity to outperform human decision‑making
Speakers: Prem Ramaswami, Ashish Srivastava
Prem claims AI can be statistically safer than human-only decisions, citing accident statistics [144-147] Ashish warns that AI is only a small part of a solution and must be coupled with extensive human oversight and guardrails [125-130]
Given Prem’s background in large-scale data platforms, his confidence in AI’s safety is stronger than Ashish’s cautious stance, which is unexpected given both operate in AI-focused environments [144-147][125-130].
POLICY CONTEXT (KNOWLEDGE BASE)
The same body of evidence that AI can augment human decisions in healthcare and work contexts [S38][S39] is balanced by analyses of AI’s potential to surpass human performance in certain domains, raising questions about the future role of expertise [S41][S44].
Contrasting perception of statistical data flow (top‑down vs bottom‑up)
Speakers: Prem Ramaswami, Shalini Kapoor
Prem describes the data pipeline as bottom-up, originating from field-level collection [269-271] Shalini initially frames it as a top-down process before being corrected [272-276]
The mismatch in describing the direction of data flow was not anticipated, revealing differing mental models of how governmental statistics are generated [269-276].
POLICY CONTEXT (KNOWLEDGE BASE)
The inverse, bottom-up governance approach advocated in digital policy discussions contrasts with traditional top-down statistical data collection models, as highlighted in the French governance panel [S52] and the Ministry of Statistics case study [S53]; multilayered governance frameworks also address this tension [S54].
Overall Assessment

The panel shows moderate disagreement centred on technical implementation choices (catalog vs knowledge‑graph) and the role of AI relative to human decision‑making, while there is broad consensus on the need for governance frameworks, trust, and federated stewardship. The disagreements are substantive but not polarising, indicating that collaborative standard‑setting and pilot projects could reconcile the differing viewpoints.

Moderate – differing technical preferences and philosophical stances on AI’s authority, but shared commitment to governance, trust and open‑source solutions, suggesting that coordinated policy and technical work can bridge gaps.

Partial Agreements
Both concur that institutions must adopt a common framework to ensure data is trustworthy, safe and accessible, even though Rohit stresses the need for a formal definition first [31-32][33-46].
Speakers: Shalini Kapoor, Rohit Bardawaj
Shalini asks institutions to make data trusted, safe and publicly available [31-32] Rohit calls for a shared, agreed definition and framework for AI-readiness [33-46]
Both recognise variability in AI outputs as a trust problem and agree on the necessity of benchmarking to improve reliability [80-82][84-88].
Speakers: Rohit Bardawaj, Shalini Kapoor
Rohit cites research showing identical prompts can yield divergent analyses, highlighting trust issues [80-82] Shalini mentions ongoing work to create benchmarks that test answer stability across LLMs and users [84-88]
Both support a federated, decentralized governance model for data, differing only in the concrete implementation details [181-184][55-64].
Speakers: Rohit Bardawaj, Prem Ramaswami
Rohit proposes a federated stewardship model with a data steward to govern alternative data sources [181-184] Prem describes an open-source, federated Data Commons stack that lets each organisation govern its data locally [55-64]
Takeaways
Key takeaways
AI‑ready data must be cleaned, linked, safe, trusted, interoperable and presented in machine‑readable formats (catalogs, metadata, context files). A shared, agreed‑upon definition and framework for AI‑readiness is needed; institutions like MOSB/NSO should lead its creation. Federated stewardship and local governance are essential to avoid single‑point ownership while enabling data sharing. Open‑source stacks such as Google Data Commons can aggregate diverse datasets into a knowledge graph with an AI search layer, creating network effects when organisations overlay their own data. Domain‑specific glossaries or knowledge graphs are required to contextualise LLM outputs, especially for local languages and sector vocabularies. Trust and stability of AI answers are concerns; benchmarking across LLMs and human‑in‑the‑loop guardrails are being explored. AI should be treated as a tool that augments human decision‑making, not as a complete solution. Governance challenges (policy, stewardship, incentives) outweigh pure technical challenges when integrating alternative data sources. Sustainable business models need clear incentives, value propositions and exchangeability for data contributors; public funding covers research use, commercial use may be monetised.
Resolutions and action items
Rohit Bardawaj will draft a slide deck and an agreed‑upon AI‑readiness framework (core + aspirational components). Create machine‑readable data catalogs (JSON/XML) with metadata and context files for public datasets. Standardise codes, dimensions and create business glossaries/knowledge graphs for domain vocabularies. Prem Ramaswami will continue development of contextualisation features (glossary‑grounded LLMs) within Data Commons and provide guidance for setting up a Data Commons instance (20‑minute guide). NSO will formalise a data‑stewardship role and publish policies for commercial access to public data. Develop a benchmark suite to measure answer stability across LLMs and repeated queries (as mentioned by Shalini). Promote the “data boarding pass” concept to onboard AI‑ready datasets for B2B consumption. Ashish Srivastava’s lab will prototype reusable policy artifacts (DPIs/DPGs) for automated data‑governance in solutions.
Unresolved issues
No consensus yet on a precise, industry‑wide definition of “AI‑readiness”. How to systematically integrate and govern alternative/secondary data sources beyond administrative data. Exact funding and pricing mechanisms for a sustainable data economy; how incentives will be calibrated. Scalable process for creating and maintaining domain‑specific glossaries across many languages and sectors. Implementation details for automatic enforcement of data‑use policies at the API level. How to ensure consistent LLM outputs in practice; the benchmark is still under development. Privacy and consent handling for personal or sensitive data when building AI‑ready repositories.
Suggested compromises
Adopt a hybrid approach: combine Retrieval‑Augmented Generation (RAG) with LLM capabilities rather than relying solely on one technique. Use open‑source, federated architectures (e.g., Data Commons) to balance data sovereignty with broad accessibility. Apply guardrails and human‑in‑the‑loop checks while still leveraging AI’s speed and scalability. Accept that AI outputs will be imperfect but can be statistically safer than human‑only decisions; focus on risk assessment rather than perfection. Provide both free access for research/public good and a paid tier for commercial use to fund platform maintenance.
Thought Provoking Comments
Do we have a uniform definition of what AI readiness is? People are not aware of what it takes to make data AI ready, and we need an agreed framework and a slide deck showing what AI can see versus what a human can see.
Highlights a foundational gap – the lack of a shared definition of AI‑ready data – and proposes creating a common framework, which is essential before any technical work can proceed.
Shifted the discussion from abstract problem statements to the need for standardization. It prompted subsequent speakers (Prem, Ashish) to talk about metadata, catalogs, and governance, and set the stage for Rohit’s later detailed checklist.
Speaker: Rohit Bardawaj
If we can get our data in a machine‑readable format (structured, with metadata) and put it into a knowledge graph, then layering a large language model on top gives a much better chance of answering questions correctly.
Introduces the concrete technical architecture (knowledge graph + LLM) and the principle of federated, open‑source data commons, moving the conversation from problem description to a viable solution model.
Steered the dialogue toward practical implementation, influencing Rohit’s later points about cataloging and APIs, and prompting Ashish to discuss contextualization and glossaries.
Speaker: Prem Ramaswami
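The knowledge-graph-plus-LLM pattern Prem describes can be sketched minimally: look up matching facts in a small triple store and prepend them to the prompt so the model answers from grounded data rather than from memory. Everything below (the sample triples, `retrieve_facts`, `build_grounded_prompt`) is an illustrative assumption, not the Data Commons implementation.

```python
# Hypothetical sketch: grounding a question in a tiny knowledge graph of
# (subject, predicate, object) triples before handing it to an LLM.
# The triples and helper names are illustrative, not from the session.

TRIPLES = [
    ("Maharashtra", "literacy_rate_2011", "82.3%"),
    ("Maharashtra", "capital", "Mumbai"),
    ("Kerala", "literacy_rate_2011", "94.0%"),
]

def retrieve_facts(question: str) -> list[str]:
    """Return facts whose subject or predicate keyword appears in the question."""
    q = question.lower()
    hits = []
    for s, p, o in TRIPLES:
        if s.lower() in q or p.replace("_", " ").split()[0] in q:
            hits.append(f"{s} {p.replace('_', ' ')}: {o}")
    return hits

def build_grounded_prompt(question: str) -> str:
    """Wrap the question with retrieved facts so the LLM is constrained to them."""
    facts = retrieve_facts(question)
    context = "\n".join(facts) if facts else "(no facts found)"
    return f"Answer using only these facts:\n{context}\n\nQuestion: {question}"

prompt = build_grounded_prompt("What is the literacy rate of Kerala?")
```

In a real deployment the triple lookup would be replaced by a query against the Data Commons knowledge graph, with the same effect: the LLM is asked to synthesise an answer from retrieved facts instead of generating one unaided.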
Data is not a transaction; it is a journey. We need interoperable, contextual, and verifiable data. LLMs fail on domain‑specific vocabularies, so we should combine a glossary with the LLM to improve translation and understanding.
Broadens the perspective from static datasets to dynamic data flows and stresses the importance of context and verification, while offering a tangible remedy (glossary) for LLM limitations.
Deepened the conversation about data quality and contextualization, leading Prem to elaborate on knowledge graphs as factual bases and prompting Rohit to discuss metadata and business glossaries.
Speaker: Ashish Srivastava
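Ashish's glossary remedy can be shown in miniature: select only the glossary entries that occur in the incoming text and prepend them to the prompt, giving the LLM the domain vocabulary it would otherwise misread. The glossary contents and function names below are assumptions for illustration, not the panel's actual artefacts.

```python
# Illustrative sketch (not the panel's implementation): prepending a small
# domain glossary to a prompt so an LLM resolves sector-specific terms.

GLOSSARY = {  # e.g., a fragment of a hypothetical agricultural glossary
    "kharif": "crop season sown with the monsoon (roughly June-October)",
    "rabi": "crop season sown in winter (roughly November-April)",
}

def glossary_for(text: str) -> dict[str, str]:
    """Pick only the glossary entries that actually occur in the text."""
    t = text.lower()
    return {term: gloss for term, gloss in GLOSSARY.items() if term in t}

def contextualised_prompt(question: str) -> str:
    """Prefix the question with the relevant glossary entries, if any."""
    entries = glossary_for(question)
    lines = [f"- {term}: {gloss}" for term, gloss in entries.items()]
    header = "Domain glossary:\n" + "\n".join(lines) if lines else ""
    return f"{header}\n\nQuestion: {question}".strip()

p = contextualised_prompt("Compare kharif and rabi sowing data for 2023.")
```

The same pattern scales to the 5,000-term Marathi glossary Shalini mentioned: the lookup stays cheap because only terms present in the query are injected into the prompt.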
We are seeing that the same prompt to an LLM with the same dataset can give two different analyses. We need benchmarks to measure stability of answers across models and repetitions.
Raises the critical issue of reproducibility and trust in AI outputs, calling for systematic benchmarking—a step toward responsible AI deployment.
Prompted Shalini to mention ongoing benchmark work (Amul AI, Bharat Vistar) and reinforced the need for evaluation frameworks throughout the panel.
Speaker: Rohit Bardawaj
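The stability benchmark Rohit calls for can be prototyped simply: issue the same prompt several times, normalise the answers, and report how often the modal answer recurs. `ask_model` below is a canned stand-in for a real LLM call, and the normalisation rules are assumptions, not the benchmark under development.

```python
# A minimal, hypothetical stability check in the spirit of the benchmark
# discussed: query a model several times with the same prompt and measure
# how often the normalised answers agree.
from collections import Counter

def ask_model(prompt: str, run: int) -> str:
    # Stand-in: a real implementation would call an LLM API here.
    canned = ["82.3%", "82.3%", "82.3 percent", "82.3%"]
    return canned[run % len(canned)]

def normalise(answer: str) -> str:
    """Collapse trivial surface variation before comparing answers."""
    return answer.lower().replace("percent", "%").replace(" ", "")

def stability(prompt: str, runs: int = 4) -> float:
    """Fraction of runs that return the modal (most common) answer."""
    answers = [normalise(ask_model(prompt, i)) for i in range(runs)]
    _, count = Counter(answers).most_common(1)[0]
    return count / runs

score = stability("What was the 2011 literacy rate of Maharashtra?")
```

A production benchmark would run this across multiple LLMs and many prompts, but the core metric, agreement rate under repetition, is the same.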
LLMs are only 10-15% of what you need for a solution; the rest is guardrails, human‑in‑the‑loop, risk assessment. Probabilistic models will never be perfectly consistent, and we must focus on external controls, not just the model itself.
Challenges the hype around LLMs by emphasizing their limited role and the necessity of governance, risk, and human oversight.
Shifted the tone from optimism to caution, influencing Prem’s later remarks about using AI as a tool and reinforcing Rohit’s governance emphasis.
Speaker: Ashish Srivastava
We need a catalog of data in a machine‑readable JSON (or XML) file, with metadata, a context file, a business glossary, standardized codes, and structured storage – otherwise AI cannot reliably consume it.
Provides a concrete, step‑by‑step checklist for making data AI‑ready, translating abstract concepts into actionable items.
Served as a practical roadmap that other participants referenced (e.g., Prem’s knowledge graph, Ashish’s policy engine), and set up the later discussion of the “data boarding pass” concept.
Speaker: Rohit Bardawaj
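One way (purely illustrative) to express Rohit's checklist as a machine-readable artefact is a JSON catalog entry bundling metadata, a context-file pointer, a glossary reference and standardised codes. The field names and dataset identifier below are assumptions, not an NSO standard.

```python
# Hypothetical JSON catalog entry reflecting the checklist: metadata,
# context file, business glossary, standardised codes, structured access.
import json

catalog_entry = {
    "dataset_id": "nso-hces-2023",  # hypothetical identifier
    "title": "Household Consumption Expenditure Survey 2023",
    "metadata": {
        "frequency": "annual",
        "unit": "INR per capita per month",
        "coverage": "all-India, state-level",
    },
    "context_file": "hces-2023-context.md",      # methodology, caveats
    "glossary": "nso-business-glossary.json",    # domain vocabulary
    "codes": {"state": "LGD", "sector": "rural/urban"},
    "distribution": [{"format": "csv", "api": "/v1/datasets/nso-hces-2023"}],
}

serialised = json.dumps(catalog_entry, indent=2)
```

Publishing such entries for every public dataset is what would let an AI agent discover, interpret and consume the data without human mediation, which is the point of the "data boarding pass" idea discussed next.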
The ‘data boarding pass’ – a physical (or digital) checklist that certifies data as AI‑ready, enabling B2B players, policymakers, and researchers to onboard and use the data instantly.
Introduces an innovative metaphor and operational tool for certifying and sharing AI‑ready data, bridging governance and usability.
Provided a tangible product concept that tied together earlier discussions on standards, benchmarks, and federated access, and gave the audience a concrete takeaway.
Speaker: Shalini Kapoor
Our GIVE framework – Guaranteed trust, Incentive, Value, Exchangeability – defines the economics of data sharing: why data owners should contribute and how value can be monetized while ensuring exchangeability.
Addresses the often‑overlooked business model aspect, linking technical readiness to sustainable incentives and market mechanisms.
Answered the audience’s question on business models, linked back to earlier points about trust and incentives, and rounded out the discussion by connecting technical, governance, and economic layers.
Speaker: Shalini Kapoor
Overall Assessment

The discussion evolved from a broad problem statement about fragmented data silos to a multi‑layered roadmap for AI‑ready data. Key turning points were triggered by comments that exposed foundational gaps (Rohit’s call for a shared definition of AI readiness), proposed concrete architectures (Prem’s knowledge‑graph + LLM model), highlighted practical challenges (Ashish’s journey metaphor and glossary solution), and demanded accountability (Rohit’s benchmark concern). These insights prompted participants to converge on a common language—metadata, catalogs, federated governance—and to envision operational tools such as the data boarding pass and the GIVE economic framework. Collectively, the highlighted comments steered the panel from abstract concerns to actionable strategies, balancing technical possibilities with governance, trust, and sustainability.

Follow-up Questions
Is there a uniform definition or agreed‑upon framework for “AI‑readiness” of data?
Rohit highlighted uncertainty about whether the ecosystem has a common definition of AI readiness, indicating the need to establish a shared standard.
Speaker: Rohit Bardawaj
How can a shared AI‑readiness framework (core and aspirational components) be created and adopted across institutions?
He proposed developing a collaborative framework to define AI‑ready data, suggesting a coordinated effort among stakeholders.
Speaker: Rohit Bardawaj
How can contextualization and domain‑specific glossaries be integrated into Google Data Commons to improve AI responses?
Prem was asked to explain adding domain glossaries to Data Commons, pointing to the need for methods to embed contextual knowledge.
Speaker: Prem Ramaswami
How can alternative or secondary data (beyond administrative sources) be incorporated into the AI‑ready data framework, and what kind of data economy could emerge?
Shalini queried the feasibility of extending the framework to non‑administrative data and its economic implications.
Speaker: Shalini Kapoor (to Rohit Bardawaj)
What sustainable business models (public funding, commercial licensing, incentives) can support the maintenance and growth of high‑quality data platforms?
The audience asked about financing mechanisms for data platforms, prompting discussion on public vs. commercial models.
Speaker: Audience member (addressed to Rohit and Shalini)
How can AI‑ready data be used to detect and resolve data gaps or disconnections in infrastructure projects (e.g., road construction, tender processes)?
The participant raised a practical problem of project delays due to data disconnects, seeking solutions via AI‑ready data.
Speaker: Audience member (addressed to Shalini)
Who should be accountable for data quality and governance in solution pipelines that combine multiple data sources?
Ashish identified accountability for data as a key challenge, indicating a need for clear responsibility mechanisms.
Speaker: Ashish Srivastava
Can a benchmark be created to measure answer stability across different LLMs and repeated queries?
She mentioned ongoing work on a benchmark to ensure consistent answers, highlighting a research gap in evaluation metrics.
Speaker: Shalini Kapoor
How can standards for AI‑ready data keep pace with the rapidly evolving AI landscape?
Prem noted that agreements made today may be obsolete in six months, underscoring the need for continual research and updates.
Speaker: Prem Ramaswami
What methods can combine knowledge graphs with LLMs to fill factual gaps and improve answer accuracy?
He discussed using knowledge graphs as factual backbones for LLMs, indicating a research direction for hybrid systems.
Speaker: Prem Ramaswami
How can reusable policy artifacts (DPIs/DPGs) be designed to enforce data governance automatically at API and policy‑engine levels?
Ashish highlighted the need for standardized, enforceable data policies to streamline compliance.
Speaker: Ashish Srivastava
What governance model should oversee a federated national data ecosystem, and who should act as the data steward?
He suggested a federated model with a designated steward (e.g., NSO) to orchestrate data sharing and governance.
Speaker: Rohit Bardawaj
What standards for metadata, context files, and machine‑readable catalogs (e.g., JSON) are needed to make data AI‑ready?
Rohit emphasized the importance of cataloging, metadata, and context files in machine‑readable formats for AI consumption.
Speaker: Rohit Bardawaj
How can the verification and trustworthiness of publicly declared survey data be ensured for AI applications?
He pointed out that many public datasets are unverified, raising the need for mechanisms to validate such data.
Speaker: Ashish Srivastava

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion Next Generation of Techies _ India AI Impact Summit

Session at a glance: Summary, keypoints, and speakers overview

Summary

Anirudh Suri opened the final summit session on next-generation tech entrepreneurship, introducing himself and three panelists: Malhar Bhide, co-founder and CTO of AI-driven biotech startup Origin Bio; Navrina Singh, founder and CEO of AI governance platform Credo AI; and Arvind Jain, founder and CEO of enterprise AI company Glean [1-8][10-13][15-16][19-21].


The discussion began by contrasting today’s AI-driven wave with the earlier consumer-internet era, with Arvind noting that each technological wave creates new startup opportunities but success still depends on identifying a strong business problem rather than chasing the hype [39-45]; he added that AI uniquely reshapes company structure, making traditional roles less fixed and allowing founders to “reinvent” organizations without a predefined blueprint [46-53].


When asked whether AI leads to leaner startups, Arvind affirmed that even a single founder can build a viable product, and that AI-assisted workflows reduce both headcount and capital requirements compared with previous waves [55-63][64-68]; Malhar explained that advances in AI have democratized knowledge, enabling his non-biologist co-founder and a five-person team to conduct wet-lab research, train models, and predict experimental outcomes, thereby lowering costs and accelerating development [80-92]; he also highlighted that rigorous research remains essential, as their AI-designed DNA sequences must meet strict scientific and regulatory standards before any therapeutic or commercial use [97-103].


Navrina emphasized that the critical new challenge is ensuring AI systems are reliable and compliant, describing how policy, governance, and risk-assessment frameworks act as a moat for companies by enforcing trustworthiness and regulatory guardrails such as HIPAA or FDA requirements [108-116][117-123]; she further argued that AI risk management has become dynamic, requiring continuous testing, benchmarking, and mitigation of issues like hallucinations across the entire AI supply chain [128-136].


On the broader market impact, Arvind expressed confidence that creative destruction will persist, noting that AI lowers the technical barrier so individual entrepreneurs can launch innovative products that eventually scale within larger firms [180-182]; Navrina added that the real threat is not AI itself but individuals who master AI tools, urging entrepreneurs to unlearn old habits and adopt rapid, AI-first development cycles [185-190].


Both Malhar and Arvind reflected on the Indian diaspora experience, saying that moving to the U.S. provides exposure to different talent pools and market dynamics, while staying in India offers unique insights into local healthcare data that can inform biotech breakthroughs [200-208][215-222].


Audience questions raised concerns about emerging AI-security threats and the perceived trade-off between governance and ROI; Arvind identified new attack vectors such as prompt injection and the need for hallucination detection, while Navrina countered that robust AI governance actually accelerates adoption and adds top-line value [240-246][249-255].


The session concluded with Anirudh urging continued two-way dialogue and wishing participants success in their entrepreneurial journeys, underscoring the summit’s aim to foster ongoing collaboration [262-268].


Keypoints

Major discussion points


AI-driven entrepreneurship is reshaping the classic “technology wave” model.


Arvind explains that while every wave (consumer internet, mobile, etc.) still requires ambition, a solid business problem, and a strong team, the AI wave uniquely alters company blueprints and roles, making the “human” function unclear and opening space for unconventional, AI-first organizations [39-46][47-53]. This translates into much leaner startups: founders can build MVPs with a single person and rely on AI to replace many traditional tasks, reducing headcount and capital needs [55-61][62-69].


AI democratizes knowledge and drives research-intensive, cross-disciplinary startups.


Malhar notes that AI has made expertise in fields like biology accessible to non-specialists, allowing a five-person team to conduct wet-lab experiments, train models from scratch, and use AI to predict experimental outcomes, thereby keeping research central to product development [80-92][97-103].


Policy, governance, and regulatory compliance have become core to AI product success.


Navrina stresses that beyond building technology, startups must ensure reliability, explainability, and adherence to sector-specific regulations (e.g., HIPAA, FDA) to create “trusted technology,” making AI governance a critical moat [108-123][128-137]. Both Malhar and Arvind confirm that their teams actively monitor safety guardrails and embed risk-assessment practices into product design [151-156][157-166].


Creative destruction remains alive, but AI lowers entry barriers and may accelerate disruption.


Arvind argues that despite the resources of big tech, innovation continues to emerge from individual entrepreneurs who can now build sophisticated products without deep engineering expertise, suggesting startups will keep driving breakthroughs [180-182]. Navrina adds that the real threat is not AI itself but “people who are so good with AI” out-competing incumbents, emphasizing rapid unlearning and experimentation [185-190].


New security challenges (e.g., prompt-injection, hallucinations) are spawning emerging AI-security fields.


An audience member raises concerns about AI misuse and hallucinations; Arvind responds that novel attack vectors like prompt injection are already appearing and that detecting and monitoring hallucinations represent fresh entrepreneurial opportunities [229-236][249-255].


Overall purpose / goal of the discussion


The panel aimed to explore how the current AI wave is transforming entrepreneurship-its business models, team structures, research emphasis, and interaction with policy-and to surface the opportunities and risks this creates for founders, investors, and regulators alike.


Overall tone


The conversation began with an upbeat, welcoming tone as the moderator introduced the panel and highlighted excitement about the AI era [1-8]. As the dialogue progressed, it shifted to a more analytical and reflective tone, dissecting structural changes, research imperatives, and regulatory complexities [39-69][108-137]. Toward the end, the tone became interactive and pragmatic, incorporating audience questions, highlighting concrete security challenges, and concluding with an encouraging, supportive note for aspiring entrepreneurs [229-255][262-268].


Speakers

Anirudh Suri – Moderator; founder of India Internet Fund (venture capital); author and podcast host of The Great Tech Game; non-resident fellow at a think-tank [S6]


Malhar Bhide – Co-founder and Chief Technology Officer of Origin Bio, a Y Combinator-backed AI-driven genetic-medicine startup [S7]


Navrina Singh – Founder and CEO of Credo AI, an AI-governance and trust-management platform; advisor to the White House on AI policy [S8]


Arvind Jain – Founder and CEO of Glean, an enterprise AI company that brings LLM capabilities to internal corporate data [S9][S11]


Audience – Various attendees who asked questions during the session (e.g., queries on AI security, AI governance, and startup challenges) [S1][S2][S3]


Additional speakers:


Rahul – 16-year-old speaker, one of the youngest at the summit, previously interviewed for a podcast; mentioned in the introduction (no further details provided).


Yash – Co-founder of Origin Bio alongside Malhar Bhide; mentioned in the discussion about the team’s background (no title beyond co-founder).



Other audience members – Various unnamed participants who contributed questions or comments during the Q&A segment.


Full session report: Comprehensive analysis and detailed insights

Anirudh Suri opened the closing session of the India AI Impact Summit by welcoming the audience, briefly outlining his role as founder of the India Internet Fund and author of The Great Tech Game, and introducing the three panelists: Malhar Bhide, co-founder and CTO of the Y Combinator-backed biotech start-up Origin Bio; Navrina Singh, founder and CEO of the AI-governance platform Credo AI, who also advises the White House on AI policy; and Arvind Jain, founder and CEO of the enterprise-AI company Glean, which integrates large-language-model capabilities with internal corporate data [1-8][10-13][15-16][19-21].


Arvind began by comparing the current AI-driven wave to the earlier consumer-internet era, arguing that every technological wave creates fresh entrepreneurial opportunities while the core ingredients for success-ambition, a compelling business problem and a strong team-remain unchanged [39-45]. He highlighted that, unlike previous waves where company blueprints (engineers, product managers, sales) were relatively fixed, AI fundamentally reshapes organisational structures and even the definition of “human” roles, allowing founders to “reinvent” their companies without a predefined template [46-53].


Suri then asked whether AI enables leaner start-ups. He cited examples of serial entrepreneurs who now need far fewer engineers and less capital to reach a minimum-viable product [55-61]. Arvind confirmed that a single founder can build a functional MVP, and that AI-assisted workflows dramatically reduce headcount and funding requirements [62-69].


Malhar illustrated how AI democratizes knowledge and permits cross-disciplinary ventures. He explained that, despite neither co-founder having a biology background, their five-person team can conduct wet-lab research, train models from scratch and use AI to predict experimental outcomes, thereby keeping costs low and accelerating development [80-92]. He added that their AI-designed DNA sequences must satisfy rigorous scientific validation and FDA-type clearance before any therapeutic or commercial use, making research the linchpin of product viability [97-103].


Navrina expanded the view to AI products more generally, arguing that trustworthiness, explainability and adherence to sector-specific regulations (e.g., HIPAA) constitute the true competitive moat; without such guardrails, even technically superior models cannot scale [108-123]. She further noted that AI risk is now dynamic-hallucinations, supply-chain testing, and continuous evaluation are required [128-136].


When Suri probed whether every AI start-up should create a dedicated risk-policy function, Malhar replied that, given their small size, the entire technical team collectively monitors AI-safety research and implements guardrails [151-156]. Arvind added that Glean relies on fact-checking, showing the full trail of sources, and compliance with existing regulations, even though the company does not maintain a separate policy team [169-174].


On macro-level market dynamics, Arvind asserted that creative destruction will persist: AI lowers the technical threshold so that even non-engineers can launch sophisticated products, and while large firms provide scale, most bold ideas actually come from a single person and innovation will continue to emerge from start-ups [180-182]. Navrina complemented this by warning that the real threat is not AI itself but individuals who master AI tools and can outpace incumbents, urging entrepreneurs to “unlearn” legacy habits and adopt rapid AI-first development cycles [185-190].


The panel also reflected on the Indian diaspora experience. Malhar highlighted that relocating to the United States fostered a risk-taking mindset and gave him insight into American market dynamics, while his Indian upbringing offered deep knowledge of local drug-discovery ecosystems, patient demographics and data collection practices [200-208]. Arvind echoed this, attributing the success of Indian founders in Silicon Valley to cultural drive, access to capital and a hunger to build great companies [215-222].


Audience questions shifted the focus to emerging security challenges. One participant asked whether a new field of AI-security would arise to address threats such as prompt-injection attacks and hallucinations [229-231]. Arvind responded affirmatively, noting that novel attack vectors are already appearing, and that detecting hallucinations and providing observability constitute a burgeoning entrepreneurial opportunity [232-236][249-255]. Navrina then emphasized that robust AI governance delivers clear ROI by accelerating third-party AI adoption, boosting productivity and contributing to top-line growth [236-246].


Finally, Suri concluded by stressing the importance of two-way dialogue throughout the summit, encouraging attendees to engage further with the panelists and wishing them success on their entrepreneurial journeys [262-268].


Key Themes


1. AI reshapes organisational blueprints while preserving core entrepreneurial principles.


2. AI democratizes expertise, enabling lean, research-intensive start-ups.


3. Governance, policy and regulatory compliance have become essential competitive moats.


4. Despite the rise of big-tech, creative destruction remains alive, driven by individuals who can harness AI tools effectively.


Session transcript: Complete transcript of the session
Anirudh Suri

Hi and welcome, a very good afternoon to all of you. Thank you for staying on. I know it's the last day of a long, productive summit, and I think maybe not the last but the second-last session, maybe the third-last session, so thank you for being here. I'm excited about this discussion that we're going to have over the course of the next about half an hour or so, 35 minutes. I'll quickly introduce myself and then I'll get our panelists to introduce themselves. We're talking, of course, about the theme of the next generation of tech entrepreneurs, tech founders, tech leaders in the world. I'm Anirudh Suri, I run a venture capital fund called India Internet Fund, and I'm also an author of a book called The Great Tech Game and a podcast by the same name that looks at the intersection of technology and geopolitics.

So I might bring in a little bit of geopolitics into our conversation, even though we’re talking mostly about tech founders. I’ll start with my left, Malhar. Today, I earlier had the opportunity to interview a very young panelist, a young speaker, Rahul. Who’s, I think, one of the youngest speakers at the summit. He’s 16. We did a podcast with him earlier in the afternoon. And so I’m especially delighted to have another young entrepreneur, college dropout. On my left, Malhar, if you can briefly introduce yourself.

Malhar Bhide

Yeah, thank you. Hi, I'm Malhar. I'm the co-founder and chief technology officer of Origin Bio. We're a Y Combinator startup that is using AI to make safer genetic medicines for diseases like cancer.

Anirudh Suri

Thanks Malhar. Navrina

Navrina Singh

Absolutely not a college dropout, not very young, but I'm the founder and CEO of Credo AI; we are an AI governance and trust management platform. For the past five years I've also been advising the White House on AI policy, and I work very closely with governments across the globe to really think about what AI guardrails should look like. So, to your point, I think there is a very strong intersection of technology and policy happening right now. Excited to be here. Thank you.

Anirudh Suri

Arvind, last but not least.

Arvind Jain

Thank you everyone, my name is Arvind, I'm the founder and CEO of Glean. Glean is an enterprise AI company, about seven years old, and think of us like Google or ChatGPT but inside your company. Glean is a place where you can go and ask any questions or give it some tasks, and it uses all of the world's knowledge just like how ChatGPT does it, but also uses all of the internal company's data and knowledge to help people with their questions or their tasks.

Anirudh Suri

Incredible. I think we have a great set of panelists across various sectors and I think various angles of the AI entrepreneurship market. We’re going to have a focus on how entrepreneurship is evolving, right, in this session. I’m sure in other sessions in this summit you’ve heard a lot about deep tech entrepreneurs. You’ve heard about probably all sorts of AI entrepreneurs. Of course, you’ve heard some of the largest AI companies take the stage, etc., including our very own Sarvam out of India. But now I want to focus on how entrepreneurship is evolving. Before I ask my first question to the panelists, can I have a quick show of hands? How many of you here are entrepreneurs?

Big chunk. Wannabe entrepreneurs? Ex-entrepreneurs? People who decided to… too much and went into the corporate world, let's say? A few. Good. So I think the biggest number is still entrepreneurs and want-to-be entrepreneurs. I think for some of us who've seen the previous waves of technology innovation, like for example, most recently the consumer internet wave, and of course there have been multiple waves prior to that. And for those of you who are not familiar with the history of technological waves, I really encourage all of you to study the previous waves, because often you get to learn a lot from how the earlier waves of technological innovation panned out, what kind of entrepreneurs, what kind of companies succeeded.

But a little bit with that historical context in mind, Arvind, I want to come to you first. Compared to the wave of the consumer internet, where we saw firms like of course the Googles and the Facebooks of the world, the social media platforms, but then also the cab-hailing platforms and a lot of marketplaces and consumer-focused platforms emerged. We've seen a lot of, and at least in India I know this is the period over the last 10-15 years where entrepreneurship has become a buzzword, a desirable profession, you can drop out of college and your parents will still be happy about it, right. Compared to that wave, how is today's wave of AI-driven entrepreneurship looking to you? Could you draw out for the audience and for us how these two waves might be similar for entrepreneurs and how they might be different?

Arvind Jain

yeah, so well first, you know, I think whenever there’s a new technology wave, it creates a lot of opportunities for new companies to get started that’s the right time for somebody to jump into the entrepreneurial journey and, you know, we’ve been through many of these in the past two or three decades, you know, starting with like you said consumer internet to mobile to social and of course now AI, and each one of these opportunities are similar in some ways like, you know, to start a company to be successful at it you have to have, like, first of all, the ambition to make, you know, to make one, to make a company. You have to have that sort of, you know, real deep desire to, you know, to be able to take the risk and have the courage to actually go start something.

You have to follow the recipe, the right recipes, which is finding, you know, the right business problem to solve, like something that actually creates a lot of business value in there. And it’s not so much about the technology trend. It’s, you know, you can use the technology trend as a way to see if you can solve that business problem better, but it always starts with the right business problem. And then, of course, like, you know, do the rest of, you know, the entrepreneurship journey, which is about, you know, building a great team, having a clear vision, working hard, and making things happen, right? So those are, like, you know, those things don’t change, like, you know, through these ways.

But I think one thing that is actually very unique about the new AI wave is this: we always had a blueprint in terms of what an organization needs to look like. When we went from consumer internet to mobile as two big technology trends, the shape of your company, how you build it, what kind of people you need to hire, those things were not actually changing that much. The blueprint was clear: you're going to hire some engineers, you're going to have product managers, some salespeople. But now with AI, everything changes. In fact, the role of the human itself is unclear, and so are the roles that need to exist. In some sense, there's more challenge for the AI entrepreneurs, but also more opportunity in not knowing the basics.

You can actually start and chart a journey without actually knowing how to start a company. Because reinventing yourself, thinking AI first, can actually help you build an organization, which is very unconventional. And maybe that is what is going to create big success for you in the future.

Anirudh Suri

Do we expect, Arvind, do we expect now startups to be even leaner given AI? Because I'll give you a couple of examples. I have friends who are second-time, third-time entrepreneurs who successfully started and exited startups in the consumer internet wave. And now when they're starting up in the AI era, they seem to have a much lower number of team members to start with to get to that minimum viable product. They seem to have much fewer people doing the coding for them. Many, I would say, have a significantly smaller requirement even for capital as a result, because of course in the early days sometimes employee costs or early employee costs are quite high. So do you expect this to continue, leaner startups?

Arvind Jain

Absolutely. In terms of the product and how far you can go with a very, very lean team, in fact a team of one person, it's incredible. You can actually build a lot with, you know, with that low cost. So certainly, that is, I would say, you know, but it's not like, ultimately when you build a company with scale, people actually are your asset. And at some point you're going to start growing, but you can do a lot before. And I think one of the reasons why companies will be more lean now is because it's always, you know, on an entrepreneur's mind, you're always thinking about, especially when you don't have enough resources, enough funding, you're always thinking about any piece of work that needs to happen.

You know, can the machine do it? Can AI do it? And so that sort of like that mindset of like, you know, like, hey, you know, I’m going to actually use AI to do most of my, most of the work that needs to get done in this company. It is actually, that is what is going to actually create significant efficiencies and a way to defeat like, you know, the incumbents.

Anirudh Suri

Great. Let me move to you, Malhar. We had the opportunity to speak a little bit prior to the session. Thank you very much. And you’ve, of course, started very recently. So you started technically your first startup, I’m assuming, first startup in the AI age. And so talk to us about how you are both viewing entrepreneurship today. Is it any different than maybe, I’m sure you might have read some books or met entrepreneurs who started earlier, venture capitalists who started companies earlier in the earlier waves. What’s your sense on the question I just asked Arvind as well? And secondly, how are you being AI first in the company that you’ve started? How are you leveraging AI, not just in the product itself, but even in the sort of organization, so to say?

Malhar Bhide

Yeah, I think one thing is that because of how good AI has gotten, knowledge has gotten a lot more democratized. And so there's less of an excuse to not be able to work in different fields in this sort of cross-disciplinary nature. For context, my co-founder Yash and I, we've never studied biology. Our team is five people. Only one person from our team has graduated, and only one person has studied biology, and they're not the same person either. So I think AI has actually allowed us to study a lot more, read a lot more papers, reach out to more scientists, learn from them, reach out to more customers, understand what exactly they want.

And we sort of use AI throughout. We do fundamental research. We train our own models from scratch. We do research in the wet lab. We're starting a lot of wet-lab work where we use AI to actually be able to predict the results of these wet-lab experiments, so that we can be cost-efficient and ensure that we can work with a very limited budget. I think in some sense what hasn't changed from when we grew up and we were watching movies like The Social Network is that even then it felt like the people who succeeded the most were people who didn't want permission. The example being Mark Zuckerberg. Even if you look at Jeff Bezos starting Amazon, it wasn't like Barnes & Noble that started a website to sell books.

I think with AI that sentiment hasn’t changed, but it’s probably easier to materialize and we’ve definitely gotten the benefit of that.

Anirudh Suri

Are you finding that research is more and more critical to your work compared to maybe earlier waves of startups?

Malhar Bhide

Yeah, I think it's definitely critical. We're working on using these AI models to actually design novel DNA sequences. They act as switches. This involves training the models from scratch, working with… public data, starting and warranting experiments to actually get our own proprietary private data. So the entire thing actually hinges on the product, or our research, producing an output that is biologically and scientifically viable. Even when we want to sell to biotech companies and pharma companies, or if we ever want to pursue our own therapeutic program, there are very rigorous requirements for the thing to actually work. An example, of course, being the FDA and needing clearance throughout; but even actually starting the clinical trial process is a lot of work and has a requirement of things actually working.

Anirudh Suri

I'm going to keep coming back to this theme of research for a reason, but before I do that, Navrina, I want to bring you in. We, of course, started off the conversation talking about the intersection of AI with, I mentioned geopolitics, you mentioned policy, you said you're working closely with folks in D.C., in the White House, and otherwise. This intersection of AI and policy. While for policy wonks like some of us, I also work at a think tank as a non-resident fellow, but other than for us policy wonks who are looking at that intersection of AI and policy, talk to us about why, for let's say a Malhar or an Arvind who are not necessarily spending that much time in D.C. and with folks in the White House and the policy crowd, why is understanding or maybe dealing with this intersection of AI and policy and geopolitics, why is it important?

Is it? And if it is, why?

Navrina Singh

Absolutely, and just by way of background, I'm an engineer by training, spent 20 years building AI products in research and development at companies like Qualcomm and Microsoft. So I do want to ground why I think, as a technologist, policy is becoming really critical, going back to something that Malhar said which is really interesting, right? When we think about the new AI wave, to actually go from zero to one right now is very, very easy. I think what becomes really interesting is: do you actually get that product to be extremely reliable? Is it robust? Can you explain those systems? There's a combination of, I would say, scientific measurements to build that trust that needs to happen.

But there's another thing that needs to happen, which is all about making sure these systems work within the regulatory domains that especially require a lot of risk assessment and management. And so what we are seeing is that the true moat for companies like Malhar's is not just the technological innovation, because, you know, you're able to do that much faster with a leaner team. But it is: how do you do that consistently within the boundaries of the constraints and guidelines? That's one of the guardrails that a regulatory ecosystem causes. So just as an example, we work with Fortune 500 companies in financial services, in health care, and they are really finding that they're depending a lot on third-party AI, like maybe tools like Glean.

But how does Glean work in the context if you are building, let’s say, a customer service chatbot? You want to make sure that the chatbot not only is aligning to your brand guidelines, but it is not toxic. It’s highly reliable. It’s doing the things it’s supposed to do. And if it is within the context of a regulatory sector, it is following, let’s say, HIPAA compliance, et cetera, right? So as you can imagine, now it’s not just about building technology, but it is about building trusted technology that can work in the context that we are talking about. And that’s the exciting intersection of, I would say, policy, governance, and tech that I see.

Anirudh Suri

If I can go a bit deeper on that. So has it changed? Because regulation, of course, regulatory risk, policy risk. all of these risks are always there. So Qualcomm, Microsoft, a Jio, a Tata, you take any large company anywhere in the world, of course they’ll have regulatory and policy risk people and of course that’s a big part of what they are tracking, etc. What has changed, if anything?

Navrina Singh

A lot has changed. And just, I think, another thing I want to ground us in is that it's not just about regulation. When you start thinking about AI risk, just because of the way these large language models are built, unless you ground it in real data, there are issues of hallucination. Are you actually getting the right outputs? And can these systems be reliable? What kind of evaluation benchmarks do you have? Have you actually done the testing across your entire AI supply chain, et cetera? So the thing is, now it is not a static tech. It's actually a very dynamic technology, and when it starts to operate in e-commerce or customer contexts, or it starts to operate in regulatory contexts, you have to prove that you can do it reliably.

So I would say that’s the biggest shift that we are seeing with AI and some of the applications.

Anirudh Suri

The other theme that I want to keep going on with this, Navrina, and Malhar, Arvind, please feel free to chime in, is I think at the core of our engagement with AI on the policy front is the fact that the technology is moving so fast, and governments, realizing the fact that AI as a technology has massive ramifications on people, on existing structures, are saying, hey, listen, let's rein it in before it goes out of our control. So there's also a question of control here. Governments want control, for both reasons. I think one is governments generally don't like to give up control to the private sector too much anyways, anywhere around the world.

But the other bigger piece here is, when the technology, as you were saying, Navrina, is moving so fast, ultimately if some massive harm happens to people, the governments and political leaders know that they'll have to be accountable. So it seems to me that it's the nature of AI as a technology, and its massive ramifications, that is making policy and geopolitics risk a big part of what entrepreneurs have to keep in mind. The question I have for you though is: has this become a function that every team has to have? So any startup has to have a CTO, has to have a CEO, has to have maybe a CFO and then product managers, etc. Is this becoming a role that is critical?

Malhar, do you have someone looking at this? Arvind, do you have people? Of course you’re a larger company. So let me start with Malhar. Do you have someone looking at this kind of risk?

Malhar Bhide

I think we do have the benefit of being a smaller company that's not entirely putting things out for public use, where they could be harmful, until we've gone through all of the regulatory requirements and until we have tested things out in a setting where it is safe. But I think from a research perspective, people who are working on AI biology models such as us work a lot on ensuring these models are safe, including making sure other people are not going to be able to design dangerous pathogens with them. So we very actively, in our entire team, keep up with that research. We study that research. Everyone on our team is technical.

We study how they actually enforce those guardrails. So for when we actually need to start making things and actually turn them into products, we are actually able to implement that.

Arvind Jain

Yeah, so first, like, you know, so we are enterprise focused. And in that sense, like, you know, users of our product, they come to Glean and they ask questions which are serious in nature. It defines, like, you know, what work they’re going to do, what decisions they’re going to make. So first of all, you have to be absolutely sure that even though the core foundation of the AI technology is, you know, it’s a stochastic, you know, modeling, it can make mistakes, it’s probabilistic. You have to sort of work on top of that and ensure that you can actually deliver precise and accurate results. And like, you know, refrain from answering questions or doing tasks if you’re not sure.

So it's a big part of our product experience: how do you actually use AI safely and securely? How do you do that constant sort of judgment and evaluation of the work that it produces? And do fact-checking, so that ultimately, you know, not only are you delivering the right answers or task execution, but you're actually showing the full trail of, like, you know, where the answers are coming from, what authoritative information is being used that is human-generated. So that is very core to, just, in terms of, you know, what product experiences we deliver. But I think you're also talking about the question of how important it is to think about policy, to think about, like, working with governments and actually ensuring that, you know, the right regulations and laws are in place. For us, like, you know, as an enterprise company, we don't actually think a whole lot about that. But it is important to sort of have, you know, these rules and regulations in place, because otherwise AI can actually do significant damage, you know, in the industry.

Anirudh Suri

Great. The other dynamic I now want to move on to is, you know, the tech industry, as I'm sure all of us have seen, is in many ways, tech and entrepreneurship is defined by this idea of creative destruction. Large companies start as startups. They become big. By the time they get big, there's a whole new wave of tech coming in, and then a whole new set of startups come in and disrupt the incumbents. We've seen that, again, if you go into history, time after time, wave after wave. So my question to all of you: is that principle of creative destruction going to continue with AI? Or are we going to see that the big companies of today, the big tech firms of today, with the amount of capital they have, the amount of talent that they can hire, with the kind of balance sheets that they have, the global scope of these companies, et cetera, their ability to shape policy, right?

Is this wave of big tech firms different? Or can we expect that the principle of creative destruction, of them getting disrupted sooner than later, is likely to continue? I'll start off maybe, Arvind, with you, and then I'll come to Navrina and Malhar.

Arvind Jain

Yeah, well, I mean, I think this creative destruction, or disruption rather, you know, it happens, and like always, you will see that, you know, over the last 20 years, companies like Google, you know, Microsoft, these are big giants and they have all the resources in the world and all the policy-making power, but yet when you think about innovation that happens in the tech industry, often it actually happens outside of those companies. And that's because I think the spirit of entrepreneurism is actually alive; it's an innate human thing, and most bold ideas actually come from a single person who's passionate about solving a problem. And so I don't think AI is going to actually change that. In fact, if there are any indications, AI is actually going to make it even easier for people to create really interesting products and to serve large players, because now there's more power in their hands. You don't even need to be an engineer, you don't need to be an AI scientist, to actually use these amazing technologies and build and sort of turn your ideas into real products with very few resources.

So what I expect to see happen is that more and more innovation is going to happen again in startup land. But ultimately, of course, I think the larger companies are well-established, they have large customer bases, so the model in the industry tends to be that innovation comes from startups, but then innovation scales at larger companies.

Anirudh Suri

Malhar, I'm going to come to you with a slightly different question in a second. But first, Navrina, did you want to add to that?

Navrina Singh

You should not be worried about another person, or even AI, taking your job. You should really be worried about a person who's so good with AI replacing you. So I have started to think about disruption in the context of individuals rather than big tech or startups. What are creators and entrepreneurs going to create when they can unlearn very fast? We don't have a playbook right now for how you should succeed in the age of AI. So can a new set of entrepreneurs use these tools, unlearn old habits very fast, and be open and willing to try new ways of building faster?

I think that's a healthier construct than thinking in the context of a company.

Anirudh Suri

Great. Thanks, Navrina. Malhar, I want to come to you now. We've spoken about how companies and startups are changing internally: they might look different, might be leaner, more research-focused in the age of AI. You've spoken about the importance of policy and regulation, especially in the world of AI. But I want to ask you now about the entrepreneurs themselves. Given we're sitting in India, I'm going to ask the question from the perspective of India. You're an Indian entrepreneur: you grew up in Bombay, you're working out of San Francisco, you're part of a Y Combinator batch, you dropped out of UIUC. Tell me from your perspective, Malhar, what does the Indian entrepreneur building a startup in the US today look like?

Is it any different from earlier generations of Indian entrepreneurs in the US? That's one. And two, do you find some difference you can point to between an Indian entrepreneur who grew up in India and then went to start up in the US, versus someone who grew up in the US?

Malhar Bhide

Yeah, I think something that has been quite fruitful for me in moving to America is the entire process of actually leaving where you've grown up, going somewhere new and setting things up entirely from there. That sets a tone and a precedent even in the work you do at your startup: the risks you take, the people you hire, the things you do. In some sense that has stayed the same over multiple years, because the process has not really gotten easier. It might be easier to get information, to book flights, to stay in contact with people, but the act itself is still incredibly hard.

I think that is one big difference. To get more specific, someone who has grown up in, let's say, America is more aware of systems in America: how you sell to people there, what talent distributions look like there. Whereas someone like me, who grew up in India, has the opposite advantage. If we believe that India has a large role to play in things like drug discovery in the coming decade, then I know a lot about how drug discovery works here, how hospitals work, how data is collected, how many patients are treated, and how diverse the patient body here is.

So I think there are those very specific advantages to it as well.

Anirudh Suri

Before I go to Arvind with the same question, can I see a quick show of hands? Does anyone have questions or quick comments? I want to make the last few minutes interactive. Any burning questions or comments in the audience? Okay. While Arvind's answering, just raise your hand so I have a sense of the room, and then we'll try to get to you. Arvind, this is your second startup, so I think you might have some perspective on this.

Arvind Jain

Yeah, well, first of all, in technology a lot of startups are started by Indians, whether in Silicon Valley or, of course, here. One thing I find interesting about the U.S. and Silicon Valley is the availability of capital, and the belief, specifically in the Indian diaspora, in their ability to go and build great companies. Today, look at the tech industry: even in the large enterprises, there are a lot of Indian folks who are CEOs. And what has made that happen, I think, is fundamentally that we are more hungry.

There's something about our culture, and where we are as a nation, that gives Indian people a drive, and that drive is what is creating these incredible success stories for all of us. So that's one thing I would say. Of course, I had the same thing: the desire to make something big, and that continues to drive me. But I also work with a lot of young folks; many people have joined our companies and then gone on their own entrepreneurship journeys, and I continue to see the same pattern: it is the folks who grew up here and then relocated to the US who are the most likely to start companies and become entrepreneurs.

Anirudh Suri

I want to quickly open it up to the audience. I know we probably don't have mics out there, so can I see a quick show of hands again? One, two and three. We have less than four minutes, so what I'm going to ask is that in 15 seconds you give us a question or comment. Three hands I see. We'll start here, then come to you, and then to you.

Audience

Yep, I have a question for Malhar. You have a multi-disciplinary startup right now, and you also said that you're not from a biology background. So the question is: how did you find this problem to solve?

Anirudh Suri

Great. We'll take these two as well; you can just shout out while the mic comes.

Audience

Yeah, so my question is: we had this technology, the internet, and it became both a boon and a bane, and what evolved alongside it was the field of cybersecurity. Now we have AI, and we have the same fear of how AI can be used for good or ill, plus the additional fear of hallucinations. So, analogous to cybersecurity, are we going to have something like AI security, or some new field that will come up? And how can you handle hallucination? For instance, can you give a relevancy score to the output?

Hi, my question is for Navrina, actually. I also work in an AI governance company called Protego. I attended sessions today with Amazon and Zoom, and these big leaders are saying that if we do governance at this stage, we will not see the ROI from AI, and it's going to stop innovation in some manner. What's your take on that? How do you advocate AI governance, especially with your hands-on experience with G42?

Anirudh Suri

Great. I'm sorry, I'm sure we could speak more, but I literally have a timer that's at two and a half minutes. Navrina, I'm going to give you less than a minute, and then we'll go across the room.

Navrina Singh

Yeah, it's funny if you were in the Amazon room and they made this comment, because they were our first customer, so I'm surprised to hear that. But having said that, it's actually very clear: we are seeing very clear ROI on AI governance. If you have clear visibility and a risk-management practice, you can adopt third-party AI much faster, and you obviously see productivity gains with that. Secondly, when you have governance, your AI deployment increases, so you can deploy more products faster to customers, and products that customers can trust more, and as a result you are adding more to the top line.

So happy to share more details from our customers.

Anirudh Suri

Arvind, I think you can take the cyber question, and then, Malhar, we'll come to you for the first one. Are we going to see a new field of AI and cybersecurity?

Arvind Jain

Oh, that's right, yeah. So absolutely, I think AI is a very new technology, and it's actually very gameable. There's a new form of attacks, like prompt injection, coming into place. It's a rapidly evolving new field with a lot of entrepreneurship opportunities. It's about how you control what data, what information, actually goes to AI models, so that they work on good, safe data; but then also, for whatever output comes back from AI, the responses, how do you make sure those are not attack vectors? And similarly, you mentioned the related point of hallucinations.

Hallucination is actually a core feature of the current AI technology, unfortunately; this is how it's built. So again, companies that can detect hallucinations, monitor them and provide observability on them are also a good area and field of discipline.

Anirudh Suri

Spoken like a good entrepreneur: anytime there's a problem, there's an opportunity. Malhar, you have something? Sixteen seconds, fifteen...

Malhar Bhide

I think for my co-founder and me, it started off as a deep intellectual interest more than anything; being college students, that was really all we had to go on. We were always interested in DNA, how your body regulates different cells, how it maintains healthy functioning, and what can really be learned by mining the genome and understanding things from that. So it started with that, and after that we treated it very empirically: talking to customers, to scientists, to doctors who know a lot more in this field. That was the start, and that's how it continued.

Anirudh Suri

Great. I think we are out of time, but I do hope all of you have taken something away from this session. I want to end with this remark: it's very important that the people sitting on the stage, whether it's us or other panels, listen to all that you have to say, ask and show, because the summit must be a two-way conversation. That's especially important given how many students, entrepreneurs and would-be entrepreneurs have come here. So please do take the time to find the panelists afterwards if you want.

And now let me end with best wishes to all of you on your entrepreneurship journeys. We hope to see all of you back here again soon, and thank you all for staying.

Related Resources: Knowledge base sources related to the discussion topics (22)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high confidence)

“Anirudh Suri opened the closing session of the India AI Impact Summit and acted as moderator of the panel.”

The knowledge base lists Anirudh Suri as the moderator for a panel at the India AI Impact Summit, confirming his role in the session [S6].

Confirmed (medium confidence)

“Suri asked whether AI enables leaner start‑ups, citing examples of serial entrepreneurs needing far fewer engineers and less capital to reach an MVP.”

The transcript excerpt in the knowledge base includes the exact question about leaner start-ups in the AI era, confirming that Suri raised this point [S4].

Additional Context (medium confidence)

“AI start‑ups are achieving significant revenue with very small teams, sometimes as few as 20 employees, challenging traditional VC funding models.”

A knowledge-base entry describes a trend in Silicon Valley where AI companies with lean teams (as few as 20 people) generate tens of millions in revenue, providing additional context to the claim [S64].

Additional Context (medium confidence)

“AI allows founding teams to be dramatically smaller—for example, a five‑person team can accomplish work that previously required fifty people.”

The source notes that intelligence abundance lets a founding team of five perform tasks that used to need fifty, supporting the claim about dramatically smaller teams enabled by AI [S101].

Additional Context (low confidence)

“Arvind compared the current AI‑driven wave to the earlier consumer‑internet era, stating that each technological wave creates fresh entrepreneurial opportunities while the core success ingredients remain the same.”

Discussion in the knowledge base highlights that AI is seen as reshaping jobs and industries in a manner similar to past technological waves, adding nuance to Arvind’s comparison [S95].

Additional Context (low confidence)

“Malhar’s five‑person team can conduct wet‑lab research, train models from scratch and use AI to predict experimental outcomes, keeping costs low and accelerating development.”

Research on OpenAI’s GPT-5 demonstrates AI being applied to wet-lab biology, illustrating that AI-assisted wet-lab work is feasible and providing supporting context for Malhar’s claim [S15].

External Sources (103)
S1
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S2
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S3
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S4
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-next-generation-of-techies-_-india-ai-impact-summit — Arvind, last but not least. Big chunk. Wannabe entrepreneurs? Ex -entrepreneurs? People who decided to… too much and …
S5
Closing Ceremony and Chair’s WSIS+20 Forum High-Level Event Summary — Audience:Thank you so much. My name is Anand. I’m from Nepal. And this is the first time I’m attending WSIS process. Tha…
S7
Panel Discussion Next Generation of Techies _ India AI Impact Summit — – Arvind Jain- Malhar Bhide- Navrina Singh – Malhar Bhide- Arvind Jain
S9
Sticking with Start-ups / DAVOS 2025 — – Arvind Jain: Co-Founder and CEO of Glean Arvind Jain: Well, India has fundamentally changed in the last two decades….
S10
Driving U.S. Innovation in Artificial Intelligence — 11. Samir Jain – Vice President of Policy, Center for Democracy and Technology 12. Sean Domnick – President, American As…
S11
Panel Discussion Next Generation of Techies _ India AI Impact Summit — – Arvind Jain- Malhar Bhide – Arvind Jain- Navrina Singh
S12
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — VLSI and as it turns out there are a host of issues that need if you ask me serious discussion and brainstorming. Primar…
S13
AI/Gen AI for the Global Goals — Speakers unanimously agreed on AI’s significant potential to drive progress towards the UN Sustainable Development Goals…
S14
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — In addition, the speakers propose the concept of a circular economy of intelligence. This entails leveraging expertly cu…
S15
OpenAI’s GPT-5 shows a breakthrough in wet lab biology — New researchhas been publishedby OpenAI, examining whether advanced AI models can accelerate biological research within …
S16
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — AI alone will not create economic opportunities, but the delivery of AI in our field through manufacturing and products …
S17
Advancing Scientific AI with Safety Ethics and Responsibility — All of these things don’t respect national borders, right? So, how it’s going to spread. If people using VPN or other th…
S18
Importance of Professional standards for AI development and testing — – Proper oversight and validation processes are essential
S19
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S20
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Patel highlights a lack of trust in AI systems due to safety and security concerns. He stresses the need for runtime gua…
S21
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — Both countries represents minority ethnic minorities cultural ethnic minorities so but we have to be the guardians of th…
S22
Artificial intelligence (AI) – UN Security Council — During another session, one speaker highlighted that”Technical explainability is crucial for ensuring transparency and a…
S23
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — But that takes courage, that is going against the grain and it takes vision. First, trust. It’s trust. Trustability. Tr…
S24
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Why don’t you specialize in a domain? But those are things like even fundamental things. I would say. The big leap is go…
S25
Responsible AI in India Leadership Ethics & Global Impact — Yeah, I think definitely the regulations are required, especially because AI can go berserk. And, you know, there are, I…
S26
Driving Indias AI Future Growth Innovation and Impact — The innovate side really comes down to. Areas like skilling, which I know when Minister Chaudhry joins us, we will get i…
S27
Responsible AI in India Leadership Ethics & Global Impact part1_2 — So what we do has a direct impact on the safety of the customers that we carry. And the notions are well embedded in the…
S28
AI will not replace people – but people who use AI will replace people who do not | IBM’s Report — According toIBM’s report, executives estimate that around 40% of their workforce will need to reskill due to implementin…
S29
AI could replace 2.4 million jobs in US by 2030| Forrester’s report — According to a recent report from Forrester, an influential analyst firm, it is projected that Generative AI will replac…
S30
[Briefing #51] Internet governance forecast for 2019 — A clearer understanding of AI can help policymakers and businesses take action more quickly on the future of work strate…
S31
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — This represents an unexpected shift in framing AI’s impact on entrepreneurship – rather than viewing AI purely as an ena…
S32
Embracing the future of e-commerce and AI now (WEF) — The analysis highlights the transformative impact of emerging technologies on global trade. Specifically, blockchain, ar…
S33
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — “We also, along with my colleague Vinod, are large investors in Sarvam, which is providing sovereign AI capabilities to …
S34
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S35
AI Development Beyond Scaling: Panel Discussion Report — Choi advocates for AI democratization where AI reflects human knowledge and values, serves all humans rather than just t…
S36
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S37
DC-IoT Progressing Global Good Practice for the Internet of Things | IGF 2023 — Maarten Botterman:Yes, thank you for that, Wout. What we see is the rapid developments make it more and more difficult a…
S38
WS #362 Incorporating Human Rights in AI Risk Management — This comment shifted the discussion from regulatory compliance to values-driven governance, influencing later speakers t…
S39
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S40
How AI Drives Innovation and Economic Growth — And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to ad…
S41
Secure Finance Risk-Based AI Policy for the Banking Sector — “It talks about how generative AI has lowered the barriers for a lot of these threat actors”[135]. “But one of the more …
S42
Hard power of AI — The combined evolution of AI and quantum computing also raises concerns for global security. These emerging technologies…
S43
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Arun Shetty made a crucial distinction between safety and security concerns in AI systems. Safety issues involve models …
S44
Challenging the status quo of AI security — This seemingly simple example is profoundly thought-provoking because it illustrates how AI agents can be manipulated to…
S45
Biology as Consumer Technology — In the biotech space, large companies should allow research and development (R&D) to play out and let failures fail whil…
S46
How AI Drives Innovation and Economic Growth — Akcigit distinguishes between two layers of AI development in advanced economies. The application layer has low entry ba…
S47
Panel Discussion Next Generation of Techies _ India AI Impact Summit — Is this wave of big tech firms different? Or can we expect that the principle of creative destruction of them getting di…
S48
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — An audience member emphasizes the importance of research and continuous stakeholder engagement in policy formulation. Th…
S49
9821st meeting — Finally, AI’s transformative potential is matched by its complexity, demanding careful and evidence-based governance. Po…
S50
Importance of Professional standards for AI development and testing — – Proper oversight and validation processes are essential
S51
Policy Network on Artificial Intelligence | IGF 2023 — In conclusion, the analysis of the speakers’ points highlights a range of issues in the intersection of technology, gove…
S52
Global AI Policy Framework: International Cooperation and Historical Perspectives — Jovan Kurbalija reinforced this historical framing with concrete examples, highlighting what he called the “original sin…
S53
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Some variation emerged in discussions of implementation priorities. Alrayes emphasised top-down economic philosophy and …
S54
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — Both speakers, despite their different professional backgrounds (historian/philosopher vs. neuroscientist/educator), une…
S55
Silicon Valley moves to influence AI policy — Silicon Valley insidersare preparing to pour over $100 millioninto next year’s US midterm elections to influence AI poli…
S56
Secure Finance Risk-Based AI Policy for the Banking Sector — “Governance in the AI era must however be embedded into systems design”[1]. “Embedded governance means integrating accou…
S57
Laying the foundations for AI governance — Artemis Seaford: So the greatest obstacle, in my opinion, to translating AI governance principles into practice may actu…
S58
Agentic AI in Focus Opportunities Risks and Governance — This comment reframed the entire policy discussion by highlighting that we’re entering uncharted territory in governance…
S59
Setting the Rules_ Global AI Standards for Growth and Governance — And I think… similar with some of the controls that might need to be kind of used to manage some of the risks if there…
S60
How AI Drives Innovation and Economic Growth — And when I say incumbents, those firms that have more than 1 ,000 employees. In around 2000, 50 % of employees used to w…
S61
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-drives-innovation-and-economic-growth — Thank you very much. And so, of course, creative destruction is an important driver of economic growth in the long run. …
S62
The Intelligent Coworker: AI’s Evolution in the Workplace — Measuring Success and Productivity The audience member points out a practical challenge where AI provides fragmented ti…
S63
Building a Digital Society, from Vision to Implementation — – Chukwuemeka Cameron Hines cites research from Gary Marcus presented at Web Summit showing that despite companies bein…
S64
AI startups in Silicon Valley rethink VC funding with leaner teams and strategic growth — In Silicon Valley, a notable trend isemergingas AI startups achieve significant revenue with leaner teams, challenging t…
S65
Driving Indias AI Future Growth Innovation and Impact — So we all need to really invest more and then deploy more. But the most important thing that you should see is that ther…
S66
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S67
AI/Gen AI for the Global Goals — Priscilla Boa-Gue argues for the creation of supportive policy environments to foster AI startups. This includes develop…
S68
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — ROI doesn’t come from creating a very large model. 95% of the work can happen with models which are 20 billion or 50 bil…
S69
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Mr. Chief of State Mr. Chief of Government For Brazil it is a satisfaction to participate in the artificial intelligence…
S70
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — This represents an unexpected shift in framing AI’s impact on entrepreneurship – rather than viewing AI purely as an ena…
S71
Strategic Action Plan for Artificial Intelligence — The more than 300 AI-driven start-ups and scale-ups in the Netherlands (over 9000 FTEs) operate mainly in the field of b…
S72
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — “We also, along with my colleague Vinod, are large investors in Sarvam, which is providing sovereign AI capabilities to …
S73
Embracing the future of e-commerce and AI now (WEF) — The analysis highlights the transformative impact of emerging technologies on global trade. Specifically, blockchain, ar…
S74
AI/Gen AI for the Global Goals — Speakers unanimously agreed on AI’s significant potential to drive progress towards the UN Sustainable Development Goals…
S75
Panel Discussion Next Generation of Techies _ India AI Impact Summit — “knowledge has gotten a lot more democratized”[30]. “actually be able to work in different fields in this sort of cross …
S76
Day 0 Event #172 Major challenges and gaps in intelligent society governance — Min Jianing: This is from Beijing. Today, I’m going to talk about 10 Epidemiological Questions on Generating Artific…
S77
Artificial intelligence: a catalyst for scientific discovery and advancement — While concerns about AI’s dangers abound, experts believe that it can greatly accelerate scientific progress and lead to…
S78
AI Development Beyond Scaling: Panel Discussion Report — Choi advocates for AI democratization where AI reflects human knowledge and values, serves all humans rather than just t…
S79
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S80
WS #362 Incorporating Human Rights in AI Risk Management — This comment shifted the discussion from regulatory compliance to values-driven governance, influencing later speakers t…
S81
Setting the Rules_ Global AI Standards for Growth and Governance — And maybe before the next introduction, just so you can get a flavor, we have standard setters and measurers. We have pe…
S82
360° on AI Regulations — In conclusion, the analysis reveals that AI regulation is guided by existing laws, and there is a complementary nature b…
S83
How AI Drives Innovation and Economic Growth — And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to ad…
S84
How AI Drives Innovation and Economic Growth — Akcigit distinguishes between two layers of AI development in advanced economies. The application layer has low entry ba…
S85
Who Benefits from Augmentation? / DAVOS 2025 — Ravi Kumar S. argues that AI can democratize knowledge by providing expertise at one’s fingertips. This can lower entry …
S86
Brainstorming with AI opens new doors for innovation — AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Compa…
S87
Inclusive AI Starts with People Not Just Algorithms — I guess for EQ, education disruption, EQ, to work on EQ is arts, and that’s a disruption. I’m glad AI came, and hence In…
S88
Hard power of AI — The combined evolution of AI and quantum computing also raises concerns for global security. These emerging technologies…
S89
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Arvind Jain
1 argument · 172 words per minute · 1,847 words · 640 seconds
Argument 1
Indian diaspora’s cultural drive and access to capital fuel high entrepreneurial success in Silicon Valley
EXPLANATION
Arvind observes that Indian entrepreneurs benefit from a cultural hunger for success and strong access to capital, which together drive a disproportionate number of Indian CEOs and founders in Silicon Valley. This cultural and financial ecosystem fuels their achievements.
EVIDENCE
He cites the prevalence of Indian CEOs in large tech firms, the availability of capital, and a cultural drive that motivates Indians to build great companies, noting that those who grew up in India and moved to the US are most likely to start ventures [215-222].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussion notes that cultural and socioeconomic factors drive Indian entrepreneurial success and that Indians dominate Silicon Valley manpower, supporting the claim [S6][S12][S9].
MAJOR DISCUSSION POINT
Indian entrepreneurs operating in the US versus India
Malhar Bhide
5 arguments · 187 words per minute · 985 words · 314 seconds
Argument 1
AI democratizes knowledge, letting non‑experts launch cross‑disciplinary startups
EXPLANATION
Malhar argues that AI has made knowledge widely accessible, enabling founders without formal expertise in a domain—such as biology—to start interdisciplinary ventures. This democratization reduces the barrier to entry for cross‑disciplinary entrepreneurship.
EVIDENCE
He notes that AI has democratized knowledge, allowing his team, none of whom studied biology, to read papers, contact scientists, and understand customers, while using AI throughout their workflow [80-87].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel highlights AI’s democratization of research capabilities, noting Bhide’s team training models from scratch and conducting wet-lab work without traditional credentials [S6][S15].
MAJOR DISCUSSION POINT
Evolution of AI entrepreneurship
AGREED WITH
Anirudh Suri, Arvind Jain
Argument 2
Small teams can conduct advanced research (e.g., DNA design) using AI, reducing wet‑lab costs
EXPLANATION
Malhar describes how his five‑person team leverages AI to design novel DNA sequences and predict wet‑lab experiment outcomes, dramatically cutting costs and enabling sophisticated research without large labs. AI thus empowers small startups to perform high‑level scientific work.
EVIDENCE
He explains that they train models from scratch, use AI to predict wet-lab results for cost efficiency, and operate with a limited budget while conducting fundamental research [90-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bhide’s approach of using AI to predict experimental outcomes and cut costs is documented in the discussion, illustrating how small teams can perform advanced DNA design [S6][S15].
MAJOR DISCUSSION POINT
Leaner startups enabled by AI
Argument 3
Deep research is essential; AI models must produce biologically viable outputs and meet strict validation
EXPLANATION
Malhar emphasizes that their AI‑driven approach to designing DNA sequences requires rigorous scientific validation, including compliance with FDA regulations and clinical trial standards, to ensure biological viability. Research quality is therefore central to product success.
EVIDENCE
He details using AI to design DNA switches, training models from public and proprietary data, and the necessity of meeting FDA clearance and clinical trial requirements for biotech applications [97-103].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for rigorous validation, FDA compliance, and biosafety integration is emphasized in AI-biology safety literature and professional standards [S15][S17][S18].
MAJOR DISCUSSION POINT
Centrality of research and AI‑first approach
AGREED WITH
Navrina Singh
Argument 4
Startup teams actively monitor AI safety and enforce guardrails to prevent misuse
EXPLANATION
Malhar states that his team continuously studies AI safety research, especially concerning the creation of dangerous pathogens, and implements guardrails throughout product development. This proactive monitoring ensures responsible AI use.
EVIDENCE
He mentions that the team keeps up with research on AI-biology model safety, studies guardrails, and integrates these safeguards when turning research into products [151-156].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The conversation references ongoing AI safety research, guardrails, and runtime safeguards to prevent misuse of AI-biology models [S17][S19][S20].
MAJOR DISCUSSION POINT
Intersection of AI with policy, governance, and risk
AGREED WITH
Navrina Singh, Anirudh Suri
DISAGREED WITH
Anirudh Suri, Arvind Jain
Argument 5
Relocating to the US fosters risk‑taking mindset and market insight, while Indian background offers domain expertise in local ecosystems
EXPLANATION
Malhar reflects that moving to the United States forced him to take risks and gave him insight into American market dynamics, whereas his Indian upbringing provides deep understanding of India’s drug‑discovery ecosystem, hospitals, and patient data. Both perspectives bring distinct advantages to his startup.
EVIDENCE
He describes how moving to America set a risk-taking tone and gave market knowledge, while his Indian roots give him expertise in Indian drug discovery, hospital operations, and patient data diversity [200-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bhide’s reflections on relocation and the complementary insights from Indian and US ecosystems are captured in the panel, and cross-border collaboration is discussed as a source of domain expertise [S6][S21][S24].
MAJOR DISCUSSION POINT
Indian entrepreneurs operating in the US versus India
Navrina Singh
5 arguments · 179 words per minute · 880 words · 294 seconds
Argument 1
Success now hinges on reliability, explainability, and regulatory compliance, not just tech
EXPLANATION
Navrina argues that beyond building AI technology, startups must ensure their systems are reliable, explainable, and operate within regulatory frameworks to gain trust. These factors constitute the true competitive moat in the AI market.
EVIDENCE
She highlights the need for robust, explainable systems, scientific measurement for trust, and compliance with sectoral regulations such as HIPAA, emphasizing that trusted technology is essential [108-123].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN Security Council remarks and AI standards stress explainability, reliability, and regulatory compliance as essential for trust, aligning with the argument [S22][S18][S20][S23].
MAJOR DISCUSSION POINT
Evolution of AI entrepreneurship
AGREED WITH
Malhar Bhide
DISAGREED WITH
Arvind Jain
Argument 2
Scientific measurement, robustness, and trustworthiness are required for AI products to be adopted
EXPLANATION
Navrina stresses that AI products must undergo rigorous scientific validation and demonstrate robustness before they can be widely adopted, especially in regulated sectors. Trust is built through measurable performance and adherence to standards.
EVIDENCE
She references the need for scientific measurements to build trust, ensuring systems are reliable, robust, and meet regulatory risk-assessment requirements [108-113].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Professional standards and scientific measurement are highlighted as prerequisites for adoption, reinforcing the need for robustness and trustworthiness [S18][S22][S23].
MAJOR DISCUSSION POINT
Centrality of research and AI‑first approach
Argument 3
AI governance and trust‑management are critical moats; compliance with sectoral regulations (HIPAA, etc.) is mandatory
EXPLANATION
Navrina explains that AI governance platforms provide trust‑management capabilities that act as a moat, ensuring products meet sector‑specific regulatory requirements such as HIPAA. Governance thus becomes essential for market adoption.
EVIDENCE
She describes how AI governance and trust-management help companies comply with regulations, citing examples in financial services and healthcare where third-party AI must meet brand, toxicity, reliability, and HIPAA standards [114-123].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI governance platforms, sectoral regulations like HIPAA, and documented ROI of compliance are discussed in governance literature, supporting the moat claim [S20][S25][S30][S19].
MAJOR DISCUSSION POINT
Intersection of AI with policy, governance, and risk
AGREED WITH
Malhar Bhide, Anirudh Suri
Argument 4
Individuals who master AI will replace jobs; rapid unlearning and new playbooks are the real disruptive forces
EXPLANATION
Navrina contends that the real threat is not AI itself but people who become highly proficient with AI tools, enabling them to outpace others. She calls for entrepreneurs to unlearn old habits quickly and adopt new AI‑centric playbooks.
EVIDENCE
She states that one should worry about a person skilled with AI replacing you, and emphasizes the need for rapid unlearning and new approaches in the AI age [185-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on workforce reskilling and job displacement by AI illustrate that individuals proficient with AI become the disruptive force [S28][S29][S30].
MAJOR DISCUSSION POINT
Creative destruction: big tech vs. startup disruption in the AI era
AGREED WITH
Arvind Jain
DISAGREED WITH
Arvind Jain
Argument 5
Governance delivers clear ROI by accelerating safe AI adoption, boosting productivity and top‑line growth
EXPLANATION
Navrina argues that implementing AI governance yields tangible returns: it speeds up safe AI deployment, improves productivity, and enables revenue growth by building trusted products for customers. Governance is therefore not a barrier but a value driver.
EVIDENCE
She notes that clear risk-management practices enable faster adoption of third-party AI, increase productivity, and allow more trusted products to be deployed, ultimately adding to top-line growth [240-246].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Governance’s ROI benefits are reiterated in AI governance discussions, matching the audience’s point [S30][S20][S25].
MAJOR DISCUSSION POINT
Role of AI governance ROI concerns
DISAGREED WITH
Audience member
Audience
2 arguments · 173 words per minute · 253 words · 87 seconds
Argument 1
New attack vectors (e.g., prompt injection) create a nascent AI‑cybersecurity field with entrepreneurial opportunities
EXPLANATION
An audience member points out that emerging AI attack techniques such as prompt injection represent a new cybersecurity frontier, opening opportunities for startups to develop protective solutions. This mirrors how previous technological shifts spawned dedicated security sectors.
EVIDENCE
The audience asks whether AI security will become a new field, mentioning prompt injection attacks and the need for AI-security solutions, and wonders about handling hallucinations with relevance scores [231-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emerging AI attack techniques such as prompt injection and the need for guardrails are identified as new cybersecurity opportunities [S19][S20].
MAJOR DISCUSSION POINT
Emerging AI security challenges and hallucinations
Argument 2
Concern that AI governance may hinder ROI at early stages, answered by its role in accelerating safe AI adoption
EXPLANATION
An audience participant raises concerns that AI governance might hinder ROI, prompting a response that governance actually provides measurable returns through faster, safer AI deployment and increased productivity. The discussion underscores governance as a strategic investment.
EVIDENCE
The audience mentions ROI concerns about AI governance, and Navrina responds that governance improves visibility, risk management, speeds adoption, and drives top-line growth [236-246].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Governance’s ROI benefits are reiterated in AI governance discussions, matching the audience’s point [S30][S20][S25].
MAJOR DISCUSSION POINT
Role of AI governance ROI concerns
DISAGREED WITH
Audience member, Navrina Singh
Anirudh Suri
5 arguments · 166 words per minute · 2,279 words · 820 seconds
Argument 1
Entrepreneurs should study previous technology waves to learn which business models and strategies succeed or fail
EXPLANATION
Suri argues that understanding the history of past innovation cycles gives founders valuable insights into patterns of success, helping them avoid repeating past mistakes and better position their ventures.
EVIDENCE
He explicitly encourages the audience to study earlier waves of technological innovation, stating that doing so reveals a great deal about how those waves panned out and which companies succeeded, and he frames this advice for those unfamiliar with the history of technological waves [35-36].
MAJOR DISCUSSION POINT
Learning from historical technology waves
Argument 2
AI enables startups to operate with much leaner teams and lower capital requirements
EXPLANATION
Suri points out that the AI era allows founders to build minimum viable products with far fewer engineers and less funding, making the early‑stage startup model more resource‑efficient.
EVIDENCE
He cites examples of second- and third-time entrepreneurs who, when launching AI-focused ventures, needed far fewer team members to develop an MVP, required less coding effort, and faced significantly lower capital needs compared to the consumer-internet era [57-61].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel notes AI’s role in enabling small teams to conduct advanced research with limited capital, exemplified by Bhide’s work and GPT-5 wet-lab advances [S6][S15].
MAJOR DISCUSSION POINT
Leaner startups enabled by AI
AGREED WITH
Arvind Jain, Malhar Bhide
Argument 3
Understanding the intersection of AI, policy and geopolitics is essential for entrepreneurs because rapid AI development triggers government regulation and accountability
EXPLANATION
Suri stresses that AI’s fast‑moving impact on society leads governments to seek control and accountability, making it crucial for founders to engage with policy, governance and geopolitical considerations.
EVIDENCE
He references the need to bring geopolitics into the conversation early on [3] and later outlines how governments want to control AI, are wary of ceding power to the private sector, and will be held accountable for any large-scale harms, asking why entrepreneurs should care about this intersection [138-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources stress the importance of understanding AI policy, geopolitics, and regulatory landscapes for entrepreneurs [S19][S20][S25][S30].
MAJOR DISCUSSION POINT
AI‑policy and geopolitical implications for entrepreneurship
Argument 4
Every AI‑driven startup should create a dedicated role or function to manage AI risk, policy and governance
EXPLANATION
Suri suggests that as AI becomes more regulated and risky, having a specific person or team responsible for AI‑related risk management will become a critical organizational component.
EVIDENCE
He asks the panel whether startups now need a dedicated risk or policy role, framing it as a potentially critical function for all companies, and specifically queries Malhar and Arvind about having such a role in their organizations [144-150].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation aligns with literature on AI safety, guardrails, and organizational structures that advocate dedicated risk and governance functions [S19][S20][S30].
MAJOR DISCUSSION POINT
Emergence of AI risk and governance roles in startups
AGREED WITH
Navrina Singh, Malhar Bhide
DISAGREED WITH
Malhar Bhide, Arvind Jain
Argument 5
Summits and panels should be two‑way conversations where speakers actively listen to audience input
EXPLANATION
Suri emphasizes that effective knowledge sharing requires dialogue, not just one‑way presentations, and encourages participants to engage directly with panelists.
EVIDENCE
In his closing remarks he states that the summit must be a two-way conversation, highlighting the importance of listening to the audience and inviting further interaction after the session [262-266].
MAJOR DISCUSSION POINT
Importance of interactive, two‑way dialogue in knowledge‑sharing events
Agreements
Agreement Points
AI enables startups to operate with much leaner teams and lower capital requirements
Speakers: Anirudh Suri, Arvind Jain, Malhar Bhide
AI enables startups to operate with much leaner teams and lower capital requirements
Leaner startups enabled by AI
AI democratizes knowledge, letting non‑experts launch cross‑disciplinary startups
All three speakers note that the AI wave lets founders build MVPs with very small teams and reduced funding. Anirudh cites examples of second- and third-time founders needing far fewer engineers and less capital [57-61]. Arvind says a single-person team can build a product and that AI creates efficiencies that keep teams lean [62-65][66-68]. Malhar points out that AI has democratized knowledge, allowing his five-person, non-biology team to conduct advanced research and keep costs low [80-87][90-92].
POLICY CONTEXT (KNOWLEDGE BASE)
The application layer of AI lowers entry barriers, allowing small firms to compete with incumbents and drive creative destruction, as highlighted in economic analyses of AI-driven growth [S46]. Recent observations of Silicon Valley AI startups achieving significant revenue with teams of 20-30 people further illustrate this trend [S64].
Deep research and rigorous validation are essential for AI‑driven products
Speakers: Malhar Bhide, Navrina Singh
Deep research is essential; AI models must produce biologically viable outputs and meet strict validation
Success now hinges on reliability, explainability, and regulatory compliance, not just tech
Both speakers stress that beyond the technology, AI products must be scientifically robust and meet regulatory standards. Malhar describes how their DNA-design models must pass FDA-type validation and rigorous scientific testing [97-103]. Navrina emphasizes the need for reliability, explainability, and sector-specific compliance (e.g., HIPAA) as the true moat for AI companies [108-113][114-123].
POLICY CONTEXT (KNOWLEDGE BASE)
Professional standards call for thorough oversight and validation throughout AI development cycles, emphasizing the need for evidence-based testing before deployment [S50]. This aligns with broader calls for rigorous, research-backed governance frameworks for complex AI systems [S49].
AI governance, policy, and risk management are critical moats and should be institutionalised
Speakers: Navrina Singh, Malhar Bhide, Anirudh Suri
AI governance and trust‑management are critical moats; compliance with sectoral regulations (HIPAA, etc.) is mandatory
Startup teams actively monitor AI safety and enforce guardrails to prevent misuse
Every AI‑driven startup should create a dedicated role or function to manage AI risk, policy and governance
All three agree that formal governance structures are essential. Navrina argues that AI governance delivers ROI and builds trusted products, citing examples in finance and health [108-123][240-246]. Malhar notes his team continuously studies AI safety research and implements guardrails [151-156]. Anirudh explicitly asks whether startups need a dedicated risk/policy function, highlighting its growing importance [144-150][138-144].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy experts stress that continuous stakeholder engagement and research-driven policy formulation are foundational for robust AI governance [S48]. Embedding accountability, transparency, and risk controls into AI lifecycles is advocated in sector-specific guidelines such as the banking AI policy framework [S56]. Challenges in translating governance principles into practice are also documented, underscoring the need for dedicated institutional structures [S57].
Creative destruction will continue; AI empowers individuals and startups to disrupt incumbents
Speakers: Arvind Jain, Navrina Singh
Creative destruction will continue with AI
Individuals who master AI will replace jobs; rapid unlearning and new playbooks are the real disruptive forces
Both see AI as a catalyst for ongoing disruption. Arvind states that innovation will keep emerging from startups and that AI makes it easier for anyone to build products [180-182]. Navrina adds that the real threat is a person skilled with AI, urging rapid unlearning and new approaches [185-190].
POLICY CONTEXT (KNOWLEDGE BASE)
Analysts describe AI’s application layer as a catalyst for creative destruction, enabling startups to challenge large incumbents and reshaping market dynamics [S46]. Historical perspectives on creative destruction reaffirm its role as a long-term driver of economic growth, now accelerated by AI advances [S61].
Similar Viewpoints
Both highlight that AI allows very small, cross‑disciplinary teams to launch sophisticated products, reducing the need for large engineering hires and heavy capital outlays [62-65][80-87][90-92].
Speakers: Arvind Jain, Malhar Bhide
Leaner startups enabled by AI
AI democratizes knowledge, letting non‑experts launch cross‑disciplinary startups
Both argue that scientific rigor, reliability, and regulatory compliance are the decisive factors for AI product success, whether in biotech or broader sectors [108-113][114-123][97-103].
Speakers: Navrina Singh, Malhar Bhide
Success now hinges on reliability, explainability, and regulatory compliance, not just tech
Deep research is essential; AI models must produce biologically viable outputs and meet strict validation
Both see AI as empowering individuals and startups to drive disruption, keeping the cycle of creative destruction alive [185-190][180-182].
Speakers: Navrina Singh, Arvind Jain
Individuals who master AI will replace jobs; rapid unlearning and new playbooks are the real disruptive forces
Creative destruction will continue with AI
Both reference historical technology waves to contextualise the current AI wave, noting that while patterns repeat, AI uniquely enables leaner ventures [35-36][39-44].
Speakers: Anirudh Suri, Arvind Jain
Entrepreneurs should study previous technology waves to learn which business models and strategies succeed or fail
Leaner startups enabled by AI
Unexpected Consensus
Regulatory compliance and safety are central concerns across vastly different domains (biotech vs. enterprise AI)
Speakers: Malhar Bhide, Navrina Singh
Deep research is essential; AI models must produce biologically viable outputs and meet strict validation
Success now hinges on reliability, explainability, and regulatory compliance, not just tech
Despite operating in distinct sectors-biotech drug discovery and enterprise AI governance-both speakers converge on the necessity of rigorous validation, safety, and regulatory adherence as the primary moat for AI products [97-103][108-113][114-123].
POLICY CONTEXT (KNOWLEDGE BASE)
Comparative studies of biotech and AI highlight that both sectors face stringent safety and compliance requirements, with startups expected to navigate regulatory landscapes while fostering innovation [S45]. Sector-specific governance models, such as risk-based AI policies for finance, illustrate the cross-domain emphasis on embedded compliance [S56]. Policy frameworks that support AI startups while ensuring safety are advocated in global AI policy discussions [S67].
AI‑driven lean teams are feasible even in highly specialized fields like synthetic biology
Speakers: Arvind Jain, Malhar Bhide
Leaner startups enabled by AI
AI democratizes knowledge, letting non‑experts launch cross‑disciplinary startups
Arvind, speaking about enterprise software, and Malhar, building biotech solutions, both assert that AI reduces the need for large specialized teams, a surprising alignment given the technical depth of synthetic biology [62-65][80-87][90-92].
POLICY CONTEXT (KNOWLEDGE BASE)
The biotech literature notes that lean, agile startups can succeed in high-tech domains like synthetic biology, leveraging AI to reduce capital intensity and team size [S45]. Real-world examples of AI startups operating with minimal staff while generating multi-million dollar revenues reinforce this feasibility [S64].
Overall Assessment

The panel shows strong convergence on four core themes: (1) AI dramatically lowers the resource threshold for launching startups; (2) rigorous research, validation, and regulatory compliance are non‑negotiable for AI products; (3) AI governance and dedicated risk functions are viewed as essential competitive moats; (4) the historic pattern of creative destruction persists, with AI empowering individuals to disrupt incumbents.

Consensus across speakers is high on the strategic implications of AI for entrepreneurship, indicating that future policy and investment frameworks should prioritize support for lean AI ventures, embed governance structures, and foster research excellence to sustain innovation.

Differences
Different Viewpoints
Whether every AI‑driven startup should create a dedicated AI risk, policy and governance function
Speakers: Anirudh Suri, Malhar Bhide, Arvind Jain
Every AI‑driven startup should create a dedicated role or function to manage AI risk, policy and governance
Startup teams actively monitor AI safety and enforce guardrails to prevent misuse
We don’t actually think a whole lot about that (policy), but it is important to have regulations in place
Suri asks if a specific risk-governance role is becoming critical for all startups [144-150]. Malhar replies that their small team collectively studies AI safety research and implements guardrails without a separate role [151-156]. Arvind adds that his company does not focus heavily on policy, though he acknowledges its importance [169-170].
POLICY CONTEXT (KNOWLEDGE BASE)
Practitioners argue that institutionalising a separate governance unit may be costly for early-stage firms, while others point to the necessity of embedded risk controls as a competitive moat [S56][S57]. The debate mirrors broader industry discussions about the balance between agility and regulatory diligence, especially as Silicon Valley actors lobby for lighter regulation [S55].
Impact of AI governance on return on investment (ROI)
Speakers: Audience member, Navrina Singh
Concern that AI governance may hinder ROI at early stages
Governance delivers clear ROI by accelerating safe AI adoption, boosting productivity and top‑line growth
An audience participant argues that AI governance may hinder ROI, suggesting companies see no return at early stages [236-237]. Navrina counters that governance provides clear ROI through faster, trusted AI deployment and increased productivity [240-246].
POLICY CONTEXT (KNOWLEDGE BASE)
Empirical studies show mixed ROI outcomes: fragmented time-savings often fail to translate into measurable business value, raising questions about governance overheads [S62]. Trust deficits and insufficient governance have been linked to poor ROI in AI deployments [S63], while other analyses argue that cost-effective, well-governed models can enhance returns [S68].
Degree of emphasis on policy and regulatory compliance for AI products
Speakers: Arvind Jain, Navrina Singh
We don’t actually think a whole lot about that (policy), but it is important to have regulations in place
Success now hinges on reliability, explainability, and regulatory compliance, not just tech
Arvind downplays the need for deep policy engagement, stating his company does not think much about it [169-170], while Navrina stresses that reliability, explainability and regulatory compliance are the true competitive moats for AI ventures [108-123].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy formulation literature stresses the importance of research-driven, stakeholder-engaged approaches to AI regulation, yet implementation varies across sectors [S48]. Risk-based governance models advocate embedding compliance checkpoints throughout the AI lifecycle, suggesting a high emphasis is warranted [S56]. Conversely, some commentators note that existing frameworks may need adaptation to keep pace with autonomous AI systems [S58].
Drivers of creative destruction in the AI era
Speakers: Arvind Jain, Navrina Singh
Creative destruction will continue as startups disrupt big tech; AI makes it easier for individuals to build products
Individuals who master AI will replace jobs; rapid unlearning and new playbooks are the real disruptive forces
Arvind argues that the entrepreneurial spirit will keep startups disrupting large firms, with AI lowering barriers for product creation [180-182]. Navrina counters that the real threat is highly AI-skilled individuals who can outpace others, emphasizing the need to unlearn old habits [185-190].
POLICY CONTEXT (KNOWLEDGE BASE)
Scholarly work differentiates between foundational AI infrastructure (high barriers) and the application layer (low barriers), identifying the latter as the primary engine of creative destruction and market reallocation [S46]. Historical analyses of economic cycles reaffirm that AI-enabled disruption continues the long-standing pattern of creative destruction [S61].
Unexpected Differences
Policy emphasis between two AI founders
Speakers: Arvind Jain, Navrina Singh
We don’t actually think a whole lot about that (policy), but it is important to have regulations in place
Success now hinges on reliability, explainability, and regulatory compliance, not just tech
Both are founders of AI-focused companies, yet Arvind downplays policy engagement while Navrina places policy and regulatory compliance at the core of competitive advantage, an unexpected divergence given their similar industry positions [169-170][108-123].
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions at AI impact summits have highlighted divergent policy priorities among founders, reflecting differing views on regulation, risk, and growth strategies [S47]. Such debates illustrate how personal leadership styles shape organizational policy emphasis.
Audience claim that AI governance reduces ROI versus Navrina’s claim of ROI benefits
Speakers: Audience member, Navrina Singh
Concern that AI governance may hinder ROI at early stages
Governance delivers clear ROI by accelerating safe AI adoption, boosting productivity and top‑line growth
The audience’s skepticism that governance hampers ROI contrasts sharply with Navrina’s assertion that governance directly drives ROI, revealing an unexpected tension between practitioner concerns and expert advocacy [236-237][240-246].
POLICY CONTEXT (KNOWLEDGE BASE)
Audience feedback points to perceived ROI erosion due to governance burdens, whereas other experts argue that well-designed governance can unlock higher returns by mitigating risk and building trust [S62][S63][S68]. This tension mirrors ongoing industry debates about the cost-benefit balance of AI oversight.
Overall Assessment

The panel largely concurred that AI is reshaping entrepreneurship by enabling leaner teams, democratizing knowledge, and creating new research capabilities. However, clear disagreements emerged around the necessity of dedicated AI risk/governance roles, the perceived ROI of AI governance, the weight of policy compliance, and the primary drivers of creative destruction.

Moderate – while participants share a common optimism about AI’s opportunities, they diverge on governance structures, policy emphasis, and strategic focus. These divergences suggest that future entrepreneurial ecosystems will need to balance rapid innovation with evolving governance and policy frameworks, influencing how startups allocate resources and design organizational roles.

Partial Agreements
Both agree that AI allows startups to be leaner and operate with minimal resources, but Suri focuses on overall product development and capital efficiency [55-61], whereas Malhar highlights how AI enables a five‑person team to perform sophisticated scientific research and cut wet‑lab expenses [90-92].
Speakers: Anirudh Suri, Malhar Bhide
AI enables startups to operate with much leaner teams and lower capital requirements. Small teams can conduct advanced research (e.g., DNA design) using AI, reducing wet‑lab costs
Both see Indian entrepreneurs succeeding globally, but Arvind attributes success to cultural drive and capital access [215-222], while Malhar points to personal risk‑taking experience from relocation and deep domain knowledge of India’s biotech landscape [200-208].
Speakers: Arvind Jain, Malhar Bhide
Indian diaspora’s cultural drive and access to capital fuel high entrepreneurial success in Silicon Valley. Relocating to the US fosters a risk‑taking mindset and market insight, while an Indian background offers domain expertise in local ecosystems
Takeaways
Key takeaways
Fundamental entrepreneurial principles (ambition, problem‑focus, team building) remain constant across tech waves, but AI reshapes the company blueprint and the role of humans.
AI democratizes knowledge, enabling founders without deep domain expertise to launch cross‑disciplinary startups and to conduct advanced research with very small teams.
Leaner, AI‑first startups can build MVPs with one or a few founders, reducing headcount and capital requirements while still needing to scale people as they grow.
Deep research and scientific validation are critical for AI‑driven products, especially in regulated fields such as biotech and healthcare.
AI governance, trust‑management, and regulatory compliance have become core competitive moats; enterprises must embed fact‑checking, safety checks, and policy alignment into their products.
Creative destruction will continue: AI lowers barriers for solo founders, and the biggest disruption may come from individuals who master AI rather than from incumbent big‑tech firms.
Indian entrepreneurs in the US benefit from exposure to capital and market dynamics, while their Indian background provides domain insights (e.g., drug‑discovery ecosystem) that can be leveraged.
New AI‑security challenges (prompt injection, hallucinations) are emerging, creating a nascent field of AI‑cybersecurity and observability tools.
Resolutions and action items
None identified
Unresolved issues
How will the emerging AI‑cybersecurity field develop standards and best practices for attacks such as prompt injection?
What concrete methods can startups adopt to detect, score, and mitigate AI hallucinations in production systems?
How should early‑stage startups structure dedicated roles or processes for AI policy, governance, and risk management?
What specific regulatory frameworks will apply to AI products in sectors like healthcare, finance, and biotech, and how can startups stay ahead of evolving rules?
How can organizations balance the perceived trade‑off between rapid AI innovation and the implementation of governance controls, especially when large enterprises claim governance slows ROI?
Suggested compromises
Adopt AI governance early to create a trusted AI moat, which can actually accelerate safe AI adoption and generate ROI, thereby reconciling speed of innovation with regulatory compliance.
Thought Provoking Comments
“The new AI wave is not only about the technology trend. With AI, everything changes – the role of the human itself is unclear, the shape of the organization changes, and you can start a company with a team of one.”
This reframes the conventional startup blueprint, suggesting that AI fundamentally alters company structure and talent needs, challenging the assumption that a typical tech startup requires a multi‑disciplinary team from the outset.
It shifted the conversation from comparing AI to previous waves to exploring how AI redefines the very mechanics of building a company. It prompted follow‑up questions about leaner teams and led Arvind to discuss the efficiency gains of using AI for tasks, setting the stage for the discussion on lean startups.
Speaker: Arvind Jain
“Because AI has gotten so good, knowledge is much more democratized. My co‑founder and I have never studied biology, yet we can build a biotech startup by using AI to read papers, talk to scientists, and even predict wet‑lab results.”
Highlights how AI lowers barriers to entry across disciplines, allowing founders without formal expertise to enter deep‑tech fields, which challenges the traditional notion that deep domain expertise is a prerequisite for biotech entrepreneurship.
Opened a new thread about cross‑disciplinary entrepreneurship and the role of AI in research. It led to deeper discussion on the importance of research, validation, and regulatory compliance in AI‑driven biotech, and reinforced the theme of AI as an enabler of novel founder profiles.
Speaker: Malhar Bhide
“The true moat for companies like yours is not just technological innovation but building trusted technology that works within regulatory guardrails. Policy, governance, and risk management are now core to competitive advantage.”
Elevates policy and governance from a peripheral concern to a strategic differentiator, emphasizing that compliance and trust are essential for scaling AI products, especially in regulated sectors.
Redirected the dialogue toward the intersection of AI and policy, prompting participants to discuss how startups embed governance, the emergence of new roles focused on risk, and the necessity of aligning AI outputs with regulatory standards.
Speaker: Navrina Singh
“AI risk is dynamic. Issues like hallucination, supply‑chain testing, and continuous evaluation mean you must prove reliability every time the model is deployed, not just once.”
Adds technical depth to the policy conversation by identifying specific AI reliability challenges that make governance an ongoing process rather than a one‑off compliance check.
Deepened the conversation on AI safety, leading Arvind to elaborate on product‑level safeguards (fact‑checking, provenance) and setting up the later audience question about AI security and hallucination mitigation.
Speaker: Navrina Singh
“We are seeing very clear ROI on AI governance. With proper risk‑management practices you can adopt third‑party AI faster, deploy more trusted products, and actually increase top‑line revenue.”
Counters a common industry narrative that governance stifles innovation, providing empirical evidence that governance can accelerate adoption and drive financial performance.
Addressed the audience’s concern about governance slowing innovation, reinforcing the earlier point about governance as a competitive advantage, and influencing the tone toward a more positive view of regulation.
Speaker: Navrina Singh
“Creative destruction will continue. AI actually makes it easier for individuals to build interesting products without being engineers or AI scientists, so innovation will still come from startups, while big firms provide scale.”
Challenges the fear that big tech’s resources will suppress startup disruption, asserting that AI democratizes creation and may even accelerate the pace of disruption.
Reinforced the earlier optimism about lean, AI‑first startups and set up Navrina’s complementary view that the real threat is not AI itself but people who master it, broadening the discussion on future competitive dynamics.
Speaker: Arvind Jain
“You should not be worried about another person or AI taking your job. You should worry about a person who is so good with AI actually replacing you.”
Shifts the focus from technology‑centric job loss to personal skill development, emphasizing the need for individuals to become proficient with AI tools to stay relevant.
Prompted a nuanced view of disruption, moving the conversation from macro‑level industry shifts to individual agency, and complemented the earlier points about rapid unlearning and adaptability.
Speaker: Navrina Singh
Audience: “Will there be a new field of AI security to handle hallucinations and prompt‑injection attacks?” Arvind: “Yes, AI introduces new attack vectors like prompt injection; detecting hallucinations and providing observability is a burgeoning discipline.”
Introduces a concrete emerging sub‑field (AI security) directly linked to earlier discussions on reliability and governance, turning abstract concerns into actionable entrepreneurial opportunities.
Served as a turning point that connected policy, technical risk, and market opportunity, leading to a concise articulation of a new entrepreneurial space and reinforcing the theme that every problem creates a venture opportunity.
Speaker: Audience (question) & Arvind Jain (answer)
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a broad comparison of AI to previous tech waves toward a nuanced exploration of how AI reshapes entrepreneurship at multiple levels. Arvind’s observation that AI changes organizational fundamentals opened the floor to talk about lean teams and new talent needs. Malhar’s example of a non‑biologist building a biotech startup illustrated AI’s democratizing power, prompting deeper focus on research rigor and regulatory compliance. Navrina’s emphasis on governance as both a moat and a source of ROI reframed policy from a hurdle to a strategic asset, which in turn led to concrete conversations about AI risk, hallucinations, and emerging AI security as a market. Together, these comments redirected the dialogue toward the interplay of technology, talent, and regulation, highlighting both opportunities for new founders and the evolving responsibilities they must assume. The cumulative effect was a richer, more forward‑looking conversation that linked abstract trends to tangible entrepreneurial strategies.

Follow-up Questions
Will AI enable startups to operate with significantly leaner teams and lower capital requirements compared to previous technology waves?
Understanding the resource efficiency of AI‑first ventures is crucial for founders, investors, and ecosystem builders.
Speaker: Anirudh Suri (to Arvind Jain)
How critical is deep scientific research for AI‑driven startups relative to earlier startup generations?
Determines whether AI startups need to invest heavily in R&D to achieve product‑market fit and regulatory compliance.
Speaker: Anirudh Suri (to Malhar Bhide)
Why should AI entrepreneurs pay attention to AI policy, governance, and geopolitics?
Policy and geopolitical factors shape market access, trust, and regulatory risk for AI products.
Speaker: Anirudh Suri (to Navrina Singh)
What specific changes have occurred in the regulatory and policy risk landscape for AI companies?
Identifies new compliance challenges (e.g., hallucination, dynamic evaluation) that startups must address.
Speaker: Anirudh Suri (to Navrina Singh)
Should every AI startup create a dedicated role or function for AI risk and policy governance?
Explores the need for internal governance structures to manage emerging AI risks and regulatory demands.
Speaker: Anirudh Suri (to Malhar Bhide and Arvind Jain)
Will the principle of creative destruction continue in the AI era, or will large incumbent tech firms dominate the innovation landscape?
Impacts expectations about startup opportunities, competition, and long‑term industry dynamics.
Speaker: Anirudh Suri (to Arvind Jain and Navrina Singh)
How can new entrepreneurs quickly unlearn legacy habits and adopt AI tools to build products faster?
Addresses the learning curve and cultural shift required for effective AI‑first entrepreneurship.
Speaker: Navrina Singh
What distinguishes Indian entrepreneurs building startups in the U.S. today from earlier generations of Indian founders?
Highlights differences in cultural familiarity, market knowledge, and risk‑taking behavior that affect venture success.
Speaker: Anirudh Suri (to Malhar Bhide and Arvind Jain)
How did you identify a high‑impact problem to solve in biotech despite not having a formal biology background?
Provides insight into interdisciplinary problem discovery and the role of AI in lowering entry barriers.
Speaker: Audience member (to Malhar Bhide)
Will a new field of AI security emerge to address threats such as prompt injection and hallucinations, and what would its scope be?
Points to a nascent research and commercial area focused on safeguarding AI systems.
Speaker: Audience member (to Arvind Jain)
What techniques (e.g., relevance scoring) can be used to detect and mitigate AI hallucinations in practice?
Seeks practical methods to improve the reliability of AI outputs, a key trust factor.
Speaker: Audience member (to Malhar Bhide)
How can AI governance be advocated when some large enterprises claim it reduces ROI and stifles innovation?
Explores strategies to demonstrate the business value of responsible AI practices.
Speaker: Audience member (to Navrina Singh)
What evaluation benchmarks and supply‑chain testing frameworks are needed to ensure AI reliability and compliance?
Calls for standardized metrics and testing regimes to address dynamic AI behavior and regulatory scrutiny.
Speaker: Navrina Singh
How does AI affect startup financing models, including capital efficiency and investor expectations?
Understanding funding dynamics is essential for founders and investors navigating the AI era.
Speaker: Arvind Jain (implied)
What are the specific challenges and opportunities of applying AI to biotech research, such as designing novel DNA sequences and navigating FDA/clinical trial pathways?
Identifies a cross‑disciplinary research frontier where AI can accelerate drug discovery while facing strict regulatory hurdles.
Speaker: Malhar Bhide
What policy frameworks are needed for AI deployment in highly regulated sectors like healthcare and finance to ensure compliance and trust?
Guides future regulatory work and helps startups align product development with sector‑specific rules.
Speaker: Navrina Singh

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion AI and the Creative Economy


Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel, comprising Nicholas Granatino of Tara Gaming, WIPO’s Kenichiro Natsume, and Creative Commons CEO Anna Tumadote, examined how artificial intelligence impacts cultural diversity in global creative output [1-4]. Anna argued that AI’s effect depends on whether models are open-source and governed transparently, noting that current trends risk weakening cultural diversity [12-17]. Ken responded that the issue is not binary; while AI can enhance copyrighted works, existing copyright systems are, in his view, capable of coping with AI-generated content [18-23]. Nicholas highlighted that AI training data largely omit India’s oral and public-domain epics, such as the Mahabharata and Ramayana, which limits representation of a fifth of the world’s population [24-32][38-44].


He suggested that digitising these epics through projects like Sarvam AI could create rich datasets and a new layer of IP built on public-domain material [45-52]. When asked whether the global IP framework is ready for large-scale AI-generated content, Ken explained that achieving consensus among 194 WIPO members is a long-term challenge, making a full legal treaty impractical at present [60-68]. Instead, he advocated a pragmatic, technology-focused approach that lets creators be remunerated while allowing tech firms to use AI outputs, even if neither side is fully satisfied [69-76].


Anna identified an ethical inconsistency: creators who share work freely see it scraped into massive models without consent, raising concerns about attribution and the erosion of the commons [79-84][99-104]. She emphasized that while some artists embrace AI, others demand credit and safeguards, arguing that a middle ground is needed to preserve the public domain’s richness [100-104]. Ken reinforced the human-centered perspective, noting that learning by imitation has always been slow for humans but is instantaneous for machines, and that a line must be drawn to protect human creativity [119-126][127-130].


Nicholas warned that open-source gateways could become a “Captain America hegemony,” concentrating power in a few platforms unless diverse cultural data are incorporated [155-159]. He expressed optimism that the underlying creative process remains human and that AI will not replace storytelling, which he sees as a uniquely human skill [160-166]. The discussion converged on the need for a human-centered governance model that balances technological advancement with cultural preservation [202-206][207-209]. Participants concluded that safeguarding cultural diversity and creator rights requires open data, nuanced licensing, and collaborative technical solutions rather than rigid global treaties [168-176][173-182].


Keypoints

Major discussion points


AI’s impact on cultural diversity hinges on openness and governance.


Anna notes that whether AI strengthens or weakens cultural diversity “depends” on factors such as open-source models, transparency, and governance frameworks, and she warns that the current trajectory leans toward weakening diversity if those values are not embedded [12-17].


India’s public-domain epics are a strategic AI training resource and a source of new IP.


Nicholas highlights that massive Indian cultural works like the Mahabharata, Ramayana and the Gita are in the public domain yet under-represented in training data, and argues that incorporating them can give India a “strategic edge” while also allowing creators to build fresh IP on top of that heritage [26-33][38-52][155-162].


The existing global IP framework is not ready for large-scale AI-generated content; a pragmatic, technology-focused approach is favored.


Ken explains that the international copyright system can still cope in principle, but consensus among 194 WIPO members is slow, so WIPO is pursuing “more practical, technological” solutions that let creators be remunerated and tech firms operate without waiting for a new treaty [55-69][64-68][70-76][168-176][202-206].


There is an ethical inconsistency between unrestricted AI training on scraped works and creators’ expectations of consent and credit.


Anna describes the “ethical inconsistency” where artists both use AI and protest its un-consented training data, calling for middle-ground mechanisms such as attribution, reward models, and nuanced licensing to preserve the commons while enabling innovation [79-102][131-138][143-152][207-209].


Overall purpose / goal of the discussion


The panel was convened to examine how generative AI is reshaping the creative industries, particularly its effects on cultural diversity, intellectual-property regimes, and emerging market opportunities (e.g., India), and to explore what governance principles, technical solutions, and policy frameworks are needed to balance innovation with the rights and interests of creators worldwide.


Overall tone and its evolution


– The conversation opens in a formal, inquisitive manner, with the moderator setting the agenda and each expert providing a measured opening response.


– As the dialogue proceeds, the tone shifts to concerned and urgent, especially when discussing risks to cultural diversity, under-representation of non-Western content, and the ethical tension around consent [12-17][79-102][131-138].


– Mid-discussion, speakers adopt a constructive and optimistic stance, highlighting opportunities (e.g., India’s public-domain assets, collaborative technological solutions) and emphasizing human-centered governance [38-52][155-162][202-206].


– The session concludes with a concise, human-focused call to action, reinforcing the centrality of people in any AI-driven creative ecosystem [202-209].


Overall, the tone moves from exploratory to cautionary, then to solution-oriented, ending with a reaffirmation of human primacy in creative governance.


Speakers

Kenichiro Natsume


– Role/Title: Assistant Director General, WIPO (policy side)


– Area of Expertise: Intellectual property policy, AI governance


– Sources: [S2]


Anna Tumadote


– Role/Title: Chief Executive Officer, Creative Commons


– Area of Expertise: Open licensing, Creative Commons, open-knowledge movement


– Sources: [S4]


Nicholas Granatino


– Role/Title: Chairman, Tara Gaming; board member of Frontier Labs, Kivita, H Company


– Area of Expertise: Gaming industry, AI investment, business strategy


– Sources: [S6]


Speaker 1


– Role/Title: Moderator / event host


– Area of Expertise: (not specified)


– Sources: [S8]


Additional speakers:


(none)


Full session report
Comprehensive analysis and detailed insights

Introduction – The moderator opened the half‑hour session by introducing the three‑person panel: Nicholas Granatino, chairman of Tara Gaming; Kenichiro Natsume, Assistant Director‑General at WIPO; and Anna Tumadote, chief executive officer of Creative Commons. He then posed the opening question: Does artificial intelligence strengthen or weaken cultural diversity in global creative output? [N]

Anna’s perspective – Anna answered that the impact of AI on cultural diversity “depends” on whether models are open‑source, transparent, and governed by clear design principles. She warned that without openness and good governance the current trajectory risks weakening cultural diversity. [N-M]

Ken’s perspective – Ken emphasized a non‑binary view: AI can enhance copyrighted works but can also threaten creators when outputs are generated without clear authorial input. He argued that the existing international copyright system, although challenged, remains capable of coping with AI‑driven disruption, likening AI to previous historic, cutting‑edge technologies. [N-M]

Nicholas’s perspective – Nicholas highlighted that AI training datasets largely omit India’s rich oral and written traditions, specifically the public‑domain epics Mahabharata, Ramayana, and Bhagavad‑Gītā. He explained that these works are under‑represented in the latent space of training data, limiting models’ ability to reflect a fifth of the world’s population. He proposed digitising the epics through the Sarvam AI project—using OCR and voice‑model technologies—to create a high‑quality public‑domain corpus that would both preserve cultural heritage and provide a new layer of IP built on that heritage. [N-M]

Follow‑up on India’s strategic edge – The moderator asked how India’s public‑domain heritage could give a strategic advantage in the emerging AI landscape. Nicholas reiterated that the epics are living traditions, still narrated by families and cited by political leaders, and that no modern IP has yet been built on them. He argued that Sarvam AI’s digitisation effort could enable India to “punch at its weight” in AI training datasets. [N-M]

Global IP framework readiness – Ken explained that, while copyright law can theoretically accommodate AI‑generated content, achieving consensus among WIPO’s 194 member states would be a protracted process. Consequently, WIPO is pursuing a pragmatic, technology‑focused route—developing infrastructural tools such as opt‑in/opt‑out mechanisms and remuneration systems—to balance creator rights with AI development, rather than waiting for a new international treaty. [N-M]

Ethical inconsistency in AI training – Anna identified a clear ethical inconsistency: creators who share their works freely see those works scraped into massive foundation models without consent or attribution, fueling fears of replacement and eroding trust in AI systems. She cited artists such as Holly Herndon and Imogen Heap who embrace AI, contrasted with creators demanding credit and safeguards. She called for a middle‑ground mechanism—attribution, remuneration, and nuanced licensing—to keep the public domain vibrant. [N-M]

Scale of AI ethics – Anna added that AI magnifies a long‑standing “5 % problem” (the small proportion of works that receive proper attribution) to a massive scale, intensifying the need for systematic consent and credit mechanisms. [N-M]

Cultural gate‑keeping concern – Nicholas warned that open‑source gateways could become a “Captain America hegemony,” allowing a few powerful platforms to dominate a freely available corpus of creativity. He stressed that AI sits atop human‑made art rather than replacing it, and that storytelling remains a uniquely human skill that will retain value in the AI era. [N-M]

Data‑commons analogy – Nicholas also drew an analogy to the Protein Data Bank, noting that the Nobel prize for protein folding should have also recognised the underlying data infrastructure, underscoring the importance of robust data commons for scientific and creative progress. [N-M]

Stakeholder diversity – Ken observed that publishers, artists, and technology firms hold very different views on AI, illustrating the fragmented landscape that any governance framework must accommodate. [N-M]

Japanese concept of “learn” – Ken added a cultural note that the Japanese term “learn” originally meant “to mimic or copy.” He argued that while humans have always learned by imitation, AI can do so instantly, dramatically accelerating the process and raising the need for safeguards that protect human creativity. [N-M]

Fragmentation vs. harmonisation – Ken reiterated that a global treaty is impractical given the need for consensus among many states. He announced WIPO’s first meeting on 17 March to discuss technological infrastructures—such as opt‑in/opt‑out tools and provenance metadata—that could provide pragmatic, short‑term solutions. [N-M]

Creative Commons’ role – Anna confirmed that Creative Commons will attend the March meeting and will continue developing nuanced licensing options that incorporate consent, attribution, and reward mechanisms for AI use. [N-M]

Advice to investors – Nicholas likened the AI opportunity to the early internet, emphasizing rapid growth and the enduring value of storytelling as a promising area for investment. [N-M]

Final guiding principle – When asked for a single principle to guide AI governance in the creative industries over the next decade, both Ken and Anna answered that a human‑centred approach is essential: humans must remain at the core of creativity and governance, with credit and control flowing back to creators, and AI treated as a collaborative tool rather than a replacement. [N-M]

Take‑aways


– Inclusion of under‑represented public‑domain works (e.g., Indian epics) is essential for preserving cultural diversity in AI outputs. [N-M]


– Current global IP frameworks are not fully equipped for the scale of AI‑generated content; pragmatic, technology‑driven mechanisms (opt‑in/opt‑out, attribution metadata, remuneration tools) are preferred over awaiting new treaties. [N-M]


– Ethical consistency—particularly consent, attribution, and remuneration—is required to preserve the commons and sustain a vibrant, diverse creative ecosystem. [N-M]


– The panel’s consensus is that governance must be human‑centred, that public‑domain data is a critical resource, and that practical technical solutions are the most realistic path forward. [N-M]

Overall, the discussion underscored that the future impact of AI on cultural diversity hinges on open, transparent governance, equitable inclusion of under‑represented heritage, and the implementation of pragmatic, human‑centred technical mechanisms. [N-M]

Session transcript
Complete transcript of the session
Speaker 1

Yes. Yes. three crucial elements. I’ll start with Nicholas Granatino, who’s on the business side, the chairman of Tara Gaming. Second is Kenichiro Natsume, who is the assistant director general at WIPO on the policy side. And we have Anna Tumadote, who is the chief executive officer of Creative Commons. Since we have a half an hour, I don’t want to waste time. I have this urge to do a lot of context setting, but I will refrain. And I will jump into the first question for each of the panelists, and then we can do it more conversational. First question to each of the panelists. Anna, can we start with you? Does AI strengthen or weaken cultural diversity in the global creative output?

Anna Tumadote

It’s a good question, and I think we can just do our context setting along the way here with this. So I think this is one of those wonderful questions where the answer is going to be, it depends. It depends. Is the model open source, and we can all interrogate what’s in it and build upon it and improve it? Or is it closed source? Is the model got good governance frameworks attached to it so we can understand some of the intentions behind it or is it all very opaque? So I think ultimately it’s going to just come down to what sorts of values and design principles we’re able to instill as we build this new ecosystem.

I think currently we are at risk for a weakening.

Kenichiro Natsume

Thank you very much. My answer is not a binary answer. Because if we think about the intellectual property aspect, namely copyright, then AI is okay. AI can be used to enhance copyrightable artworks. At the same time, that could be also kind of a threat because people can create something or generate something by using artificial intelligence. And in terms of the legal perspective or international perspective we do have a copyright system which in my personal opinion can still cope with the artificial intelligence because I think that artificial intelligence at this stage is one of the cutting edge technologies which is magnificent but one of the cutting technologies or disruptive technologies we have been experienced in the history of our human life.

Thank you.

Nicholas Granatino

Yeah. Namaste. I think I will actually give an answer which is very positive on the creativity side and that’s what we saw at Tara Gaming as an opportunity is the fact that from the web to social graph to now AI I think we always underestimate and don’t talk enough about the data and the content and the case of AI is what is called the latent space which includes the training data set, which is used for these models to respond to prompts. And if we look at the case of India, which has been mainly an oral tradition, and we look also at how much of their content have been digitized to the level of Hollywood in movies or AAA game in gaming, it is not represented.

And that has huge implication in terms of these models and whether or not they enhance creativity or whether or not they need creativity. And my answer to that is they need creativity. They need actually the wonderful epic and story and living traditions that are in the itihas, which are these Indian epics, to be represented in their data set. And at the moment, they’re not. And so the bigger question for me is how do we make sure that India, with its phenomenal history and culture and 20 % of the world population, punches at its weight in the training data sets of these AI models? And the first work that needs to be done is a creative work. It’s not an AI work.

Speaker 1

You know, Nicholas, when I sent these questions, the kick I got was Nicholas pushed back happily and he gave me three questions which I told him that one of them really fired my imagination. I want to ask him about this in terms of this aspect and this whole AI IP sort of dispute in terms of the US and Europe are currently facing massive uncertainty in terms of AI training and copyright assets and permission. And that’s either, depending on who you are, it’s either stalling, you know, development, etc. You’ve said, Nicholas, that this paralysis is India’s biggest opportunity. How does India’s rich public domain heritage, the itihas, as you called it, give us a strategic edge right now?

Nicholas Granatino

Yeah, so the Itihas. Actually, maybe most of you know one part of the Mahabharata, which is one of the Itihas. The second one is the Ramayana. The one I’m referring to is the Gita, and maybe most of you are more familiar with the Gita, which is actually just a short part of the Mahabharata. These are epics that are thousands of years old. What’s phenomenal about them is they are in the public domain, but they are still a living tradition. Parents and grandparents still tell their children about Lord Ram, about Ravan, about Hanuman. Prime Minister Modi speaks about it all the time as well. And they are in the public domain, and nobody has actually done any work with them and created any IP on top of that, of the quality we’re looking to do at Tara Gaming.

And so there is basically a data set. And what’s really exciting about Sarvam AI is that they’re actually going to help digitize, with their OCR model and their voice model, all this culture which is in the public domain, and which can enter, through Sarvam and through the work we’re doing, the data sets used for training. And that creates an opportunity, I think, which is quite unique. And the second layer is the fact that you have 20% of the world population here in India to talk about it and create rich data sets around it. And the interesting point is that on this public-domain content and body of work, you have this opportunity to create IP.

Speaker 1

And Ken, this question is for you. Is the global IP framework prepared for large-scale AI-generated content today?

Kenichiro Natsume

Thank you very much for the very interesting but big question. It’s not very easy to answer in a short time, but let me try. Artificial intelligence, particularly in the creative industry, is of course changing the work of artists and creators. I’ve been discussing this, in India for example, taking advantage of my visit to Delhi to meet with different stakeholders: publishers, the music industry, tech industries. And the views are very much different; it’s no secret, and you know that the views are very much different. That’s the reality in front of us. Even within the same segment, for example publishers, there are different views. And among artists, one of my friends, an artist, is using AI to create digital art, while at the same time other artists feel threatened by the output generated by so-called generative AI. So it’s not very straightforward, and the reality in front of us is that the views are very different, and it’s not easy to find a common denominator.

It’s not something like zero and one where we can just settle on 0.5. It’s not like mathematics. So, looking at the intellectual property system, at this stage our view is the following. We, WIPO, as a UN international organization, are sometimes asked: hey, WIPO, why don’t you think about making some rules or regulations, an international treaty on AI and intellectual property? Doesn’t sound bad, but how long would it take? We have 194 member states. Our principle is consensus. So 194 member states, including of course India, should agree upon one common thing. And just to be frank with you, don’t quote me, it’s a long journey. Sounds relatively easy? It is not. So our approach is more pragmatic.

Okay, we can think of something, a legal framework, but let’s put it aside, because the international scene is not mature enough; it’s not ready. So our idea is to think about something more practical, more technological, to see if there is any technological solution possible that both the creators’ side and the tech-industry side can live with. I would say can live with: they don’t have to be very happy. They have to be unhappy to some extent, equally. They have to compromise with each other. But it’s not an international legal negotiation. It’s a collaboration, a cooperation, to explore some technological infrastructure that people can live with, so that creators can be benefited or remunerated and tech industries can utilize those products.

Thank you.

Speaker 1

Anna, one of the things Ken said is that an artist can use AI to create artworks, and today, if I look at the music and film industries, you have composers who use AI to generate production music and a lot of musical works, maybe not declared or otherwise, but it’s a huge industry. At the same time, the creator voice often complains about the lack of consent in AI training, even while most of those creators work with AI models trained on a worldwide corpus of data and content. Do you see an ethical or rational inconsistency in this kind of use and objection, or what do you think is the correct position?

Anna Tumadote

Is there an ethical inconsistency? Yes, yes is the answer. It’s funny that you get the “hey, WIPO, can you fix this?”, because Creative Commons gets that from time to time, too. In fact, in the early days of generative AI, we were getting: hey, Creative Commons, can you fix AI for us? We’re like, okay, which direction are we going to fix it in? I’m just kidding. We never suggested that we would actually fix it. But it actually comes down to this question that you were asking. So here we have the world’s creativity that has been scraped, crawled, trawled, however you want to describe it. And we built these massive foundation models.

And the relative weight of every individual work in there is infinitesimal. It’s tiny. The bigger concern is what the use of these technologies does to the creative industries, right? It’s more a fear of replacement. It’s a labor issue. It actually feels increasingly less like a copyright issue when we’re talking about some of these considerations. But then there’s that layer on top, at the inference level, where you’re querying: tell me a story about this, tell me about a certain concept, or whatever the case may be, and we’re not seeing where that information is coming from, right?

So we’re divorced from the origin of the creativity, or the knowledge, or whatever it was, and that, I think, is going to be a longer-term problem for these tools, because you’re not really going to trust how they work. And you see it show up similarly with the artistry piece, right? We have artists who have always embraced the free-culture movement, who give things over to the public domain or release them under a Creative Commons license, and who are enthusiastically experimenting with these technologies. We have artists, too, who have vast bodies of work and are building their own models, so they’re just enhancing their own craft.

Interesting examples to look at would be Holly Herndon and Imogen Heap, who have really been at the forefront of this. But at the same time, to your point, there are artists who are playing with this yet say: no, no, anything that I create is mine, but I’m going to use all the world’s creativity here freely. And we have to find some kind of middle ground, because ultimately all knowledge builds on prior knowledge. All creativity builds on prior creativity. The richness of the public domain, like Nicholas was talking about: you can walk into any museum and be inspired. You don’t have to say, I saw this work and that work and that work, and now I’ve sketched this drawing.

But there is something with the technological layer where, if you are fusing together different things or asking for certain styles or certain inspirations, there really should be some form of credit given.

Speaker 1

Please, Nicholas.

Nicholas Granatino

Yeah, I mean, I think everybody has celebrated, including the Nobel Committee, the work of Demis Hassabis at DeepMind on protein folding. But the reality is that for 50 years scientists have been depositing their crystal structures of proteins into a database called the Protein Data Bank. I think it would have been nice of the Nobel Committee to also include the Protein Data Bank as a recipient of that Nobel Prize. The data has always been swept under the carpet. A lot of big tech is saying content is free; what people pay for is the search, the tech, the AI, whatever it is: the link graph, the social graph, and now the AI model.

And so the question I want to pose is: do we, as a society, want AI to have the best data? And the answer is probably yes, on the condition that you are open. But if you are going to make money at the gateway of the chat, or whatever your application is, and you’re going to assume that everybody works for you, I don’t think society wants that. President Macron yesterday said it’s not about regulation, it’s about civilization. And as a civilization, what do we want as a future? How much do we want to reward the work of these protein crystallographers, of these creatives, and so on? That’s the real question.

Speaker 1

Please, Ken.

Kenichiro Natsume

Just one quick note. Anna’s comment was very touching to me. We human beings refer to, or learn from, other people’s creative work, which is true. I’m Japanese, and the Japanese word for “learn” originally meant to mimic or to copy. So learning starts from looking at another person’s work and trying to imitate it, and on that basis we develop our own flavor or texture. And this has been done by human beings for ages. The big difference is that if it’s done by a human, it takes time, but if it’s done by a computer, it takes very, very little time. So what the machine is doing is essentially more or less the same, but the speed is completely different.

And maybe we have to draw a line. This is exactly what Anna was saying. And where we have to draw that line is a difficult question.

Anna Tumadote

You know, it’s so funny you say that, because even in the open movement and the open-knowledge communities, we’ve had problems for years, but they’ve been problems on the margins, right? They’ve been the sort of 5% problem, and we’ve thought to ourselves: this is 95% good, and so that’s good enough. But AI comes along, and the scale is so massive that now you have to grapple with this. So, for instance, what if you’ve shared your work freely and then it’s used for nefarious purposes? Nobody wants that. That’s another ethical conundrum you face. But copyright is not built to handle that. Society ideally would find ways around that.

Maybe there are normative frameworks we need to introduce. Maybe we need to think about different legal or technical solutions to this, because the scale is just so extreme.

Speaker 1

So, Anna, is the open movement something that can withstand this AI onslaught, the spread of AI? Is it structured to do that, or would it have the same challenges as, let me call it, the proprietary copyright model, the traditional copyright model?

Anna Tumadote

I think there’s a transformation that’s going to have to happen, because what we’re actively seeing is people pulling back from sharing their works. If they have no consent mechanism, no agency over how their work is used, they’re going to do the only thing they can in that situation: take it back, put it behind a wall, try to make you pay for it, or pull any of the other levers available to them. And we are actively seeing the shrinking of the commons already. This is a really bad outcome. And here’s the real kicker when we talk about how humans have been doing this for a long time.

Human-to-human collaboration relies on copyright clarity, on the CC licenses, on the ability to know that if I write something, you know what you can do with it because it’s under this license, and so on. But now we’re seeing creators say: no, I’m going to go more restrictive. And that breaks the human-collaboration element. So there are all these downstream negative consequences. I think we can withstand it, but collectively we have to reckon with the fact that there is a problem, and the scale is so magnificent that we can’t just stick our fingers in our ears and say that ultimately this is in the public interest.

It’s not going to be that way because if nobody shares, then there’s nothing left for us.

Speaker 1

You know, Nicholas, to your point on the opportunity for India in the use of the Itihas and the public domain, the opportunity for India to create that IP layer: isn’t there also the danger that we might slip into cultural gatekeeping by a handful of AI platforms? Is that an overwhelming danger as well?

Nicholas Granatino

Yeah, absolutely. I mean, Europe and Mistral are pushing a lot for open source, and China is pushing a lot in the open-source community as well. But that’s just a layer above what I think we’re talking about here today. There’s something I call the Captain America hegemony, which is basically that we all grew up, in France, in India, anywhere, with Captain America as this kind of powerful figure with all the weapons, all the defense mechanisms, and so on. And if you just have open source, you have a gateway which is free to steal a corpus of creativity, within this Captain America hegemony. What we have to remember is that AI sits on top of something that has already been done, and the creative process is not really part of that.

It’s above language, it’s above image, and so on. And everybody in the AI community agrees there are two or three things that need to happen to reach AGI. Those two or three things probably overlap with a lot of the creativity that makes us unique and allows us to make this corpus that AI will continue to be trained on. So I’m quite optimistic, because there are things that precede the art, whether it’s text, image, video or games, that are still a creative process, sometimes involving hundreds of people. It’s not that you’re going to have agents speaking to each other and creating a game.

We’re very far from that.

Speaker 1

Ken, to your point that we need to find a middle ground: realistically, as a copyright lawyer, I’d note that the Berne Convention, Rome, the whole push towards a harmonized, largely coexistent copyright model across the world, has generally worked. The one exception is the Broadcast Treaty, which I started working on when I was a young associate; it’s still being discussed, and I love those documents because I can see how the conversation has changed. But in the context of IP and AI, is global harmonization realistic, or is fragmentation something that we’ll all have to live with?

Kenichiro Natsume

I wish I could immediately say yes. However, the reality, as I mentioned briefly before, is that reaching consensus among 194 member states, including this country and other big countries, is not always easy. That’s why we are opting at this stage for a somewhat softer approach, so that a technological solution, a technological platform or infrastructure, could actually solve the issues: creators can be rewarded or remunerated appropriately, and tech companies can easily recognize what is opted in and what is opted out, what artwork was generated by artificial intelligence and what was made by a human being, so that they can understand what can and cannot be done.

So that’s the approach we are taking. And just for your information, we will launch the first meeting on that next month, March 17th, which will be available online. So please stay tuned. Thank you.

Anna Tumadote

Oh, it’s in our calendar. Yeah, we’ll be there, because I was thinking about this global-standard-and-framework question. One of the things we’ve tried to do with the Creative Commons community is to think about: what are the things that everybody wants in this moment? What are the choices they want, and what are the conditions under which they would share? You can imagine it going everywhere from, to your point, the opt-out in the EU, “no, I’m not interested in this” (though it’s very important that we maintain limitations and exceptions there for research), all the way to the full yes, because maybe there’s a world where people say: put me in, put me in and tell people who I am.

But somewhere in between there, there’s a “yes if”: yes if you reward me, yes if you attribute me, yes if you contribute to this project, yes if you support the open infrastructure, and so on. I think we just have to get a lot more creative and nuanced within that spectrum.

Speaker 1

I sense that, at least for the short term, countries are going to try to find their own policy solutions, and I would hope that the intention is to harmonize as much as possible, because the implications of the AI business across the world require harmonization, scream for harmonization, and so will the businesses that use those models to create more IP or IP-like content. Nicholas, you sit on the boards of frontier-lab companies like Kivita and H Company. If there’s one mandate you could give to Indian investors and the creators in this room to ensure that India is not just a consumer of AI in 2030, what would that be?

Nicholas Granatino

No, I think as an investor it’s a tremendous time. It feels like the internet all over again. There’s lots of opportunity. It’s moving very fast, much faster than the internet, so it’s a bit difficult to pick the right opportunity. But I think it’s going to be a collaboration between these wonderful tools that we have and human creativity, and that is going to stay. Some people say storytelling is going to be the main skill in business; that is very much a human quality. So the future is bright.

Speaker 1

I’m seeing this big flashing red sign which says time’s up. I don’t know whether mine or the panel’s; I’m hoping it’s only the panel’s. But I’m going to do a little Don Quixote thing and ask one last question: what single principle should guide international AI governance in the creative industries over the next decade? Ken?

Kenichiro Natsume

That’s a big question. It says time’s up, so let me be very brief. I think we should take a human-centered approach, because creativity still comes from human beings’ activity, not from artificial intelligence. That is the one fundamental we should hold to. Thank you.

Anna Tumadote

I’ll just say plus one. Just keep the humans. Keep the humans at the center.

Speaker 1

Insightful answer as always. Thank you. Thank you to the panel. Thank you to this very engaging audience. Thank you for listening to us.

Related Resources: Knowledge base sources related to the discussion topics (31)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“The moderator introduced the three‑person panel: Nicholas Granatino, chairman of Tara Gaming; Kenichiro Natsume, Assistant Director‑General at WIPO; and Anna Tumadote, chief executive officer of Creative Commons.”

The knowledge base lists the same three speakers with those exact titles, confirming the report’s description [S2].

Confirmed (high)

“AI training datasets largely omit India’s rich oral and written traditions, specifically the public‑domain epics Mahabharata, Ramayana, and Bhagavad‑Gītā, leaving them under‑represented in model “latency space”.”

Sources note that Western-trained models lack Indian cultural material and that the epics are essential yet missing from many datasets, supporting the claim [S1] and [S106].

Additional Context (medium)

“Anna argued that AI’s impact on cultural diversity depends on openness, transparency, and good governance; without these, AI could weaken cultural diversity.”

Broader discussions in the knowledge base highlight concerns about linguistic and cultural diversity in AI and stress the need for open, transparent models to preserve cultural heritage [S28] and [S99].

Additional Context (medium)

“Ken stated that the existing international copyright system, though challenged, remains capable of coping with AI‑driven disruption and that AI is comparable to historic cutting‑edge technologies.”

The knowledge base outlines the complexity of applying traditional copyright to AI-generated content and notes ongoing policy work, providing nuance to Ken’s optimism but not confirming full capability [S33], [S15], [S20], [S105].

Additional Context (low)

“WIPO is pursuing pragmatic, technology‑focused tools such as opt‑in/opt‑out mechanisms and remuneration systems rather than waiting for a new international treaty.”

While the knowledge base does not mention these specific tools, it emphasizes that existing legal instruments and multi-stakeholder approaches are being leveraged to address AI and IP issues, adding relevant background [S105] and [S33].

Correction (low)

“The existing international copyright system remains capable of coping with AI‑driven disruption.”

The knowledge base points to significant unresolved challenges and divergent jurisdictional approaches to AI-generated works, suggesting the system’s capability is not yet assured [S33] and [S15].

External Sources (109)
S1
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-and-the-creative-economy — I’m seeing this big flashing red sign which says time’s up. I don’t know, mine or the panel’s. I’m hoping it’s only the …
S2
Panel Discussion AI and the Creative Economy — Yes. Yes. three crucial elements. I’ll start with Nicholas Granatino, who’s on the business side, the chairman of Tara G…
S3
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — -Ken Ichiro Natsume- Role/expertise not clearly specified in transcript
S4
Panel Discussion AI and the Creative Economy — Yes. Yes. three crucial elements. I’ll start with Nicholas Granatino, who’s on the business side, the chairman of Tara G…
S5
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-and-the-creative-economy — I’m seeing this big flashing red sign which says time’s up. I don’t know, mine or the panel’s. I’m hoping it’s only the …
S6
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-and-the-creative-economy — Yes. Yes. three crucial elements. I’ll start with Nicholas Granatino, who’s on the business side, the chairman of Tara G…
S7
Panel Discussion AI and the Creative Economy — – Anna Tumadote- Nicholas Granatino – Nicholas Granatino- Anna Tumadote – Nicholas Granatino- Speaker
S8
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S9
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S10
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S11
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Hannah Taieb:Real diversity is very important indeed, and it all depends on the models and business models. Algorithms a…
S12
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-levelforumat the IGF 2024 in Riyadhthat brought together leaders from gover…
S13
Open Forum #33 Building an International AI Cooperation Ecosystem — **Professor Dai Li Na** from the Shanghai Academy of Social Sciences presented a comprehensive case study of Shanghai’s …
S14
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — I would emphasize there’s two ingredients that are necessary, which often are associated with discussions of responsible…
S15
Ties between generative artificial intelligence and intellectual property rights — It is during this unsupervised learning process that the first copyright issue arises, which relates to the presence of …
S16
The 9th WIPO Conversation on Intellectual Property and Frontier Technologies – Training the Machines: Bytes, Rights and the Copyright Conundrum — The WIPO conversation training will address the ongoing debate faced by AI developers, who rely heavily on publicly avai…
S17
WS #270 Understanding digital exclusion in AI era — The speaker advocates for a human-centered approach in AI design to ensure inclusivity and accessibility. This approach …
S18
9821st meeting — The Secretary-General emphasizes the importance of maintaining human control over AI systems. This is crucial to ensure …
S19
The entropy trap: When creativity forces AI into piracy — While the technology’s format is legally irrelevant, the court had to determine at which point of the process the reprod…
S20
The intellectual property saga: approaches for balancing AI advancements and IP protection |Part 3 — The intellectual property saga: The age of AI-generated content | Part 1 The intellectual property saga: AI’s impact on …
S21
Responsible AI in India Leadership Ethics & Global Impact — So you have to infuse more people into that. Legal teams. So small organizations cannot do that. So the people, process,…
S22
OpenAI delays Media Manager amid creator backlash — In May, OpenAI announced plans for ‘Media Manager,’ a tool to allow creators to control how their content is used in AI …
S23
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — World Intellectual Property Organization: Mr President, distinguished delegates, throughout today we have heard world l…
S24
How to make AI governance fit for purpose? — Legal and regulatory | Development The speed of AI development creates uncertainty and challenges that exceed current c…
S25
Artificial intelligence and machine learning in armed conflict: A human-centred approach — specific rules of international humanitarian law. AI and machine-learning systems remain tools that must be used to serv…
S26
Digital Humanism: People first! — Pavan Duggal: Okay. Thank you for giving this opportunity. Today we are actually undergoing a new revolution. This is an…
S27
Ateliers : rapports restitution et séance de clôture — Joseph Nkalwo Ngoula Merci. C’est toujours difficile de restituer la parole d’experts de haut vol. sans courir le risque…
S28
Artificial intelligence — Cultural diversity
S29
How African knowledge and wisdom can inspire the development and governance of AI — H.E Muhammadou M.O. Kah:Thank you so much, and good afternoon. And apologies, I was somewhere else, being pulled in anot…
S32
The Alan Turing Institute stresses AI’s vital role in UK national security — A recentreportfrom the Turing’s Centre for Emerging Technology and Security (CETaS), commissioned by the UK government, …
S33
AI-generated content and IP rights: Challenges and policy considerations — Many regulations and policies are primarily focused on ethics, accountability, and risk management, yet barely address t…
S34
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Current policies often replicate Western standards, ignoring local contexts Demands on policy exist without the buildin…
S35
Smart Regulation Rightsizing Governance for the AI Revolution — This comment is deeply insightful because it cuts through the optimistic summit rhetoric to present a stark geopolitical…
S36
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Furthermore, the speakers stress the importance of source reliability and ethically sourced data in AI. They note that c…
S37
Session — The discussion maintains a consistently academic and diplomatic tone throughout. Both participants approach the topic wi…
S38
Ad Hoc Consultation: Thursday 1st February, Morning session — From these positions, it is clear that Papua New Guinea is acting in diplomatic accord with the initiatives presented by…
S39
From Technical Safety to Societal Impact Rethinking AI Governanc — The discussion began with a formal, academic tone but became increasingly critical and urgent throughout. Speakers expre…
S40
Launch / Award Event #168 Parliamentary approaches to ICT and UN SC Resolution 1373 — The tone was largely formal and informative, with speakers providing expert perspectives in a professional manner. There…
S41
Panel Discussion AI and the Creative Economy — Anna highlights an ethical gap where creators lack consent and credit when AI uses their works. She calls for mechanisms…
S42
RESEARCH PAPERS — 57 Story, A., et al (eds.) ‘The Copy/South Dossier: Issues in the economics, politics and ideology of copyright in the …
S43
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Furthermore, transparency issues were identified regarding web content and LLMs. The analysis noted that creative common…
S44
Policy Brief — The new digital environment offers both opportunities and challenges for developing countries. New international legal r…
S45
Artificial intelligence — Cultural diversity
S46
Open Forum #37 Digital and AI Regulation in La Francophonie an Inspiration and Global Good Practice — Boukar Michel: Thank you, Mr. Henri. Mr. Ambassador in charge of digital, thank you for giving me this opportunity to ta…
S47
How to ensure cultural and linguistic diversity in the digital and AI worlds? — He underscored the enormous potential for linguistic expansion to support SDG 10: Reduced Inequalities and SDG 16: Peace…
S48
The 9th WIPO Conversation on Intellectual Property and Frontier Technologies – Training the Machines: Bytes, Rights and the Copyright Conundrum — The WIPO conversation training will address the ongoing debate faced by AI developers, who rely heavily on publicly avai…
S49
Ties between generative artificial intelligence and intellectual property rights — It is during this unsupervised learning process that the first copyright issue arises, which relates to the presence of …
S50
Anthropic AI training upheld as fair use; pirated book storage heads to trial — A US federal judge has ruled that Anthropic’s use of books to train its AI modelfalls under fair use, marking a pivotal …
S51
Authors challenge Meta’s use of their books in AI training — A lawsuit filed by authors Richard Kadrey, Sarah Silverman, and Ta-Nehisi Coates against Metahas taken a significant ste…
S52
How Trust and Safety Drive Innovation and Sustainable Growth — Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was unexpected cons…
S53
WS #187 Bridging Internet AI Governance From Theory to Practice — Governance Implementation Challenges Uses historical examples of radio frequency spectrum and telecom network interconn…
S54
Laying the foundations for AI governance — The path forward likely requires synthesis: balancing international cooperation with respect for national differences, i…
S55
Open Forum #3 Cyberdefense and AI in Developing Economies — High level of consensus on problem identification and fundamental challenges, but more divergent views on solutions and …
S56
AI and international peace and security: Key issues and relevance for Geneva — Realism: Realism in this context emphasizes the importance of grounding governance frameworks in practical consideration…
S57
Closing remarks – Charting the path forward — Moving from principles to practical solutions, tools, technical standards and specific initiatives is essential for achi…
S58
AI-generated content and IP rights: Challenges and policy considerations — Many regulations and policies are primarily focused on ethics, accountability, and risk management, yet barely address t…
S59
360° on AI Regulations — AI regulations are considered crucial and should not be limited by borders, as they have a significant impact on various…
S60
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — 2. Policy Harmonisation and Regional Integration: 3. Contextualising Policies and Technologies: Adamma Isamade: Okay, …
S61
Leaders TalkX: Looking Ahead: Emerging tech for building sustainable futures — Dr. Sharon Weinblum:Thank you very much for giving me this opportunity to speak on such an important subject and to be a…
S62
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S64
Policy Network on Meaningful Access: Meaningful access to include and connect | IGF 2023 — They support practical initiatives such as digitisation projects, fellowships, and hackathons, contributing to the prese…
S65
WIPO Conversation on Intellectual Property (IP) and Artificial Intelligence (AI) — 25. The number of countries with expertise and capacity in AI is limited. At the same time, the technology of AI is adva…
S66
Artificial intelligence — Cultural diversity
S68
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-level forum at the IGF 2024 in Riyadh that brought together leaders from gover…
S69
WS #98 Towards a global, risk-adaptive AI governance framework — 2. The importance of flexible frameworks that account for cultural differences and evolving technology. 3. The recognit…
S70
Open-source tech shapes the future of global AI governance — As the world marks a decade since China introduced the idea of building a ‘community of shared future in cyberspace,’ th…
S71
Panel Discussion AI and the Creative Economy — There is potential for creating new IP on public domain content, presenting unique opportunities for countries with rich…
S72
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argued that India is uniquely positioned to lead in AI deployment due to its status as the world’s strongest grow…
S73
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And as we look at the journey on AI, which is just beginning for most of the world, what I see is if I look at the US, f…
S74
Open Forum #37 Digital and AI Regulation in La Francophonie an Inspiration and Global Good Practice — It’s unexpected that a French ambassador and an African regional representative would both emphasize the importance of d…
S75
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — In conclusion, AI presents both opportunities and challenges. Effective regulation is crucial to harness the potential b…
S76
The intellectual property saga: The age of AI-generated content | Part 1 — The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2 The intellectual property saga: app…
S77
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — There’s no really ethically sourced data and models of data scraping are not consensual. This will help establish a res…
S78
The 9th WIPO Conversation on Intellectual Property and Frontier Technologies – Training the Machines: Bytes, Rights and the Copyright Conundrum — The WIPO conversation training will address the ongoing debate faced by AI developers, who rely heavily on publicly avai…
S79
Session — The discussion maintains a consistently academic and diplomatic tone throughout. Both participants approach the topic wi…
S80
Ad Hoc Consultation: Thursday 1st February, Morning session — In a formal and courteous address, the speaker began by respectfully acknowledging the presiding official, Madam Chair, …
S81
Launch / Award Event #168 Parliamentary approaches to ICT and UN SC Resolution 1373 — The tone was largely formal and informative, with speakers providing expert perspectives in a professional manner. There…
S82
Pre 6: Countering Disinformation and Harmful Content Online — The discussion began with a measured, academic tone as experts presented frameworks and standards. However, it became in…
S83
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The moderator opens, transitions, and closes the session, guaranteeing that speakers are introduced, the discussion proc…
S84
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S85
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S86
WS #25 Multistakeholder cooperation for online child protection — The tone of the discussion was serious and concerned, reflecting the gravity of the issues being discussed. However, it …
S87
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S88
Women, peace and security — The overall tone was one of concern and urgency. Many speakers expressed alarm at negative trends and backsliding on wom…
S89
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S90
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion maintained an optimistic and forward-looking tone throughout, with speakers expressing confidence in Indi…
S91
Building the Workforce_ AI for Viksit Bharat 2047 — The tone was formal and optimistic throughout, maintaining a diplomatic and collaborative atmosphere. Speakers consisten…
S92
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S93
The Role of Government and Innovators in Citizen-Centric AI — The discussion maintained an optimistic and collaborative tone throughout, with speakers expressing enthusiasm about AI’…
S94
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Regulating information and protecting the creative industries of our countries. The current business model of these …
S95
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — – A feedback form was shared at the end of the session for further input from participants. Ahmad Bhinder: Well, sorry…
S96
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — And thank you so much for joining us. And if you continue with your thoughts about the conversation, you can use the has…
S97
Open Forum #64 Women in Games and Apps: Innovation, Creativity and IP — Kristine Schlegelmilch: hear you. Go ahead, Christine. Thanks so much, Richard, and a big thank you, Ella, for opening…
S98
Open Forum #12 Game on Exploring IP and Resolving Disputes in Esports — – **Alexia Gkoritsa** – Co-moderator from the WIPO Arbitration and Mediation Center (AMC) – **Richard Frelick** – Moder…
S99
Inclusive AI_ Why Linguistic Diversity Matters — And so when I joined Current AI early this year, multilingual diversity was already a topic. And I was very happy about …
S100
Leaders TalkX: Local to global: preserving culture and language in a digital era — As Caroline Vuillemin concluded, preserving linguistic diversity and cultural heritage requires sustained political will…
S101
A Global Compact for Digital Justice: Southern perspectives | IGF 2023 — Anna Christina emphasises the significance of a multi-stakeholder approach in the governance of digital platforms, under…
S102
WSIS Action Line C8: Key messages in preparation for the UNESCO MONDIACULT Conference in 2025 — Laura Nonn:Good morning, everyone. My name is Laura Nunn. I work at the culture sector at UNESCO headquarters in Paris. …
S103
Cooperation for a Green Digital Future | IGF 2023 — By promoting digital innovation, Thorne aims to foster economic growth and address the goal of reducing inequalities, al…
S104
For the record: AI, creativity, and the future of music — Copyright Protection and Legal Framework Statement that ‘We’re really good at copyright. We figured it out’ and that th…
S105
Keynotes — O’Flaherty emphasizes that we are not operating in a legal vacuum when it comes to digital governance. He argues that th…
S106
From Innovation to Impact_ Bringing AI to the Public — The cultural preservation argument proves particularly nuanced. Sharma illustrates how Western-trained models may lack u…
S107
Disinformation and Misinformation in Online Content and its Impact on Digital Trust — Mike Mpanya: Yeah, yeah. I would say as someone who’s going to live in the future as well, I’m actually very hopeful aro…
S108
WS #219 Generative AI Llms in Content Moderation Rights Risks — Dhanaraj Thakur provided extensive analysis of how language inequities create systematic discrimination in LLM-based con…
S109
The Expanding Universe of Generative Models — The current training strategy struggles to progress as models advance beyond the knowledge of an average person They st…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Anna Tumadote
4 arguments · 198 words per minute · 1312 words · 395 seconds
Argument 1
Openness and governance determine whether AI weakens or strengthens cultural diversity
EXPLANATION
Anna argues that the effect of AI on cultural diversity depends on whether the models are open source and governed transparently. If AI systems are built with clear values and design principles, they can support diversity; otherwise they risk weakening it.
EVIDENCE
She explains that the answer to the question of AI’s impact on cultural diversity “depends” on factors such as whether the model is open source and can be interrogated, and whether it has good governance frameworks that reveal its intentions; she notes that currently the situation is risky and leans toward weakening cultural diversity [12-17].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel discussion notes that openness, transparency and governance frameworks shape AI’s impact on cultural diversity, highlighting risks of weakening diversity when governance is lacking [S2] and the risk of cognitive colonialism is discussed in [S26].
MAJOR DISCUSSION POINT
Openness and governance determine whether AI weakens or strengthens cultural diversity
AGREED WITH
Nicholas Granatino
DISAGREED WITH
Nicholas Granatino, Kenichiro Natsume
Argument 2
Open‑movement perspective: the public domain is essential for AI development, but creators need clear incentives to share
EXPLANATION
Anna highlights that while the public domain fuels AI training, creators who contribute freely are increasingly pulling back due to lack of consent and attribution. She calls for mechanisms that reward and protect creators while preserving open access.
EVIDENCE
She points out the ethical inconsistency of using scraped works without consent, cites examples of artists like Holly Herndon and Imogen Heap who experiment with AI, and stresses the need for a middle ground that gives credit and possibly remuneration to creators [99-104][80-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Anna’s remarks in the panel emphasize the public domain’s role and the need for incentives and credit for creators, echoed by discussions of creator-control tools such as OpenAI’s Media Manager [S22] and consent issues in AI training [S15] [S2].
MAJOR DISCUSSION POINT
Open‑movement perspective: the public domain is essential for AI development, but creators need clear incentives to share
AGREED WITH
Nicholas Granatino
DISAGREED WITH
Kenichiro Natsume, Nicholas Granatino
Argument 3
There is a clear ethical inconsistency: creators lack consent and attribution when their works are scraped for AI training
EXPLANATION
Anna states that it is ethically inconsistent for AI systems to train on copyrighted works without the creators’ permission or proper attribution. She argues that this lack of consent undermines trust in AI and harms the creative ecosystem.
EVIDENCE
She answers the question directly with “Yes, yes is the answer” and explains that artists complain about the lack of consent in AI training, noting that the massive scraping of works creates an ethical problem [80-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel raises the ethical inconsistency of training on copyrighted works without consent, aligning with analysis of consent and compensation in AI training datasets [S15] and WIPO’s concerns about copyrighted material in training [S16] [S2].
MAJOR DISCUSSION POINT
There is a clear ethical inconsistency: creators lack consent and attribution when their works are scraped for AI training
AGREED WITH
Kenichiro Natsume
DISAGREED WITH
Kenichiro Natsume, Nicholas Granatino
Argument 4
Keep humans at the centre of AI systems, ensuring credit, control, and benefit flow back to creators
EXPLANATION
Anna reinforces the idea that humans must remain central in AI-driven creative processes, with mechanisms to ensure creators receive credit and benefits. She emphasizes that AI should augment, not replace, human creativity.
EVIDENCE
In her closing remarks she simply adds “plus one. Just keep the humans at the center” and earlier she discussed the need for credit when AI fuses styles or inspirations [207-209][102-104].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Her closing comment about keeping humans central is reflected in the panel and reinforced by calls for human-centered AI governance in the UN Secretary-General’s remarks [S18] and the human-centred approach highlighted in [S17] [S2].
MAJOR DISCUSSION POINT
Keep humans at the centre of AI systems, ensuring credit, control, and benefit flow back to creators
AGREED WITH
Kenichiro Natsume, Nicholas Granatino
K
Kenichiro Natsume
5 arguments · 133 words per minute · 952 words · 428 seconds
Argument 1
AI can both enhance copyright‑protected works and pose a threat; existing IP systems can still cope
EXPLANATION
Kenichiro says AI can be used to enrich copyrighted works, but it also raises concerns when AI generates new creations. He believes the current international copyright framework is capable of handling these challenges.
EVIDENCE
He notes that AI can enhance the “copyright table” of artworks while also being a threat when creations are generated by AI, and asserts that the existing copyright system can still cope with AI despite its disruptive nature [20-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kenichiro states that AI can enrich copyrighted works while existing frameworks can cope, a view echoed in the panel discussion and in the broader IP analysis of AI impacts [S2] and the intellectual property saga overview [S20].
MAJOR DISCUSSION POINT
AI can both enhance copyright‑protected works and pose a threat; existing IP systems can still cope
DISAGREED WITH
Anna Tumadote, Nicholas Granatino
Argument 2
Practical technological solutions (e.g., opt‑in/opt‑out mechanisms) can help Indian creators be remunerated while enabling AI use
EXPLANATION
Kenichiro proposes focusing on pragmatic, technology‑based tools such as opt‑in/opt‑out systems that let creators be rewarded while allowing AI developers to use data. He emphasizes that this approach is more feasible than waiting for a new treaty.
EVIDENCE
He describes a pragmatic approach that looks for technological infrastructure allowing creators to be remunerated and tech companies to recognize opted-in or opted-out content, and mentions an upcoming meeting on March 17 to discuss this solution [68-76][168-171].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He proposes opt-in/opt-out mechanisms, detailed in the panel, and aligned with Indian responsible AI initiatives described in [S21] and the France-India trusted AI collaboration [S14] [S2].
MAJOR DISCUSSION POINT
Practical technological solutions (e.g., opt‑in/opt‑out mechanisms) can help Indian creators be remunerated while enabling AI use
AGREED WITH
Anna Tumadote
Argument 3
Achieving consensus among 194 WIPO members is slow; a pragmatic, technology‑focused approach is preferred over a new treaty
EXPLANATION
Kenichiro explains that reaching consensus among all WIPO member states would take a long time, so WIPO is opting for a more practical, technology‑driven solution rather than drafting a new international treaty on AI and IP.
EVIDENCE
He outlines that WIPO has 194 member states, operates on consensus, and therefore a new treaty would be a long journey; instead, the organization is focusing on pragmatic, technological solutions that can be implemented sooner [60-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel notes WIPO’s preference for pragmatic, technology-driven solutions over a new treaty, corroborated by WIPO’s own statement on a pragmatic approach [S23] and the challenges of consensus among 194 members [S2].
MAJOR DISCUSSION POINT
Achieving consensus among 194 WIPO members is slow; a pragmatic, technology‑focused approach is preferred over a new treaty
AGREED WITH
Anna Tumadote
DISAGREED WITH
Speaker 1
Argument 4
Human‑learning by imitation is traditional, but AI’s speed creates a novel dilemma that demands a boundary line
EXPLANATION
Kenichiro notes that humans have always learned by copying others, a process that takes time, whereas AI can replicate at unprecedented speed. This acceleration raises new questions about where to draw the line between acceptable learning and problematic automation.
EVIDENCE
He explains that the Japanese term for learning originally meant mimic or copy, and while humans take time to learn, computers can do it in “very, very limited time,” highlighting the need to consider where to draw a line [119-127].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
His observation about imitation versus AI speed and the need for a boundary line is captured in the panel and further discussed in the AI governance speed challenges report [S24] [S2].
MAJOR DISCUSSION POINT
Human‑learning by imitation is traditional, but AI’s speed creates a novel dilemma that demands a boundary line
DISAGREED WITH
Anna Tumadote, Nicholas Granatino
Argument 5
Adopt a human‑centered approach because creativity still comes from human beings’ activity, not from artificial intelligence
EXPLANATION
In his final remarks, Kenichiro stresses that AI governance should prioritize human creativity, asserting that AI does not generate original creativity on its own.
EVIDENCE
He succinctly states that a “human-centered approach” is needed because creativity still originates from human activity [204-205].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
His final call for a human-centered approach matches the panel’s emphasis and the Secretary-General’s call for human control over AI systems [S18] and the human-centred AI principles in [S17] [S2].
MAJOR DISCUSSION POINT
Adopt a human‑centered approach because creativity still comes from human beings’ activity, not from artificial intelligence
AGREED WITH
Anna Tumadote, Nicholas Granatino
N
Nicholas Granatino
5 arguments · 163 words per minute · 1144 words · 420 seconds
Argument 1
Training data lack representation of Indian epics; inclusion is needed to preserve and boost cultural creativity
EXPLANATION
Nicholas points out that Indian oral traditions and epics are under‑represented in AI training datasets, which limits the models’ ability to reflect Indian culture. He calls for these works to be incorporated to enhance creativity and cultural diversity.
EVIDENCE
He describes India’s oral tradition, the limited digitisation of its content compared to Hollywood, and stresses that Indian epics are not present in current training data, arguing that they need to be included for AI to boost creativity [26-31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nicholas highlights the under-representation of Indian epics in training data, a point made in the panel and supported by discussions on cultural bias and the need for sovereign datasets [S11] and the India-France AI collaboration [S14] [S2].
MAJOR DISCUSSION POINT
Training data lack representation of Indian epics; inclusion is needed to preserve and boost cultural creativity
DISAGREED WITH
Anna Tumadote, Kenichiro Natsume
Argument 2
Indian public‑domain epics (Mahabharata, Ramayana, Gita) offer a rich, living dataset for AI training and novel IP creation
EXPLANATION
Nicholas explains that these ancient epics are in the public domain and remain part of living tradition, providing a valuable, culturally rich dataset for AI models. Leveraging them can also enable the creation of new IP built on this heritage.
EVIDENCE
He details the Mahabharata, Ramayana, and Gita as public-domain works that are still told across generations, describes how Sarvam AI will digitize them via OCR and voice models to build a dataset, and notes the opportunity to create new IP from this public-domain content [38-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel details the richness of the Mahabharata, Ramayana and Gita as public-domain resources for AI, aligning with the proposal to digitise them via OCR and voice models [S2] and the broader push for inclusive AI datasets [S11].
MAJOR DISCUSSION POINT
Indian public‑domain epics (Mahabharata, Ramayana, Gita) offer a rich, living dataset for AI training and novel IP creation
Argument 3
Data is often treated as free while AI services monetize access; society must decide how to balance open data with fair compensation
EXPLANATION
Nicholas argues that while data is presented as free, AI providers monetize the resulting services, raising questions about equitable compensation for the original data contributors. He suggests society must deliberate on the appropriate balance.
EVIDENCE
He observes that big tech treats content as free and monetizes search, graph, and AI services, questioning whether society wants AI to have the best data for free while companies profit at the gateway, and cites President Macron’s comment about civilization and the need to decide the future of creative work [110-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
His critique of free data versus monetised AI services is echoed in the panel and in analyses of consent and compensation for training data [S15] and the OpenAI Media Manager debate [S22] [S2].
MAJOR DISCUSSION POINT
Data is often treated as free while AI services monetize access; society must decide how to balance open data with fair compensation
AGREED WITH
Anna Tumadote
DISAGREED WITH
Kenichiro Natsume, Anna Tumadote
Argument 4
Unchecked commodification of freely shared cultural content risks shrinking the commons; societal safeguards are needed
EXPLANATION
Nicholas warns that without proper safeguards, creators may withdraw their works from the public domain, leading to a contraction of the commons. He calls for mechanisms that protect shared cultural heritage while enabling AI innovation.
EVIDENCE
He notes the shrinking of the commons as creators become more restrictive due to lack of consent mechanisms, describing this as a “bad outcome” with downstream negative consequences for collaboration [144-150].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel warning about shrinking the commons is reinforced by the concept of cognitive colonialism and the need for safeguards in [S26] and discussions of digital exclusion [S17] [S2].
MAJOR DISCUSSION POINT
Unchecked commodification of freely shared cultural content risks shrinking the commons; societal safeguards are needed
AGREED WITH
Anna Tumadote
DISAGREED WITH
Anna Tumadote, Kenichiro Natsume
Argument 5
Emphasise storytelling and human creativity as the enduring skill set; AI should serve as a collaborative tool
EXPLANATION
Nicholas highlights storytelling as a core human skill that will remain valuable, suggesting AI should complement rather than replace human creativity. He likens the current AI boom to the early internet era, emphasizing collaboration between tools and creators.
EVIDENCE
He states that storytelling will be the main skill in business, calls it a “human inequality,” and asserts that the future is bright as AI tools collaborate with human creativity [194-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
His emphasis on storytelling as a core skill and AI as a collaborative tool matches the panel’s remarks and the human-centred AI narrative in [S17] and the digital humanism perspective [S26] [S2].
MAJOR DISCUSSION POINT
Emphasise storytelling and human creativity as the enduring skill set; AI should serve as a collaborative tool
AGREED WITH
Anna Tumadote, Kenichiro Natsume
Agreements
Agreement Points
AI governance should be human‑centered, preserving human creativity and ensuring humans remain at the core of AI‑driven creative processes
Speakers: Anna Tumadote, Kenichiro Natsume, Nicholas Granatino
Keep humans at the centre of AI systems, ensuring credit, control, and benefit flow back to creators
Adopt a human‑centered approach because creativity still comes from human beings’ activity, not from artificial intelligence
Emphasise storytelling and human creativity as the enduring skill set; AI should serve as a collaborative tool
All three panelists stress that creativity originates from people, not machines, and that AI should augment rather than replace human creators; they call for keeping humans central in any AI governance framework [207-209][204-205][194-196].
Creators need clear attribution, credit and incentive mechanisms to keep the public domain and commons vibrant
Speakers: Anna Tumadote, Nicholas Granatino
Open‑movement perspective: the public domain is essential for AI development, but creators need clear incentives to share
There is a clear ethical inconsistency: creators lack consent and attribution when their works are scraped for AI training
Unchecked commodification of freely shared cultural content risks shrinking the commons; societal safeguards are needed
Data is often treated as free while AI services monetize access; society must decide how to balance open data with fair compensation
Both speakers highlight that without proper credit and remuneration creators will withdraw works, shrinking the commons; they argue for mechanisms that reward creators while preserving open access [99-104][80-86][144-150][110-115].
POLICY CONTEXT (KNOWLEDGE BASE)
Anna’s remarks on the ethical gap for creators and WIPO’s discussion of consent and fair compensation for training data underscore the need for attribution and incentive mechanisms [S41][S49].
Openness, transparency and good governance of AI models determine whether AI will strengthen or weaken cultural diversity
Speakers: Anna Tumadote, Nicholas Granatino
Openness and governance determine whether AI weakens or strengthens cultural diversity
Data is often treated as free while AI services monetize access; society must decide how to balance open data with fair compensation
Anna stresses that open-source models with clear governance can support cultural diversity, while Nicholas warns that treating data as free while profiting from AI risks weakening diversity; both link openness and governance to outcomes for culture [12-17][110-115][155-158].
POLICY CONTEXT (KNOWLEDGE BASE)
Transparency concerns about LLMs and the role of Creative Commons control mechanisms illustrate how openness influences cultural diversity outcomes [S43][S45].
Pragmatic, technology‑based solutions (e.g., opt‑in/opt‑out, attribution tools) are more realistic in the short term than waiting for a new international treaty
Speakers: Kenichiro Natsume, Anna Tumadote
Practical technological solutions (e.g., opt‑in/opt‑out mechanisms) can help Indian creators be remunerated while enabling AI use
Achieving consensus among 194 WIPO members is slow; a pragmatic, technology‑focused approach is preferred over a new treaty
There is a clear ethical inconsistency: creators lack consent and attribution when their works are scraped for AI training
Kenichiro stresses that consensus-driven treaties will take too long and proposes opt-in/opt-out tools; Anna echoes the need for technical or normative frameworks to address consent and attribution, indicating shared belief in near-term tech solutions [68-76][168-171][138-139][80-86].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders argue that targeted, technology-based tools are more feasible than awaiting a new treaty, reflecting the pragmatic stance expressed at IGF and WIPO panels [S52][S53][S54][S55][S57].
Similar Viewpoints
Both emphasize that the public domain fuels AI but that creators must receive credit and fair reward; otherwise the commons will contract and cultural diversity will suffer [99-104][80-86][144-150][110-115].
Speakers: Anna Tumadote, Nicholas Granatino
Open‑movement perspective: the public domain is essential for AI development, but creators need clear incentives to share
There is a clear ethical inconsistency: creators lack consent and attribution when their works are scraped for AI training
Unchecked commodification of freely shared cultural content risks shrinking the commons; societal safeguards are needed
Data is often treated as free while AI services monetize access; society must decide how to balance open data with fair compensation
Both call for a human‑centric AI governance model and favour practical, technology‑driven mechanisms (such as attribution or opt‑in/opt‑out) over lengthy treaty negotiations [207-209][204-205][68-76][168-171].
Speakers: Anna Tumadote, Kenichiro Natsume
Keep humans at the centre of AI systems, ensuring credit, control, and benefit flow back to creators
Adopt a human‑centered approach because creativity still comes from human beings’ activity, not from artificial intelligence
Practical technological solutions (e.g., opt‑in/opt‑out mechanisms) can help Indian creators be remunerated while enabling AI use
Achieving consensus among 194 WIPO members is slow; a pragmatic, technology‑focused approach is preferred over a new treaty
Unexpected Consensus
Agreement between a Creative‑Commons representative (Anna) and a WIPO official (Kenichiro) on the need for immediate, technology‑based, opt‑in/opt‑out mechanisms rather than waiting for a new international treaty
Speakers: Anna Tumadote, Kenichiro Natsume
Practical technological solutions (e.g., opt‑in/opt‑out mechanisms) can help Indian creators be remunerated while enabling AI use
Achieving consensus among 194 WIPO members is slow; a pragmatic, technology‑focused approach is preferred over a new treaty
There is a clear ethical inconsistency: creators lack consent and attribution when their works are scraped for AI training
Despite coming from different institutional backgrounds (open-movement vs intergovernmental), both converge on the view that short-term technical tools are the realistic path forward, which is not an obvious alignment given their usual policy stances [68-76][168-171][138-139][80-86].
POLICY CONTEXT (KNOWLEDGE BASE)
The record of the panel in which Anna and Ken advocated immediate opt-in/opt-out mechanisms documents this cross-organizational agreement [S41][S48].
Overall Assessment

The panel shows strong convergence on three pillars: (1) AI systems must remain human‑centered; (2) creators need attribution, consent and fair remuneration to keep the commons alive; (3) openness and transparent governance are decisive for cultural diversity outcomes. Participants also agree that pragmatic, technology‑driven solutions are preferable to protracted treaty negotiations.

High consensus on the human‑centric, rights‑based approach and on the need for practical technical mechanisms; this suggests that future policy work can build on these shared foundations to shape AI governance frameworks that protect cultural diversity while enabling innovation.

Differences
Different Viewpoints
Impact of AI on cultural diversity – whether it weakens or can be leveraged to strengthen diversity
Speakers: Anna Tumadote, Nicholas Granatino, Kenichiro Natsume
Openness and governance determine whether AI weakens or strengthens cultural diversity. Training data lack representation of Indian epics; inclusion is needed to preserve and boost cultural creativity. AI can both enhance copyright‑protected works and pose a threat; existing IP systems can still cope.
Anna says the effect of AI on cultural diversity depends on openness and governance and currently leans toward weakening [12-17]. Nicholas argues that the lack of Indian epics in training data harms diversity and that their inclusion would boost cultural creativity [26-31]. Ken counters that AI can enrich copyrighted works while also posing threats, but believes the current international copyright system can still handle these challenges [20-23].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs and forum discussions have examined AI’s potential to both erode and promote cultural and linguistic diversity, highlighting the contested impact [S45][S47][S61][S63].
Preferred governance route – new international treaty vs pragmatic technological solutions and incentive mechanisms
Speakers: Kenichiro Natsume, Anna Tumadote, Nicholas Granatino
Achieving consensus among 194 WIPO members is slow; a pragmatic, technology‑focused approach is preferred over a new treaty. Open‑movement perspective: the public domain is essential for AI development, but creators need clear incentives to share. Data is often treated as free while AI services monetize access; society must decide how to balance open data with fair compensation.
Ken stresses that reaching consensus among 194 WIPO members would take too long, so WIPO is focusing on pragmatic, technology-based tools such as opt-in/opt-out mechanisms rather than drafting a new treaty [60-68][168-171]. Anna highlights the need for incentive structures and normative frameworks to keep creators sharing while protecting their rights [80-86][136-139]. Nicholas points out the tension between the notion of “free” data and the monetisation of AI services, calling for societal decisions on fair compensation [110-115].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between pursuing a new treaty versus deploying pragmatic technical solutions is a recurring theme in AI governance literature, noted in WIPO and IGF deliberations on realistic pathways [S52][S53][S54][S55][S57].
Ethical consistency of using copyrighted works for AI training without consent or attribution
Speakers: Anna Tumadote, Kenichiro Natsume, Nicholas Granatino
There is a clear ethical inconsistency: creators lack consent and attribution when their works are scraped for AI training. Human‑learning by imitation is traditional, but AI’s speed creates a novel dilemma that demands a boundary line. Unchecked commodification of freely shared cultural content risks shrinking the commons; societal safeguards are needed.
Anna declares a clear ethical inconsistency because creators’ works are scraped for AI training without consent or attribution [80-86]. Ken focuses on the speed difference between human learning and AI replication, arguing that the issue is to draw a line rather than framing it as an ethical breach [119-127]. Nicholas warns that the lack of consent mechanisms is causing creators to withdraw works, leading to a shrinking commons [144-150].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple cases and analyses-including the Meta lawsuit, the Anthropic fair-use ruling, and WIPO’s examination of consent for training data-highlight the ethical concerns of non-consensual use [S48][S49][S50][S51][S58].
Realism of achieving global IP harmonisation for AI‑generated content
Speakers: Kenichiro Natsume, Speaker 1
Achieving consensus among 194 WIPO members is slow; a pragmatic, technology‑focused approach is preferred over a new treaty. Is the global IP framework prepared for large‑scale AI‑generated content today? (implied expectation of harmonisation)
Ken argues that a global treaty is unrealistic due to the need for consensus among 194 states, so a pragmatic, technology-driven approach is favoured [60-68][168-171]. Speaker 1, however, raises the question of whether the global IP framework can handle AI-generated content, implying that harmonisation is desirable and expected [184-185].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on digital measures, discussions on harmonising regulations, and WIPO’s assessment of capacity gaps illustrate the practical limits of global IP harmonisation for AI-generated works [S44][S58][S59][S60][S65].
Unexpected Differences
Open movement’s resilience vs shrinking commons
Speakers: Anna Tumadote, Nicholas Granatino
Open‑movement perspective: the public domain is essential for AI development, but creators need clear incentives to share. Unchecked commodification of freely shared cultural content risks shrinking the commons; societal safeguards are needed.
Anna expresses optimism that the open movement can withstand AI’s scale, suggesting the problem is small (around 5 %) and that the commons can adapt [131-133]. Nicholas, however, warns that without proper safeguards creators are withdrawing works, leading to a shrinking commons – a much more severe outcome [144-150]. This contrast between optimism and alarm was not anticipated given their shared commitment to openness.
AI’s ability to enhance copyright vs ethical inconsistency of training on copyrighted works
Speakers: Kenichiro Natsume, Anna Tumadote
AI can both enhance copyright‑protected works and pose a threat; existing IP systems can still cope. There is a clear ethical inconsistency: creators lack consent and attribution when their works are scraped for AI training.
Ken maintains that the current IP framework can accommodate AI-enhanced works and that the system is capable of coping [20-23]. Anna counters that using scraped copyrighted material without consent is ethically inconsistent, undermining trust in AI [80-86]. The clash between a technical/legal confidence in existing IP and a moral critique of consent is unexpected.
POLICY CONTEXT (KNOWLEDGE BASE)
Scholars note AI can augment copyright protection but stress the inconsistency of training on copyrighted works without permission, calling for fair-compensation frameworks [S49][S58].
Overall Assessment

The panel shows substantial disagreement on three core fronts: (1) the net impact of AI on cultural diversity and the role of data representation; (2) the appropriate governance pathway – treaty‑based legal harmonisation versus pragmatic technological tools and incentive mechanisms; (3) the ethical framing of consent and attribution in AI training. While all participants converge on keeping humans central and valuing the public domain, they diverge sharply on how to achieve these goals.

The disagreement is moderate to high, reflecting fundamentally different assumptions about the adequacy of existing IP law, the speed of international consensus, and the moral weight of consent. These divergences suggest that any policy response will need to balance legal pragmatism with ethical safeguards, and that achieving global consensus may be prolonged, requiring parallel technological and normative initiatives.

Partial Agreements
All three agree that humans must remain central in AI‑driven creative processes and that AI should augment rather than replace human creativity. Anna and Ken explicitly call for a human‑centered approach [207-209][204-205], while Nicholas stresses storytelling as a uniquely human skill that will stay valuable [194-196]. The divergence lies in how to operationalise this principle – through credit mechanisms, technological safeguards, or broader societal choices.
Speakers: Anna Tumadote, Kenichiro Natsume, Nicholas Granatino
Keep humans at the centre of AI systems, ensuring credit, control, and benefit flow back to creators. Adopt a human‑centered approach because creativity still comes from human beings’ activity, not from artificial intelligence. Emphasise storytelling and human creativity as the enduring skill set; AI should serve as a collaborative tool.
Both agree that the public domain is a crucial resource for AI training and that creators need incentives or safeguards to keep sharing. Anna stresses the need for credit and remuneration [80-86][102-104], while Nicholas points out the tension between free data and monetisation [110-115]. They differ on the primary mechanism – Anna leans toward licensing/credit frameworks, Nicholas emphasizes broader societal decisions on compensation.
Speakers: Anna Tumadote, Nicholas Granatino
Open‑movement perspective: the public domain is essential for AI development, but creators need clear incentives to share. Data is often treated as free while AI services monetize access; society must decide how to balance open data with fair compensation.
Both recognise the public domain as a valuable, living dataset for AI. Anna highlights its importance for the open movement and the need for incentives [80-86], while Nicholas details specific Indian epics that are in the public domain and can be digitised for AI training [38-48]. The disagreement is on the focus: Anna speaks generally about the public domain, Nicholas concentrates on a concrete Indian cultural corpus.
Speakers: Anna Tumadote, Nicholas Granatino
Open‑movement perspective: the public domain is essential for AI development, but creators need clear incentives to share. Indian public‑domain epics (Mahabharata, Ramayana, Gita) offer a rich, living dataset for AI training and novel IP creation.
Takeaways
Key takeaways
The impact of AI on cultural diversity depends on openness, governance, and design principles; without these, AI risks weakening diversity. AI can both enhance and threaten copyright‑protected works, but existing IP systems are viewed as capable of adapting, though they may need updates for scale. Indian public‑domain epics (Mahabharata, Ramayana, Gita) are under‑represented in AI training data; leveraging them offers a strategic advantage for creating new IP and preserving cultural heritage. The current global IP framework is not fully prepared for large‑scale AI‑generated content; consensus‑based treaty making is slow, prompting a shift toward pragmatic, technology‑focused solutions. There is a clear ethical inconsistency in using creators’ works for AI training without consent or attribution, which threatens the commons and calls for new normative or technical safeguards. All panelists emphasized a human‑centred approach: creativity originates from people, and AI should serve as a collaborative tool that respects credit, control, and benefit for creators.
Resolutions and action items
WIPO will convene a meeting on March 17 to discuss technological infrastructures (opt‑in/opt‑out, attribution mechanisms) that can balance creator remuneration with AI development. Tara Gaming (via Sarvam AI) plans to digitize Indian epics using OCR and voice models to create a public‑domain dataset for AI training. Creative Commons expressed intent to attend the March 17 WIPO meeting and to continue developing nuanced licensing options that incorporate consent and attribution for AI use. Panelists suggested investors view AI as a new frontier comparable to the early internet, encouraging investment in tools that augment human storytelling.
Unresolved issues
How to establish a globally accepted, enforceable consent and attribution framework for AI training on existing works. Whether a harmonised international treaty on AI and IP is feasible or if fragmented national approaches will dominate. What concrete legal or technical standards will define the line between acceptable AI‑assisted creation and infringement. How to prevent cultural gatekeeping by a few dominant AI platforms while ensuring open access to diverse cultural data. The long‑term impact of AI on the size and health of the public domain and commons if creators retreat behind paywalls.
Suggested compromises
Adopt middle‑ground licensing models that allow opt‑in or opt‑out choices, with conditions such as attribution, remuneration, or contribution to open infrastructure. Focus on pragmatic, technology‑driven solutions (e.g., metadata tags, provenance tracking) rather than waiting for a full international treaty. Encourage creators to share works under Creative Commons‑style licenses while providing mechanisms that reward them when their data is used by AI systems. Maintain a human‑centred governance principle that keeps creators at the core of AI development and ensures they receive credit and benefits.
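One suggested compromise, machine-readable metadata tags that record a creator’s opt‑in/opt‑out choice, can be illustrated with a minimal sketch. The `ai-training` field name and its values below are hypothetical illustrations, not a standard discussed in the session:

```python
# Hypothetical sketch: filtering a corpus by a per-work AI-training preference tag.
# The "ai-training" metadata field and its values are illustrative assumptions.

def may_train_on(work: dict, default: str = "opt-in-required") -> bool:
    """Return True only if the work's metadata explicitly grants AI-training use."""
    pref = work.get("metadata", {}).get("ai-training", default)
    return pref in {"allowed", "allowed-with-attribution"}

def build_corpus(works: list[dict]) -> list[dict]:
    """Keep only works whose (hypothetical) tag permits training."""
    return [w for w in works if may_train_on(w)]

works = [
    {"id": "epic-1", "metadata": {"ai-training": "allowed-with-attribution"}},
    {"id": "photo-9", "metadata": {"ai-training": "denied"}},
    {"id": "essay-3", "metadata": {}},  # untagged works fall back to the opt-in default
]
corpus = build_corpus(works)
print([w["id"] for w in corpus])  # ['epic-1']
```

Defaulting untagged works to “opt-in required” mirrors the consent-first position voiced on the panel; a real mechanism would also need provenance tracking and a shared vocabulary that platforms agree to honour.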
Thought Provoking Comments
It depends. Is the model open source and interrogable, or closed source and opaque? The outcome will hinge on the values and design principles we embed, and currently we are at risk of weakening cultural diversity.
She frames the AI‑cultural diversity debate not as a binary but as contingent on openness, governance, and design choices, foregrounding the importance of transparency and community control.
Set the analytical lens for the whole panel, prompting later speakers to discuss open‑source data, public‑domain resources, and the ethical implications of closed models. It shifted the conversation from abstract benefits to concrete governance questions.
Speaker: Anna Tumadote
India’s oral traditions like the Mahabharata and Ramayana are in the public domain but under‑represented in training data. We need to digitize these epics so AI can learn from them, giving India a strategic edge.
He highlights a concrete cultural and data gap, linking it to both representation and economic opportunity, and introduces the concept of leveraging public‑domain heritage as AI training material.
Redirected the discussion toward data equity and the practical steps (e.g., OCR, voice models) needed to include non‑Western content. It spurred follow‑up questions about IP creation from public‑domain works and the risk of cultural gatekeeping.
Speaker: Nicholas Granatino
WIPO’s consensus‑based treaty process is too slow for AI; we should pursue pragmatic technological solutions that let creators be remunerated while allowing tech firms to use the data.
He challenges the assumption that international law can keep pace, proposing a shift from legal treaties to technical infrastructure as a more realistic short‑term remedy.
Introduced a new policy direction, prompting Anna to discuss the limits of current copyright frameworks and leading the panel to consider technical standards (opt‑in/opt‑out mechanisms) rather than waiting for global legal consensus.
Speaker: Kenichiro Natsume
There is an ethical inconsistency: creators demand consent while AI models are trained on a massive, scraped corpus. The real issue is the fear of replacement and the need for attribution when AI fuses styles.
She pinpoints the paradox between open‑source expectations and the reality of massive data scraping, and moves the debate from legal copyright to broader ethical and labor concerns.
Deepened the conversation about creator rights, leading to further remarks on the shrinking commons and the necessity of credit mechanisms. It also set up the later discussion on how the open movement is being pressured.
Speaker: Anna Tumadote
The Protein Data Bank has been providing data for decades; yet Nobel recognition missed it. Society must decide if we want AI to have the best data, and on what terms we let it profit from that data.
He uses a vivid analogy to illustrate how foundational data infrastructures are undervalued, raising the question of who benefits from AI’s data pipelines.
Prompted a broader reflection on data as a public good versus a commercial gateway, influencing the later “Captain America hegemony” comment and reinforcing the call for equitable data governance.
Speaker: Nicholas Granatino
We are already seeing the commons shrink because creators, lacking consent mechanisms, are pulling back or putting their work behind paywalls. This threatens the collaborative spirit of the open movement.
She identifies a tangible negative feedback loop where lack of control leads to reduced sharing, threatening the very foundation of open culture.
Shifted the tone from hopeful to cautionary, prompting the panel to discuss protective measures, licensing nuances, and the urgency of establishing new normative frameworks.
Speaker: Anna Tumadote
I call it the ‘Captain America hegemony’: open‑source gives a free gateway to a corpus of creativity that powerful platforms can weaponize, so we must remember AI sits on top of human‑made art and not replace the creative process.
He coins a memorable metaphor for cultural gatekeeping by dominant AI platforms, emphasizing the power imbalance inherent in open‑source models.
Served as a turning point, moving the discussion from data inclusion to power dynamics and prompting participants to consider safeguards against monopolistic control of cultural assets.
Speaker: Nicholas Granatino
A single guiding principle for AI governance in creative industries should be a human‑centered approach – keep humans at the core of creativity.
It distills the complex debate into a clear, actionable ethic, reinforcing the centrality of human agency amidst rapid AI development.
Provided a concise closing framework that unified earlier points about consent, attribution, and equitable data use, leaving the audience with a memorable takeaway.
Speaker: Kenichiro Natsume (and echoed by Anna Tumadote)
Overall Assessment

The discussion was shaped by a series of pivotal interventions that moved it from a generic overview of AI’s impact on creativity to a nuanced examination of data equity, governance, and ethical responsibility. Anna’s opening framing of openness versus opacity set the agenda, while Nicholas’s focus on India’s under‑represented public‑domain epics introduced the concrete problem of cultural bias in training data. Ken’s pragmatic call for technological solutions over slow legal treaties reframed the policy debate, and Anna’s ethical inconsistency point deepened the conversation around creator consent and attribution. Subsequent analogies (Protein Data Bank, ‘Captain America hegemony’) highlighted the systemic undervaluation of data and the risk of platform dominance. Together, these comments redirected the panel toward actionable ideas—digitizing heritage, building opt‑in infrastructures, and maintaining a human‑centered principle—thereby enriching the dialogue and providing a clear roadmap for future AI governance in the creative sector.

Follow-up Questions
How can consent and attribution be built into AI training datasets to address ethical inconsistencies?
Anna highlighted an ethical inconsistency in AI training on copyrighted works and called for normative/legal/technical frameworks to ensure creators’ consent and proper credit.
Speaker: Anna Tumadote
What technological solutions can enable creators to be remunerated and allow tech platforms to recognize opt‑in/opt‑out status of works?
Ken emphasized the need for a practical technological infrastructure that can track and enforce creators’ rights while supporting AI development.
Speaker: Kenichiro Natsume
How can India ensure its rich public‑domain cultural heritage is represented in AI training data to gain a strategic edge?
Nicholas pointed out that Indian epics are under‑represented in current datasets and asked how to make India’s cultural assets a significant part of AI training.
Speaker: Nicholas Granatino
What is the impact of AI on the commons and how can the shrinking of open cultural resources be prevented?
Anna observed creators pulling back from sharing due to AI misuse, warning of a shrinking commons and calling for research on protective mechanisms.
Speaker: Anna Tumadote
Is global harmonization of AI‑related IP law realistic, or will fragmentation dominate?
Ken discussed the difficulty of achieving consensus among 194 WIPO members and suggested studying the feasibility of a unified versus fragmented regulatory approach.
Speaker: Kenichiro Natsume
What standards or mechanisms can track provenance and provide credit for AI‑generated outputs?
Anna noted the need for systems that reveal the origin of AI‑generated content and ensure appropriate attribution to original creators.
Speaker: Anna Tumadote
How can open‑source AI models be leveraged to avoid cultural gatekeeping by a few dominant platforms?
Nicholas warned about a ‘Captain America’ hegemony and suggested investigating open‑source pathways that democratize access to cultural data.
Speaker: Nicholas Granatino
How can a human‑centered governance principle be operationalized in international AI policy for the creative industries?
Both emphasized keeping humans at the core of AI governance but left open the concrete policy tools needed to implement this principle.
Speaker: Kenichiro Natsume, Anna Tumadote
What is the legal status of AI‑generated works under existing copyright treaties such as the Berne Convention, Rome Convention, and Broadcast Treaty?
Ken referenced these treaties and implied the need for research on how they apply to AI‑generated content.
Speaker: Kenichiro Natsume
What normative frameworks or new legal/technical solutions are needed to address large‑scale misuse of freely shared works by AI?
Anna called for new frameworks because current copyright law cannot handle the scale of AI training on openly shared works.
Speaker: Anna Tumadote

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

India’s AI Future Sovereign Infrastructure and Innovation at Scale


Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel opened with the launch of the “Sovereign AI” research report by Amrita Vishwa Vidyapeetham and introduced a diverse group of industry and academic leaders to discuss how India can build sovereign AI capabilities [1-4]. Moderator Ankit Bose then asked each panelist to name the single most important factor for India to achieve AI leadership, both for the country and for the Global South [44-45][92-95].


Sunil Gupta argued that India’s principal bottleneck is the lack of abundant GPU compute, noting that only a few thousand GPUs are currently available while millions will be needed for large-scale inference and training [54-58][70-78]. He described how the government’s “shared compute facility” has pooled roughly 38 000 GPUs from providers such as Yotta and is adding another 20 000, creating a low-cost resource for startups and research [224-236][237]. Gupta urged that this shared infrastructure be extended beyond model training to support the first wave of inference for sectoral use cases, with government subsidies for the initial cycle [240-247][250-254].


Kalyan Kumar highlighted that sovereign AI also requires a robust data layer, including localized vector databases, data catalogs and contracts, to enable distributed edge inference and high-quality data products [96-108]. He explained that HCL’s recent acquisition of Actian and CWI assets gives them control over core database patents and a vector AI engine slated for release, which will underpin the data-centric approach [98-103]. Kumar stressed that without such data infrastructure, even abundant compute cannot deliver scalable AI solutions [105-108].


Brandon Mello identified three systemic barriers to AI adoption in Indian enterprises: difficulty quantifying ROI, fragmented departmental ownership of AI projects, and the lack of executive sponsorship [119-124][129-143]. Ganesh Ramakrishnan added that ensuring interoperability across the AI stack, from models to data contracts, will foster participation, enable alternative solutions, and support a collaborative ecosystem of academia and industry [151-162]. He also emphasized co-design and a nine-institution academic consortium that is building multilingual foundation models tailored to Indian contexts [163-170][188-194].


The panelists converged on the view that building sovereign AI requires coordinated investment in compute, data infrastructure, skilled talent, and open collaboration between government, startups and research institutions [215-223][454-455]. They announced ongoing actions such as NASSCOM’s policy draft, a new MOU with Amrita, and a QR-code-driven feedback mechanism to shape India’s AI roadmap [414-420][435-438].


Keypoints

Major discussion points


Compute infrastructure is the bottleneck for sovereign AI.


Sunil Gupta emphasized that the lack of abundant GPU compute has been the core obstacle and described how Yotta’s “Sovereign Cloud” has built a shared pool of ≈ 10 000 GPUs, with the government now aggregating ≈ 38 000 GPUs and planning to add 20 000 more [46-78][224-236].


A robust data stack and interoperability are essential layers.


Kalyan Kumar highlighted the need for centralized data platforms, vector-DBs, edge inference and data contracts to ensure high-quality, shareable data [96-108]. Ganesh Ramakrishnan added that interoperability across models, datasets and institutions enables participation, scaling and the creation of data products [151-168].


Adoption hurdles stem from ROI uncertainty, organisational friction and lack of executive sponsorship.


Brandon Mello identified “ROI invisibility,” siloed departmental processes and the “champion problem” as reasons why 95 % of AI pilots never reach production [115-142]. He later stressed the importance of solving real-world use cases, consolidating tools and handling India’s multilingual data to drive adoption [335-351].


Skill development and a shift from services to product/IP creation are required.


Kalyan Kumar argued that India must pivot from a service-only model to building its own IP, investing in smarter engineers, research talent and new semiconductor capabilities [266-310]. Ankit Bose noted NASSCOM’s initiative to up-skill 150 k developers and revamp curricula to produce specialised AI talent [312-319].


Collaboration between government, academia and industry is the backbone of the sovereign AI ecosystem.


Ganesh stressed the need for interoperable standards, consortium-based research (nine academic institutions) and co-design of models and data contracts [151-170][193-205]. Sunil described the government’s “shared compute facility” that empanels multiple providers, creating a public-private partnership for scaling AI resources [224-236].


Overall purpose / goal


The session was convened to launch the Sovereign AI research report and to surface concrete actions that India, and the broader Global South, must take to build a self-reliant AI ecosystem. Panelists were asked to pinpoint the single most critical step for achieving sovereign capability, covering infrastructure, data, talent, adoption and collaborative governance.


Overall tone


The discussion began with a formal, celebratory tone (report launch, introductions) and quickly shifted to a technical, problem-focused dialogue about compute shortages and data challenges. Mid-session the tone became solution-oriented and collaborative, with panelists proposing concrete initiatives, partnerships and skill-building programs. It concluded on an optimistic, call-to-action note, urging participants to join the consortium, contribute to the QR-coded roadmap and continue the partnership.


Speakers

Speakers (from the provided list)


Ankit Bose – Head of AI, NASSCOM (National Association of Software and Service Companies) – Moderator of the panel and expert on AI ecosystem development and developer enablement. [S4]


Sunil Gupta – Co-founder, Managing Director & CEO, Yotta (Yotta Data Services) – Builder of Sovereign Cloud infrastructure and large-scale GPU compute facilities in India. [S4]


Kalyan Kumar – Executive Vice President, Head of Software Product Business, HCL Software – Leader in enterprise software products, data platforms, and sovereign-by-design solutions. [S6]


Ganesh Ramakrishnan – Professor, Indian Institute of Technology Bombay – Researcher in AI foundations, interoperability, and large-scale language models. [S9]


Professor Ganesh Ramakrishnan – (same individual as Ganesh Ramakrishnan; listed separately in the names list) – Professor, IIT Bombay – AI research and model development. [S9]


Brandon Mello – Founding GTM Executive, GenSpark.ai – Entrepreneur driving agentic AI solutions for knowledge-workers and enterprise adoption. [S12]


Speaker 1 – Event moderator/host – Introduced the session, announced report launch and MOU, and facilitated the panel discussion.


Additional speakers (not in the provided names list)


Dr. Manisha V. Ramesh – Pro Vice-Chancellor, Amrita Vishwa Vidyapeetham – Representative for the launch of the Sovereign AI research report.


Dr. Shiva Ramakrishnan – Head, AI Safety Research Lab, Amrita Vishwa Vidyapeetham – Co-speaker for the report launch.


Professor Suresh – Academic representative (specific affiliation not stated) – Invited to the stage for the report launch.


Bharat Jain – Panelist (affiliation not specified in transcript) – Contributed to the discussion on AI sovereignty.


Bhaskar Gorti – Executive Vice President, Tata Communications – Panelist discussing telecom and communications aspects of sovereign AI.


Brenno – (likely a mis-pronunciation of Brandon Mello) – Referenced in the transcript but covered under Brandon Mello above.


Other unnamed panelists – The transcript mentions “Mr. …” and “Ms. …” without full names; these are not listed due to insufficient information.


Full session reportComprehensive analysis and detailed insights

Opening & report launch – The session began with the formal launch of the Sovereign AI research report produced by Amrita Vishwa Vidyapeetham. The moderator thanked the audience, invited Pro-Vice-Chancellor Dr Manisha V. Ramesh and AI-Safety Lab head Dr Shiva Ramakrishnan to the stage, and then introduced the panel (Prof Ganesh Ramakrishnan, IIT Bombay; Bharat Jain, IIM Indore consortium; Sunil Gupta, co-founder, MD & CEO of Yotta; Bhaskar Gorti, Tata Communications; Kalyan Kumar, CPO, HCL Software; Brandon Mello, GenSpark) [1-5].


Key “single-most-critical-factor” answers


* Sunil Gupta – Compute scarcity – Gupta identified the shortage of specialised GPU compute as the decisive bottleneck. He noted that when large-scale generative models emerged, India had strong software, services and talent, but “what India was not having at that time was compute” [54-58]. He added that the Indian language model Bhashini was recently migrated from a hyperscale cloud to Yotta’s Sovereign Cloud [54-58]. Gupta quantified the gap: the shared compute pool currently holds ~38 000 GPUs, with an additional 20 000 announced, yet “millions of GPUs” will be required for nationwide inference across sectors [70-78][224-236][237]. He argued that 95 % of the country’s use-cases can be served by a 20-100 billion-parameter model, underscoring the urgency of scaling [70-78]. Only 3 % of India-generated data is hosted in-country while India creates/consumes 20 % of global data [54-58]; therefore he called for government-funded subsidies for the first inference cycle to jump-start sectoral adoption [240-247][250-254][260-267].


* Kalyan Kumar – Interoperable data stack – Kumar stressed that compute alone is insufficient without a robust, interoperable data layer. HCL’s portfolio combines vector databases, edge-ready AI engines, and database patents acquired through Actian and from CWI in the Netherlands [96-108][98-103]. The platform emphasizes “data products, data contracts and data catalogs” to ensure quality, accessibility and provenance as inference moves to the edge [105-108][171-176].


* Ganesh Ramakrishnan – Interoperability & data ownership – Ganesh highlighted the need for layer-wise interoperability to encourage participation, offer alternatives and balance fidelity-latency trade-offs [151-156]. He cited the nine-institution academic consortium (whose members include IIM Indore) that co-designs multilingual foundation models for 22 Indian languages using mixture-of-experts architectures [163-170][188-212]. To protect creators, he invoked the principle “jiska data uska adhikar” (whose data, their rights) and referenced the consortium’s book Samanway (meaning “bringing all languages together”) [166-170]. He also mentioned his recent book Informatics and AI for Healthcare [112-115] and advocated “glass-box” models that expose provenance and enable trustworthy AI [151-156].


* Brandon Mello – Adoption barriers – Mello shifted the focus to organisational frictions that keep AI pilots in sandbox mode. He identified “ROI invisibility” – the inability of CFOs to quantify returns – as a key blocker, noting that only one in ten executives has tools to measure AI ROI [119-124]. He added “data-trust and compliance friction” from siloed departmental ownership and the “champion problem” where lack of executive sponsorship stalls projects [129-143]. Successful adoption, he argued, requires solving real-world use cases, consolidating fragmented tooling, and supporting India’s multilingual landscape [335-351].
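The mixture-of-experts design Ganesh describes (shared experts specialising in languages versus domains, with routing that lets related languages share an expert) can be sketched in a minimal, purely illustrative form. The dimensions, expert count and top-k routing below are inventions for the example, not the consortium's actual architecture:

```python
import math
import random

random.seed(0)
D, N_EXPERTS, TOP_K = 8, 4, 2   # embedding size, expert count, experts used per token

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    """Multiply a rows x len(v) matrix by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

gate = rand_matrix(N_EXPERTS, D)                         # router: one score per expert
experts = [rand_matrix(D, D) for _ in range(N_EXPERTS)]  # tiny linear "experts"

def moe_forward(x):
    scores = matvec(gate, x)                              # router logits for this token
    top = sorted(range(N_EXPERTS), key=scores.__getitem__)[-TOP_K:]
    zmax = max(scores[i] for i in top)
    w = {i: math.exp(scores[i] - zmax) for i in top}      # softmax over the chosen experts
    total = sum(w.values())
    out = [0.0] * D
    for i in top:                                         # only the chosen experts run
        for j, yj in enumerate(matvec(experts[i], x)):
            out[j] += (w[i] / total) * yj
    return out

token = [random.gauss(0, 1) for _ in range(D)]
print(len(moe_forward(token)))                            # -> 8
```

The point of the design is in the routing: only TOP_K of the N_EXPERTS matrices are evaluated per token, so capacity grows with the number of experts while per-token compute stays roughly constant, and the router is free to send, say, Hindi and Marathi tokens to the same expert.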


Deep dive on compute infrastructure – Building on Gupta’s points, the panel described the government empanelment process that lets multiple providers contribute GPUs at market-determined price points, creating a low-cost commodity for startups and research [224-236]. The current pool of ~38 000 GPUs (plus the announced 20 000) is a first step; the panel urged public funding not only for model training but also for the inference phase, arguing that subsidised early usage will generate revenue-producing use cases and later attract private investment [260-267][239-254].
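Gupta's "millions of GPUs" claim for nationwide inference is easy to sanity-check with a back-of-envelope calculation. Every figure below is an assumption chosen for illustration (only the ~1 billion smartphone users echoes a number cited on the panel); changing any of them shifts the answer, but the order of magnitude is hard to escape:

```python
# Back-of-envelope estimate of nationwide AI inference capacity.
# All figures are illustrative assumptions, not data from the session.
users            = 1_000_000_000   # smartphone users (order of magnitude cited on the panel)
queries_per_day  = 10              # assumed AI interactions per user per day
tokens_per_query = 2_000           # assumed tokens generated per interaction
gpu_tokens_per_s = 200             # assumed sustained tokens/s for one GPU on a mid-size model
peak_factor      = 3               # assumed peak-to-average load ratio

avg_tokens_per_s = users * queries_per_day * tokens_per_query / 86_400
gpus_needed      = avg_tokens_per_s * peak_factor / gpu_tokens_per_s
print(f"{gpus_needed / 1e6:.1f} million GPUs")   # -> 3.5 million GPUs under these assumptions
```

Even with generous per-GPU throughput, population-scale inference lands in the millions of GPUs, which is the gap between the ~58 000 GPUs discussed above and the demand the panel anticipates.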


Talent & IP strategy (Kalyan Kumar) – Kumar argued that India must pivot from a service-oriented model to building proprietary IP. He recalled HCL’s 2015-16 decision to “build products for ourselves” and the subsequent acquisition of talent and assets [266-283]. In his words, “you need fewer people, smarter people” [286-290]. He called for investment in fundamental physics and quantum research to reshape future compute paradigms [286-304][298-304], and highlighted the HCL-Foxconn joint venture as a path to domestic semiconductor fab capacity [441-452].


NASSCOM upskilling & curriculum reform (Ankit Bose) – Bose outlined a complementary programme targeting 150 000 developers over the next six months, together with a curriculum overhaul (B.Tech, M.Tech, MCA, BCA) in partnership with MIT and industry bodies to create specialised AI tracks [312-319][326-329].


Sector-specific perspectives (Kalyan Kumar) – Kumar outlined four stakeholder lenses:


1. Consumer AI – data-control mechanisms, regulator-led data-rights frameworks.


2. Enterprise AI – metadata-first approaches, data-product marketplaces.


3. Government services – sovereign platforms for citizen services and public-sector AI.


4. Critical national infrastructure – air-gap, defence-grade security, and the need for choice of infrastructure and human-centric AI [96-108][266-283].
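Kumar's "data product, data contract, data catalog" framing, together with the "jiska data uska adhikar" principle quoted above, can be made concrete with a small sketch. The class names, fields and consent check here are hypothetical illustrations, not any real platform's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataContract:
    """Terms under which a data product may be consumed."""
    owner: str                    # 'jiska data uska adhikar': the creator keeps the rights
    allowed_purposes: frozenset   # uses the owner has consented to
    provenance: str               # where the data came from (supports 'glass-box' audits)

@dataclass
class DataProduct:
    name: str
    contract: DataContract

class DataCatalog:
    """Registry that enforces a product's contract before releasing it."""
    def __init__(self):
        self._products = {}

    def register(self, product: DataProduct):
        self._products[product.name] = product

    def request(self, name: str, purpose: str) -> DataProduct:
        product = self._products[name]
        if purpose not in product.contract.allowed_purposes:
            raise PermissionError(
                f"'{purpose}' not consented to by {product.contract.owner}")
        return product

catalog = DataCatalog()
catalog.register(DataProduct(
    name="multilingual-speech-corpus",
    contract=DataContract(owner="community-data-cooperative",
                          allowed_purposes=frozenset({"research", "model-training"}),
                          provenance="2024 field collection"),
))

catalog.request("multilingual-speech-corpus", "model-training")   # allowed
# catalog.request("multilingual-speech-corpus", "ad-targeting")   # raises PermissionError
```

The design choice the panel points at is that consent and provenance travel with the data product itself, so any consumer, including a model-training pipeline, must pass through the contract check rather than copying raw source data.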


Closing – The panel invited participants to scan the QR code displayed on the digital backdrop to provide feedback on the report and contribute to the forthcoming Sovereign AI policy document [435-438]. The session concluded with the signing of an MOU between NASSCOM and Amrita Vishwa Vidyapeetham, a group photo, and a reaffirmation of the commitment to advance India’s AI capabilities for both national and Global South impact [414-420][454-455].


Session transcript – Complete transcript of the session
Speaker 1

Thank you. Thank you. Hello and good afternoon, everyone. Thank you for joining us for this session on Sovereign AI for India. Before we begin the panel discussion, we are happy to announce the launch of the Sovereign AI research report by Amrita Vishwa Vidyapeetham. May I invite the following representatives to kindly join us on stage for the release of the report. From Amrita, we would like to invite Pro-Vice-Chancellor Dr. Maneesha V. Ramesh and, if available, head of the AI safety research lab Dr. Shiva Ramakrishnan, and any other representatives from Amrita Vishwa Vidyapeetham that you would like to invite on stage, sir. Alright. Professor Suresh, if we could please have you on stage. I would like to invite Mr.

Ankit Bose, Head, NASSCOM AI, on stage as well. Thank you so much. Yeah, yeah, absolutely. You can take a seat, sir, if you want. Thank you. Thank you, everyone. We now move into the panel discussion. To guide this conversation, we are joined by Mr. Ankit Bose, head of NASSCOM AI. Joining him today are our distinguished panelists: Professor Ganesh Ramakrishnan from IIT Bombay and BharatGen, Mr. Sunil Gupta, co-founder, MD, and CEO of Yotta, Mr. Bhaskar Gorti, EVP, Tata Communications, Mr. Kalyan Kumar, CPO, HCL Software, and Mr. Brenno Mello, founding GTM executive, GenSpark. Ankit, over to you. Professor Ganesh will be shortly joining us in two minutes. Thank you.

Ankit Bose

So hi everyone, I think we had a good launch and we have a very strong panel. Ganesh was on the way and is still stuck in traffic; he is walking in. So meanwhile we start the discussion. Happy to have a very strong panel. So why don’t we do this: we start with the introductions, right? Kalyan, we can start with your quick introduction, then Sunil and then Brenno.

Kalyan Kumar

Yeah, hi, Kalyan Kumar, call me KK. I run the software product business for HCL, HCL Software. We are the largest India-headquartered enterprise B2B software company, with about 10,000 customers and about 1.5 billion dollars of revenue. And we are very intricately involved in building software products which are sovereign by design.

Sunil Gupta

Hello, good afternoon. Good afternoon. Good afternoon. My name is Sunil Gupta. I am co-founder and CEO of Yotta. We run data center campuses. We have built a Sovereign Cloud in India, which is running a whole lot of mission-critical Government of India applications. Recently, we migrated Bhashini from a hyperscale cloud to our Sovereign Cloud. Our claim to fame in the last two years is that we have got thousands of NVIDIA GPU chips into India. And all the models which you are hearing getting launched in this summit – Sarvam’s model, IIT Bombay’s BharatGen model or Soket’s model – they have all been trained on our GPU clusters, and now they are being made available for public use.

Thank you.

Brandon Mello

Hello. Good afternoon. My name is Brandon Mello. I work for Genspark.ai, a Palo Alto-based company. We have been around for about 10 months. We are the fastest-growing AI company right now in the world; we just broke $200 million in ARR. Our solution has been incredibly well received and adopted in the market. India is our third-largest market, and our approach is to drive adoption from the bottom up by bringing agentic AI to the knowledge worker. Thanks for letting me be here.

Ankit Bose

Great, great, great. And hi, folks. I’m Ankit Bose. I head AI for NASSCOM – whatever NASSCOM does in AI, I support that, I lead that, right? And we will be joined by Ganesh, who is from BharatGen. He’s leading the, you know, sovereign AI model-building effort in the country, right? So meanwhile, let’s start. I think, Sunil, let me start with you. The first question I would want to ask, after five days of immense brainstorming around, you know, AI for the country, AI for the world: what is the top thing you say which, you know, India has to do, right, to build its sovereign capability, not only for the country but for the Global South as well?

Sunil Gupta

Yeah. Ankit, if I take everybody just two years or maybe two and a half years down the line, when ChatGPT came on the world scene, basically AI capability came into consumer hands. A big debate happened in India – obviously in government circles, industry circles, telecom circles, technology circles, everywhere – that while India has got everything which is needed to succeed in AI: we have been software and services leaders for the last three decades; we have a startup ecosystem; on the skill-set index of mathematics, science, engineering, we are always the best; as a market, we have literally close to 1 billion people carrying smartphones, creating and consuming content. AI ultimately, in most of the cases, you know, resulted in some apps which will be giving some productivity to us.

So both on the demand side and the supply side, including data sets – India will have the best data sets available. So everything India has. But what India was not having at that time was compute. Because AI does not run on regular data centers or regular CPU compute; it requires specialized GPU compute. So I would say that the biggest problem – and of course you have to take care of the entire stack: models, data sets, applications, everything – but the core problem to solve for taking AI to the masses was: how do you make compute available in an abundant way, so that we don’t have to think about it? That should become just hygiene which is always available.

And that’s the problem we tried to solve. You know way back at that time Jensen was in India. I happened to get to meet him and he says we as NVIDIA are too committed to India. We can extend your parity allocation. We can give you engineering support, everything. But somebody has to take a step forward of not only putting your data centers and power and everything but you also need to put in chips and we will give you everything. And from there to now today we are running almost 10 ,000 chips. You know as I said majority of the models which you are hearing sovereign models getting launched in India. You know they have been trained on a GPU.

But the real thing, I would say, is starting now. Many of these models are great – you must have heard of Sarvam’s model beating Gemini and ChatGPT on many of the benchmarks. And they are making them absolutely for India use cases, like OCR – you know, handwritten notes and all that, how do you get them converted, and all that stuff. So these are real India purpose-built use cases and models. When they start scaling, when they start getting adopted by the masses – we have seen how one UPI changed our lives. Imagine we have a UPI in 50 different sectors in the country; a 50-UPI movement will come into India. At that time, the number of GPUs required will be in the millions. Today we are happy that as a country we have X thousand GPUs.

But if a single company like SpaceX or like Meta can have 1 million GPUs, India as a country requires multiple million GPUs. So while we are working on all the upper layers of the stack – and Indians are very good at that: models, data sets, applications – we need to solve this issue. We are taking care of infrastructure problems; we are taking care of railways and roadways and airports. We also need to create this digital infrastructure, take care of that, make it available abundantly to every startup, every, I would say, academic community. We make it available at a very low price. The Government of India’s IndiaAI Mission is playing a huge role. On one side, they have asked people like us, incentivized us, to invest into the GPUs.

But they are taking GPUs from us, putting in their own money, their own subsidy, and then giving it to the Sarvams and IITs and Sokets of the world. And they say: now you don’t have to bother about money; just go and make India’s flagship model. And the result is there to be seen: in two years, India has come a long way, and we have a long way to go. The compute problem has to be solved.

Ankit Bose

Great. Thank you. Thank you, Sunil. Same question to you, KK. You know, what is the one thing you feel can add the edge, right, to the whole effort?

Kalyan Kumar

When you look at sovereign AI – I think the Minister of Electronics and IT, Vaishnaw-ji, was mentioning the five-layer stack, right? And that’s where, for what Sunil mentioned, in an easier way I use the word infrastructure, which combines energy, power, cooling – the whole stack. So that’s providing that layer, and then there is the whole model piece. I think as you train, and when you start to deploy at scale, a couple of things become very interesting. You need to start to also build a data stack: data platforms, vector DBs, edge vectors. I personally think you can do as much centralization as you want, but the way the data-consumption model is going, it is going to get highly distributed; it is going to go down into the edge, correct? So you need a very different kind of inferencing and those capabilities. So you need a data layer. Something which we are doing is very interesting: outside of Oracle and IBM, the only other company which has all the patents for databases is HCL, because we acquired Actian.

So Actian owns the original patent of Ingres. And every derivative today, whether it is Postgres or any one of them, is basically an Ingres query-processor derivative, including SQL Server and others. Like that, we also acquired an asset from CWI in the Netherlands, so we have a VectorDB, the original vector engine. So we’ve been building a lot of that asset portfolio, HDB, and in April we’re going to release a localized vector AI engine, which again can run on-device – because as the AI PCs become more and more common, the edge becomes more and more important. So, building that, and building the data disciplines – I think that’s a very important layer. A lot of times what happens is we worry about infrastructure, and then we think about the model, and then the app.

The data platform is going to become very important, because as we’re building the data platform, the enterprise will only scale if you get your data-centric approach right: data products, data contracts, data catalogs and those kinds of things. Because finally, the AI use case is going to be built on how good the quality of your data is. Yeah.

Ankit Bose

Great point. I think compute, data, a data stack for the country – very important. Let me come to you, Brenno. Again, the same question, right? If India has to build sovereign AI for the country and the Global South, what’s the top one thing you will say which will help the whole cause?

Brandon Mello

Yeah, so it’s interesting. MIT last year ran a big report and they said 95 % of AI pilots actually never made it to real production, right? So in my point of view, this is never really a tech problem. It’s really a production problem, right? So in my point of view, actually like when I look at a our solution, right, like we are able to deploy over thousands of companies in only eight weeks, right? So when I look at that, there’s really, it comes down to three reasons why this is happening in the industry, right? And the first one is what I call ROI invisibility, right? So when you look at companies right now, it’s really easy to get a budget for a pilot, right?

But when it comes to reality: can they get a budget to get the project done, right? So the data that I have to share with you guys, which is astonishing, is that a third of CFOs nowadays cannot quantify ROI inside of their organizations, and only one out of ten actually has tools that can measure ROI. So what ends up happening is, whenever you talk to those organizations and ask how they are actually going to measure productivity gains, they don’t have the answer. What’s the baseline? They don’t have the answer. So whenever you bring the CFO in for that project approval, it ends up with the project never getting approved, and it ends up in that cycle of getting stuck as a pilot, right?

So, number two, I think, is data-trust and compliance friction. I think there’s a huge amount of red tape in terms of what happens inside of organizations. It’s very departmentalized, where each part of the organization is trying to solve for its own department: IT is trying to solve for IT, procurement is trying to solve for procurement. Because no one’s really trying to solve it as an organization, the project ends up stalling. So something that could essentially take a few months to resolve ends up taking six months to a year.

And like I say in sales, time kills every deal. Last but not least, my third point is the champion problem. There’s a severe issue within organizations nowadays: there’s really no executive sponsorship. And whenever you don’t have executive sponsorship, especially for AI opportunities, deals never get approved. People, especially at the bottom tier, don’t understand what’s going on, and when there’s no clear alignment within middle-tier management, deals never get approved.

Ankit Bose

Great. Let me summarize the three points: you need closely collaborating teams, right, with a single point of view and executive sponsorship. I think that will solve the adoption piece at last, right? Let me come to you, Professor Ganesh. Ganesh, we have discussed a lot on AI these last five days – for India, for the globe – and we had three points of view. I asked them: give me the one top thing. You heard probably from Brenno and KK, and from Sunil it was compute. What is your top one take on what India should do so that we can lead the sovereign AI race for the country and the globe?

Ganesh Ramakrishnan

I would suggest interoperability at every layer. I think it was also alluded to by earlier panelists. Interoperability encourages participation – in the words of the PSA, meaningful participation, right? Interoperability also helps you present alternatives, because there is no one-size-fits-all, and you need to ensure that in the trade-off between fidelity and latency, or between sensitivity and specificity, you are able to find the right sweet spot which is suitable for you; you can pick something that is appropriate. On a lighter note: I was driving from the PSA office and there was such a traffic jam, which most of you experienced, so I exercised my sovereignty and I started walking – you find alternatives when you think sovereign. Three kilometers; that’s why I was late. So there are alternatives, and also provisions for human participation. There could be places where AI could be substitutional, but many other places where you may want it to be just supplementary or complementary.

So alternatives is another thing that interoperability provides for. And I think the very key is scale-out. I mean, if just by scaling up we could cater to everyone, great – I would say that at least checks one box, which is people being catered to. But we are not even there; scaling up is not going to cater, the capabilities are not there. But even if it were, hypothetically, I think participation would also ensure that people are part of the process, that it’s informed. I mean, BharatGen – I take pride in one of our consortium members, IIM Indore. We are a consortium of nine academic institutions. And the Institute of Management, what are they doing? They do a fabulous job of going to many of the second-tier cities, going to people who have data, and engaging in conversations, in education.

That data is an asset, and you could actually transform that asset into IP generation, and not just source data. So the dialogue, right, and informed decision-making is where participation is encouraged when you have interoperability. I just want to add to what he said – he made a very interesting point. How do you monetize data, correct? And this is something which needs a very different approach, because today what happens is you are sourcing data. And I think the PM yesterday made a very amazing statement, correct? He is saying, jiska data uska adhikar (whose data, their right), correct? Very interesting. But look at what he’s saying: the creator of the data, the producer of the data, the consent provider for the use – all have a role to play, and that’s why I’ve been using this word, a data product or a data catalog.

So you need a catalog first. You need to build a data product and then set up a data contract, which is fundamental for interoperability. I just want to add: if that gets solved, I can choose my own personal data and say, of my data catalog, you can have five things to access. I think India has proven an amazing way with identity and payments. So I think we can actually set up an environment where you can really build this. And the data benefactor is also the same person. So, great point, Professor. I think it probably means removing or optimizing the various layers and taking it to the last person in the ranks. And it will help scale to the 1.4 billion, which is what we need.

I think, thank you for that. Let me ask you a second question – a very, very direct question. As a country, we are building our foundation models; you are one of the people building foundation models for the country. And at large, we have built sub-500-billion-parameter models, while globally they are going to 5 trillion or more. The gap is so huge, right? What do you think India’s moat can be when we are really, you know, in such a situation where we are at a disadvantage, though we have to handle it aggressively? Yeah, so the other important takeaway, which probably, you know, addresses some part of what you’re asking, is cooperation, right?

Collaboration. Collaboration, honestly, is not just a transactional process. It begins here, right? The will to understand the other side. I just published a book, you know, Informatics and AI for Healthcare, with my colleague, Shetha Jadhav. And what we did in the entire book was, I mean, empathize with the entire life cycle of a healthcare practitioner, and we tried to map every ML example, informatics example, parsing, to healthcare, right? And vice versa – there was reciprocation from the other side as well. It was a very interesting exercise. I think that’s how co-design also happens. So collaboration is actually how to do innovation, and again, China has shown in many ways, right, in contrast to the US ecosystem, that co-design can lead to very innovative ideas. And co-design is often even lacking at the level of algorithms and infrastructure – right there, new algorithms can come up – all the way to the application layers. So collaboration also comes by creating an ecosystem where people can participate. Since you alluded again to BharatGen: we have a consortium of nine academic institutions, and the whole collaboration is through a Section 8 company, a not-for-profit company, which engages with for-profit entities but also the academic institutions; 60 full-time employees work with 100-plus researchers and master’s students. It’s been a very profound exercise in a very short span of time. I mean, we may say we are late – since you brought up the landscape outside, which is 1-trillion-plus parameters, and that’s also our North Star; at least from the IndiaAI vision, that is our goal, to get to at least 1 trillion parameters. But even for the 17-million-parameter model that we have released, there is a lot of research due diligence that has gone into the architecture choice, and actually we are very proud of whatever model we released. Ensuring that, if you have two shared experts, one of them is actually catering to languages and mixed code and the other is catering to domain – that due diligence was actually done based on the Indian context, right? The fact that we covered 22 languages in our speech model, the text-to-speech model – again, in all of that we explicitly captured the common phonetic vocabulary of Indian languages. And that’s only possible through this process of empathy.

I mean, the linguist has to empathize with the computer scientist and vice versa. If we do that, we can actually create magic, believe me. You can create magic; we just have to break our silos, and the biggest silos are sitting here. In fact, an endorsement of this was when we actually built our LLM-enabled speech-to-text model. We had a projector layer which actually projected from speech to text, and we used a mixture of experts for the projection. It was very interesting: the experts for Hindi and Marathi performed very similarly – I mean, they were the same expert; the expert got shared. Whereas for Telugu, there was collaboration between the Hindi and Tamil experts. So data and domain knowledge actually reinforce each other.

So this is actually a time where we can break the language barrier. In my interaction with him on 8th Jan, I gifted him a book from our consortium called Samanway. Samanway stands for bringing all languages together, and he said: we need to use AI also to show the strength of India – it’s not just AI for India, but AI by India. Great, great. I think the point of collaboration – and, you know, the story we have all heard, a single stick versus a bunch of sticks – I think it’s very true, and that is the moat for India: collaboration, building that collaborative effort between different universities. Bringing nine different universities together to work is gigantic work, and what you have created is amazing. Also, we are very happy that three days back we announced an MOU with a heritage foundation in the US; we got a lot of support from people in the Bay Area. So once you open up for collaboration, you will find there is support from around the world, and it’s very, very good, and I think that’s the most important thing. Great, great, great.

Thank you, thank you, Professor Ganesh.

Ankit Bose

So, let me come to you, Sunil. I think we all agree that compute is one of the biggest pillars, right? And the government is doing their bit. But again, in terms of compute for the country, for sovereignty – can it be a shared commodity? Can it be, you know, some commodity which different sectors of the country, or the ecosystem, come together and build, right? How do we solve that problem? Because, as you rightly said, a few thousands versus a few lakhs, right – that gap is very high.

Sunil Gupta

Number one, they said: you all come and empanel with us at the right price point and right quality, and you declare how many GPUs you can give. They were not forcing us; they said, okay, you decide how much you want to give. We all got empaneled. We contributed GPUs, which were made available to startups. Then the government said: every quarter we will come back and encourage newer providers to come up with capacity, and even existing players can top up their capacities. And each time, because of market forces – when the quantities start increasing and supplies start increasing, the pricing also starts reducing – the government says: okay, if a new player comes, they can reduce the price.

Existing players will have to match, and they keep on empaneling more and more capacity. And that is what has resulted in the 38,000 GPUs which the government is talking about – the shared compute facility, which is nothing but a combination of the compute capacity created by multiple providers like us. And yesterday the Prime Minister announced that 20,000 more are being added to this facility. So I would say, as a concept it has been proven over the last 18 months that this is doable, and so has the technology. While technically it’s possible that the same model can get trained – Ganesh-ji, I’m sure, can talk very authoritatively on this subject – technically you can also train on multiple different clusters, and of course inferencing you can do in multiple different places. But even if you don’t do that, what the government did, very democratically: okay, IIT, we will put you onto this service provider; okay, Sarvam, we will put you onto this service provider; okay, GAN, we will put you onto this provider. So the government is democratically making sure that they are encouraging industry to invest in creating this capability which is required, and because we are getting business, we are scaling up; we are investing more and more now. And they are making it available to people, because India needs its own models. We may use frontier models for certain purposes, but as the minister was saying, 95 percent of the use cases of the country can very well be done by a 20-billion to 100-billion-parameter model. Of course, Ganesh-ji is carrying a mandate to create a trillion-parameter model also, which the country requires for all those other things – why should anybody else do it for us, right? Their success – BharatGen’s success and Sarvam’s success – has proven that India can do it. So I would say that the shared compute framework which has been built is proven; we just need to scale it up. And my request to the government, which I think they are acting on, is: don’t limit it only to training of models. Model training is one step, done; now these models will be going to the masses for adoption, and you will require millions of GPUs. I think I’m repeating myself, but that is where the government needs to fund the first cycle of inferencing on these models. When users start adopting, let’s say, an agriculture use case or a healthcare use case or an education use case – whichever use case, and multiple UPI-equivalent use cases will come up – it will take time for users to start adopting it, start accepting it, making it a part of their lives. Only at that time will the user be happy to pay 10 paisa per transaction, or maybe 50 rupees per month subscription for it. At that time, these models and use cases will become self-sufficient to generate revenue also; then they will not need government support. But at least for, I would say, the first cycle of inferencing – maybe one year or two years – the government should not only support the funding of the training of the model but also support the first phase of inferencing on the model, so that adoption happens and revenue models emerge. After that, the government can say, okay, let the private sector invest, and the government will come back to its original role of regulator.

Ankit Bose

Great. So I think I will augment that and add a few thoughts. The IndiaAI Mission has really created a single fire, right? And this fire is going to every state in the country – yes, all 28 states, all eight union territories are building AI CoEs, and the mandate for each CoE is to give compute, right? Like a small wildfire, it will spread all across the country; it will be phenomenal. But again, at the same time, you know, we have to keep up the pace, right? I think the one thing is pace.

Sunil Gupta

Absolutely, Ankit. Just to add: two years back, when we said we were putting in 8,000 GPUs, everybody started laughing, because we were starting from a base where India did not have GPUs. Today we comfortably say India will go to 50,000-60,000 GPUs, but even today I can tell you India requires millions of GPUs. In the US, just three or four deep-tech companies collectively own millions of GPUs. India has 1.4 billion people, of whom 1 billion carry smartphones, creating and consuming content every single minute. And as Ganeshji will talk about, they are all creating voice-based AI, because India's AI will be voice-based. People are talking in their own native language, or a mixture of Hindi, English, everything.

And they'll be comfortable doing that, instead of writing in their native language on a screen, which is not so easy. Innovations are being made so that even from a feature phone or a regular telephone line, without a smartphone, you will be able to talk to an AI model at the back end. When you are talking about 1.4 billion people coming into the AI fold for multiple use cases, just imagine the number of GPUs that will be needed for inferencing, and how many GPUs for training multiple models across all these sectors. So you are right, Ankit. What we have done in the last two years is kudos to the whole ecosystem, to government and to everybody, all of us.

But we need to keep on building for the next 7, 8, 10 years. Just to give one or two more data points: India is creating and consuming 20% of the world's data. One-fifth of the world's data is created and consumed by India. Only 3% of that data is hosted in India. That shows the scale of the infrastructure India needs to build, both at the physical data-centre level and in terms of compute or GPUs. Because we don't want any single country or any single company to start dictating our digital destiny. We need to be as sovereign as possible.
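A rough sizing sketch of the "imagine how many GPUs for 1.4 billion people" question above. Every parameter here is an invented assumption for illustration, not a figure from the panel:

```python
# Rough sizing of inference GPUs for population-scale voice AI.
# All figures below are assumptions chosen only to show the method.

POPULATION = 1_400_000_000
ACTIVE_SHARE = 0.5             # assume half the population uses AI daily
MINUTES_PER_USER_PER_DAY = 20  # assumed voice-assistant usage per user
PEAK_FACTOR = 3                # peak concurrency vs. the daily average
STREAMS_PER_GPU = 4            # assumed concurrent voice streams per GPU

def gpus_needed():
    daily_minutes = POPULATION * ACTIVE_SHARE * MINUTES_PER_USER_PER_DAY
    avg_concurrent = daily_minutes / (24 * 60)   # average simultaneous streams
    peak_concurrent = avg_concurrent * PEAK_FACTOR
    return int(peak_concurrent / STREAMS_PER_GPU)

print(f"{gpus_needed():,}")  # about 7.3 million with these assumptions
```

Even with moderate assumptions the count lands in the millions, consistent with the speaker's claim that today's tens of thousands of GPUs are only a start.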

Ankit Bose

Thank you, Sunil. Thank you. Kalyan, let me come to you. So, Kalyan, I think one big base for sovereignty is the skill set: to research, develop, deploy, and do all of that responsibly. HCL is one of the companies that has done that in the last two or three years. What would be your nuggets? How can other companies, other players in the country, and other countries do that?

Kalyan Kumar

So, look at what India is known for. India is known for capability, historically. NASSCOM, right? But that capability was historically, for the most part, capability for hire: you build capability to build things for others, and that's been the core business. If you really look at it, when some other country thinks about sovereignty, 50% of global tech engineering-services, development and operations talent is sitting out of India. You see how the GCCs have grown. But where is the pivot? The pivot, I think what the Professor was talking about, is that you have to pivot towards build. We have always leaned towards services. So: building, research, development, building your own IP, and how do you make "India for the world"?

I think that's very important, and that's what our journey has been. We have one advantage: we are a company run by a single majority shareholder. In 2015-16, Mr. Nadar had a very ambitious vision. He said, we are building products for others; we should start building for ourselves. That was 2015, a very conscious strategy, and he realised that if you want to play in the global market, you need market permission and market access, because people will only buy if you are a software product company. Hence the whole idea of acquiring intellectual property for India. Because if you really look underneath these pieces, you could build on open source and other stuff, but suddenly some of these open-source companies are getting acquired and becoming closed source.

This is becoming a very interesting pattern, and suddenly some of them are getting classified as dual use. Suddenly they'll say, oh, this is dual-use tech, so I can only release this much. So from a skill standpoint, you need fewer, smarter people. I'm making a very controversial statement: you need fewer people, smarter people. You need engineers more than coders. What's happening is that we're producing coders. You need engineers, people who think in systems, people with a research bent. I meet MBA students and ask them, what did you do before? I did engineering. I say, why did you waste four years of your life if you wanted to go and do an MBA? Why are you not going deeper?

Why don't you specialize in a domain? Those are fundamental things, I would say. The big leap is going to be, and I think India can solve something very interesting here, as he was referring to with the PSA: quantum. Because given the kind of compute needs you have, and looking at the energy GPUs consume, you could completely change the computational paradigm. But that needs fundamental science, research, physics. And no one wants to study physics; if you go back 20 years in this country, everyone wanted to go and do coding. So those are the fundamental skills. What we're doing, in a very small way, is acquiring and building talent and research pools.

So 50% of HCL's software product engineering is in India. But my second-largest engineering centre is in Rome, the third is in Israel, then Perth, Austin, and Chelmsford outside of Boston. Why? Because if global companies can come to India, acquire talent to build and research, and then take the IP to the US, I'm doing the reverse. So for AppScan, which is a code-security product, the security heuristics are built in Israel, the SaaS UX is built in Boston, the core engineering is in Bangalore, but the IP is registered in India. That is where we are moving in a very different way: we are now tapping global talent to build for us. We are still a billion and a half; we are not big. But we are in 130 countries, so we are a step in the change. It's a long journey, and it requires getting away from short-term "hire people to get things built" thinking. You have to go to a very different model. That's what we are starting within the larger scheme of HCL, and I think we are walking the right path, acquiring assets continuously and building on that.

Ankit Bose

So let me add what I am seeing at the skill level. The persona NASSCOM is focused on, at least, is the developer, and the way we code is changing. So NASSCOM has made a concentrated effort to help developers learn the new way of coding and redefine the whole SDLC. As a target, my team and I have taken on enabling 150k developers across the country in the next six months: make them AI-enabled, AI-ready, and help them unlearn and learn the new way. That's one thing. But finally, and I should make everyone aware of this, there will be announcements sometime soon.

With the ministry and the education industry, we are rewriting the whole technical curriculum: BTech, MTech, MCA, BCA. We are adding more specialization, as was rightly said, because we need specialists, not generalists. An engineer studies 48 subjects in four years; at the end, what is he specialized in? It's down to luck: the group he gets, the project he takes, whatever job he lands. That's what we are changing, and announcements will be happening soon. But again, that's what is happening in the background. Coming back to you, Breno. You have a product which is so simple that anyone can use it and build agents with it,

and benefit from it. Let me ask you this. One big piece for AI to really mature and have impact is adoption, right? And you started by saying 95% of projects fail, or don't go to production. So if we have to do adoption at scale, what are the top issues you see, and what pointers would you suggest so that the companies and folks here can mitigate them?

Brandon Mello

Yeah. So I'll give you three; one is very specific to India, actually. These relate to our solution, but I think they are real use cases, because, like I said, the proof is in the pudding. One: you have to solve a real use case, something that actually changes people's lives. AI is complex, and people are still trying to figure AI out, so it needs to fit into people's everyday life. In our case, for example, look at Cursor or Lovable: they changed the lives of software engineers through vibe coding. At GenSpark, we looked at people producing office work.

People producing Excel, PowerPoint, essentially mechanical everyday office work. If you think about it, every time you do an office task, much of that work is very mechanical, and that's why we saw this massive growth in our solution. So to your point, adoption comes from something that can change people's lives in a very simple way. The second thing is consolidation of tools. From the time we wake up in the morning, most of us pick up our phones and are inundated with messages and apps; then we go to our office work, where we probably have a hundred tools we have to touch. In our research at work, we found people waste on average two and a half hours a day just flipping between different solutions, and that causes loss of context. So if there's a way to consolidate tools, that also drives adoption. And the third one, especially in India: there are a lot of different languages in this country, which you brought up.

So in this country especially, LLMs really struggle to produce the right language, given all the different dialects this country has. Being able to really naturalise that, and to bring sovereignty here, is very important. And last but not least, people are very scared about data: once they bring data into AI, how is that data going to be treated? So the solution needs to bring a sense of security about how that data will be managed.
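The "two and a half hours a day lost to tool-flipping" figure mentioned above compounds quickly. A quick sketch of the annualised cost; the workday count and hourly-cost figure are assumptions for illustration, not Genspark's numbers:

```python
# Annualise the claimed daily context-switching loss for a hypothetical
# workforce. Workdays and hourly cost are assumed values for illustration.

HOURS_LOST_PER_DAY = 2.5       # the figure cited in the discussion
WORKDAYS_PER_YEAR = 240        # assumed working days
EMPLOYEES = 1_000              # hypothetical workforce size
HOURLY_COST_INR = 500          # assumed fully loaded cost per employee-hour

hours_per_year = HOURS_LOST_PER_DAY * WORKDAYS_PER_YEAR * EMPLOYEES
cost_per_year = hours_per_year * HOURLY_COST_INR

print(hours_per_year)  # employee-hours lost per year
print(cost_per_year)   # rupees per year
```

For a 1,000-person organisation under these assumptions, that is 600,000 employee-hours and ₹30 crore a year, which is why tool consolidation registers as an adoption driver.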

Ankit Bose

Great. Thank you, Breno. Now the last segment, one last question, 30 seconds each, again probably starting with Breno since you have the mic. AI is not a short game; it's a game for the next five years, ten years, decades, probably centuries. What is the challenge we as humanity have to mitigate, so that we don't end up aligned with something that is hazardous to us?

Brandon Mello

Yeah. Actually, I was having breakfast the other day and the person serving me asked me the exact same question. I think it's how human beings interact with AI. We're still trying to figure out how to properly interact with AI, and with the speed at which AI is evolving, we're still uncertain how to manage that. The line in the sand moves so fast that we can't really catch up, and no one really knows yet how the interaction between AI and us should work.

Ankit Bose

So I'll map the earlier part onto this one: a very specific use of AI for yourself, to make your life simpler. We'll adopt AI as a skill, and we have to build the processes to interact with AI in the long run, because AI is changing and things are changing. Thank you, Breno. Coming back to you, Professor Ganesh. Same question, 30 seconds: what's the challenge you see if we make something that is not aligned?

Professor Ganesh Ramakrishnan

I think the biggest challenge in not making AI aligned is that we will become products, not even consumers. We want to be at the steering wheel. I remember, very fondly, my first machine-translation paper: I called it "machine-assisted human translation". Obviously I can't use that now; it would sound too regressive. But the key is provenance. How can you leave provenance at every step in the stack? Whether it's data aggregation (which again is aligned with the ecosystem: you need an ecosystem to leave provenance on the data part), metadata refinement and data curation, provenance at the level of training and tokenization, or provenance in observability, the other keyword: at the level of the way the model performs.

Models should be glass boxes, because that gives you enough breathing space: where should you actually yield to existing practices versus your own? So if you don't have that view, if the recipes are not made available, if the education isn't there (as a prof I always focus on the education part), I think we'll become products.

Ankit Bose

Thank you, thank you. Sunil, you and then Kalyan.

Sunil Gupta

No, I concur with the views. At the end of the day we should not do AI for the sake of doing AI; it is a means to an end, and the end purpose is benefit for the masses. I remember seeing a YouTube video of the Prime Minister meeting all the startups, and Professor Ganesh was there. The Prime Minister said to everybody: don't make AI to make toys; use AI to benefit the masses on the real problems they face in their real lives. That is where the name of this event, the Impact Summit, comes from. And I think yesterday he also made the point that, unlike previous summits where we were too concerned with security and governance (which are things to be done), you shouldn't only be afraid of AI: with AI you can make your own destiny, build your own future.

So the question is how we create an impact with AI and benefit the masses, and also how machines do not end up dictating our lives; as I said, we should not end up becoming products ourselves. However much AI improves, it will possibly never reach a stage where it acquires human emotions, our sense of gut, our sense of culture, what we convey through our body language and not just our words. So human-in-the-loop, and humans remaining the masters of AI, is something we'll have to guard all the time.

Ankit Bose

Interaction, don't become the product, human-centric development. Kalyan?

Kalyan Kumar

I would break this into four key areas. The Professor mentioned consumer AI, so I'm going to break it into consumer, enterprise, government, and critical national infrastructure and defense, because all four are going to play out; ten seconds each. Consumer AI: you are the product, unfortunately. You now have to use data controls to decide how much you give to get; it's a give-to-get model. The day you click "I agree" on an Android phone or on Apple Intelligence, suddenly you are the product. You're getting something back, but that give-to-get balance is where the regulator, in my opinion, has a far bigger role to play than in enterprise regulation. Enterprise: God made the world in seven days because he had no installed base. Go and talk to CIOs on the ground; their reality is that they've got a big architectural problem: their data landscape is broken. So they have to pivot from process-and-workflow thinking to data-first, a big shift. They need to start with lineage and metadata. Most of these companies don't have correct metadata, so use metadata-discovery techniques, use a knowledge graph to understand the metadata, and then organise your data so that AI can benefit. The big play in govtech is government-to-citizen engagement, G2C: massive. That's where the sovereign-AI play comes in, the work Sarvam is doing, or the whole BharatGen effort, because that's where you can host citizen-service platforms. And the last is critical national infrastructure: air-gapped networks, private AI, and defense.

So I think we need a broken-up view of this whole thing rather than trying to paint all of them with one brush. But the last point: sovereignty is all about choice, making choices. As he talked about here, it's a great example of choice: I can run on hyperscaler A or B, I can run on Yotta, I can run on Sify, or I can run on my own infrastructure. I need to have that choice; it's all about choice. And second: AI exists for human good, so put people back at the centre, because we have suddenly pushed the human to the side and made everything about AI.

It's about people using AI around them. So that was my thought.
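Kalyan's enterprise prescription above (metadata discovery, then a knowledge graph over it, so a field's lineage is known before AI consumes it) can be sketched minimally. The dataset names below are hypothetical, invented purely to illustrate the idea:

```python
# Minimal sketch of a "metadata knowledge graph": nodes are datasets,
# edges record lineage, so you can ask where a training set ultimately
# comes from before feeding it to AI. All dataset names are invented.
from collections import defaultdict

lineage = defaultdict(list)  # child dataset -> the parents it derives from

def derive(child, *parents):
    # Record that `child` is built from `parents`.
    lineage[child].extend(parents)

def upstream(node, seen=None):
    # All transitive ancestors of a node: its full provenance chain.
    seen = set() if seen is None else seen
    for parent in lineage[node]:
        if parent not in seen:
            seen.add(parent)
            upstream(parent, seen)
    return seen

# Hypothetical enterprise lineage.
derive("crm.customers", "erp.accounts")
derive("warehouse.customer_360", "crm.customers", "web.clickstream")
derive("ai.training_set", "warehouse.customer_360")

print(sorted(upstream("ai.training_set")))
```

Real implementations sit on catalog and lineage tooling rather than an in-memory dict, but the query is the same: traverse the metadata graph to establish provenance before the data reaches a model.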

Ankit Bose

Great. Thank you. We have had a lot of good nuggets from everyone, and I think we'll continue this conversation afterwards. As part of NASSCOM, sovereign AI is a big initiative for us; we have been driving it for the last three, three and a half years. Ganesh knows that, Sunil knows that, and we have worked plenty with the services companies. To keep it going: this is not an end point. We have to think about sovereignty and about how India builds AGI capability, quantum-AGI capability. That's the journey we are on as NASSCOM, and we are writing a policy document for government on a sovereign-AI and AGI roadmap.

And the QR code is there; it will be up here, and I want all of you to have a look and engage with it. I think that's that. Yeah, Ganesh?

Professor Ganesh Ramakrishnan

The potential is so immense; we have not even scratched the surface, not even touched the tip of the iceberg. Sovereignty is critical because the amount of inefficiency in that entire stack needs to be done away with. GPUs were never designed for building these models, right? They are legacy. How can we use even the large workloads we are running to do better ASIC design? Can we use that to build better model-serving engines? There is so much to do. I think everyone should get inquisitive about the entire stack; that's where sovereignty comes from.

Ankit Bose

Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborator; we will have a QR code, so please respond to it and give your inputs. And with that, thank you to my panelists. I loved it, and I hope you loved it too. Thank you again.

Kalyan Kumar

Just one thing I want to say: watch on the 21st, when the PM is inaugurating a new JV which HCL is announcing with Foxconn. It's called India Chips Limited. I would call it patient capital. It's a 16- and 32-nanometer fab that we are creating, basically like an OSAT unit. It will come online after five years; you have to build the whole thing. But it's also about building that skill, correct? That's a big, important thing, and we have to start now; we cannot wait five years down the line. So,

Speaker 1

Thank you so much to our panelists. I request the panelists to please stay back for a group photo right now. You can also access the report Ankit has been talking about via the QR code displayed on the digital backdrop, and leave feedback. I'm also happy to announce an MOU being signed right now between Amrita Vishwa Vidyapeetham and NASSCOM. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high confidence)

“The session began with the formal launch of the Sovereign AI research report produced by Amrita Vishwa Vidya Peetham.”

The knowledge base records that Professor Suresh from Amrita Vishwa Vidya Peetham participated in the report launch ceremony, confirming the launch of the Sovereign AI report [S2].

Additional Context (high confidence)

“Sunil Gupta identified a shortage of specialised GPU compute as the decisive bottleneck, stating that the shared‑commodity pool currently holds ~38 000 GPUs with an additional 20 000 announced, and that “millions of GPUs” will be required for nationwide inference.”

Other sources note India's historic GPU scarcity (only ~8,000 GPUs previously) and recent plans to scale to 50,000-60,000 GPUs, as well as a programme to make 50,000 GPUs available at low cost, providing additional context on the compute gap and scaling efforts [S1] and [S46].

Confirmed (high confidence)

“95 % of the country’s AI use‑cases can be served by a 20‑100 billion‑parameter model, making large frontier models unnecessary for most applications.”

Multiple knowledge-base entries emphasize focusing on smaller models (20-100 B parameters) to address roughly 95 % of national use-cases, confirming the claim [S19], [S89] and [S90].

External Sources (90)
S1
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S4
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — -Sunil Gupta: Co-founder, MD, and CEO of Yotta – operates data center campuses and built Sovereign Cloud in India, manag…
S5
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — -Sunil Gupta- Managing Director and Chief Executive Officer, Yota Data Services Following Cormann’s presentation, the s…
S6
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — – Ganesh Ramakrishnan- Kalyan Kumar- Sunil Gupta – Kalyan Kumar- Ankit Bose – Sunil Gupta- Ganesh Ramakrishnan- Kalyan…
S7
Announcement of New Delhi Frontier AI Commitments — -Ganesh: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S8
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — – Ganesh Ramakrishnan- Kalyan Kumar – Sunil Gupta- Ganesh Ramakrishnan
S10
Announcement of New Delhi Frontier AI Commitments — -Ganesh: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S11
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — And like I say in sales, time kills every deal. Last but not least, I think my third point is the champion problem. I th…
S12
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Hello. Good afternoon. My name is Brandon Mello. I work for Genspark .ai, a follow -up -based company. We have been arou…
S13
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S14
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S16
The challenges of introducing Generative AI into the marketplace — I have been hearing a lot about the shortage of powerful GPUs for AI lately. It seems like the demand is much bigger tha…
S17
Keynotes — Marianne Wilhelmsen: but as Norway prepares for the upcoming IGF 2025, I look forward to welcoming many of you in June a…
S18
Advancing digital identity in Africa while safeguarding sovereignty — A pivotal discussion on digital identity and sovereignty in developing countries unfolded at theInternet Governance Foru…
S19
Panel Discussion Data Sovereignty India AI Impact Summit — The discussion began by challenging conventional notions of sovereignty, with moderator Arghya Sengupta framing the cent…
S20
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Voice technology and multilingual capabilities were highlighted as crucial horizontal solutions for healthcare AI in Ind…
S21
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Drawing from his gaming background, the speaker describes a revolutionary shift toward “live operations” in content crea…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — Rather than viewing India’s complexity as a challenge, Raghavan presented it as the country’s greatest competitive advan…
S23
https://dig.watch/event/india-ai-impact-summit-2026/waves-of-infrastructure-open-systems-open-source-open-cloud — And it’s got thousands of steps. It takes about 120 days to make a chip. So $10 billion for 120 days producing a wafer, …
S24
Driving Indias AI Future Growth Innovation and Impact — “Investment also includes energy infrastructure, because without energy, there is really no compute infrastructure you c…
S25
Global AI Policy Framework: International Cooperation and Historical Perspectives — So the infrastructure is missing, right? Now, if you’re talking about policies related to compute, you’re talking about …
S26
Regulating Open Data_ Principles Challenges and Opportunities — Digital ecosystems simply do not function in silos. However, enabling data to move across borders should not mean that c…
S27
DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023 — In conclusion, AI has been instrumental in sectors like health and education, aiding in vaccine development and benefit …
S28
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — Jon Lloyd: I’m just going to pause for a second here, and we’re going to launch a Mentimeter poll. Those of you online, …
S29
Business Engagement Session: Sustainable Leadership in the Digital Age – Shaping the Future of Business — Reyansh identifies a common organizational challenge where leaders disagree about implementing emerging technologies lik…
S30
Research Publication No. 2014-6 March 17, 2014 — Based on the map of roles provided in the previous section, one can identify a number of potential role conflicts. For …
S31
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Additionally, the analysis notes that the need for skill development aligns with the Sustainable Development Goals (SDGs…
S32
Supply Chain Fortification: Safeguarding the Cyber Resilience of the Global Supply Chain — With the amount of material being brought into the decision-making process by emerging technologies, decision-makers nee…
S33
Open Forum #33 Building an International AI Cooperation Ecosystem — Practical implementation requires comprehensive ecosystems combining government guidance, industry-academia collaboratio…
S34
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — “No single institution, no matter how large or how well resourced, can navigate this epoch alone.”[64]. “It will require…
S35
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Unexpectedly, there was strong consensus across industry, government, and academic perspectives on the need for collabor…
S36
Towards inclusive digital innovation ecosystems – do’s and don’ts and what next? — Ms. Vahini Naidu:discourse on data for development? Thank you, Anita, and good morning, colleagues. So, essentially, wha…
S37
POLICY BRIEF — Many e-diplomacy tools are free (Facebook, Twitter, YouTube and Flickr accounts for example). Even software platf…
S38
POLICY BRIEF — Finally, the rise of open-source software (i.e. free software without copyright constraints) and the increasing co…
S39
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Kalyan calls for moving away from a services‑centric model toward building proprietary IP, hiring smarter engineers, and…
S40
A Digital Future for All (afternoon sessions) — There is a need to build AI capacity in developing countries to ensure they can participate in and benefit from AI advan…
S41
From India to the Global South_ Advancing Social Impact with AI — Thank you very much. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. I’d like to welcome everyone to t…
S42
Artificial intelligence (AI) – UN Security Council — The discussions on structuring capacity-building initiatives in AI to maximize their impact, especially in regions with …
S43
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — If compute, database and foundational models remain concentrated of a few, we risk creating a new form of inequality, an…
S44
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: Thank you for convening this and bringing this very, very important subject at FORC, like how do we bala…
S45
Indias Roadmap to an AGI-Enabled Future — -Compute Infrastructure and GPU Requirements: Analysis of India’s current and projected compute needs, with estimates su…
S46
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Actionable Solutions and Pathways Given the lack of GPUs and data centers in the Global South, new business models need…
S47
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S48
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — Government policy and procurement can drive multilingual adoption Government policy can drive multilingual internet ado…
S49
Announcement of New Delhi Frontier AI Commitments — “First, advancing understanding of real‑world AI usage through anonymized and aggregated insights to support evidence‑ba…
S50
MahaAI Building Safe Secure & Smart Governance — Unexpected focus on quantum computing as an immediate policy concern rather than a distant future issue, highlighting th…
S51
D-Wave quantum backs congressional push to expand US Quantum Program — D-Wave Quantum Inc., a leading player in the field of quantum computing,has voiced its endorsement for recent Congressio…
S52
Building fair markets in the algorithmic age (The Dialogue) — Combining legal and policy research with insights from physics, chemistry, and mathematics can provide better evidence. …
S53
Contents — We believe this vision can be realised and government strategy has a key role to play. We argue that the government’s fo…
S54
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S55
From KW to GW Scaling the Infrastructure of the Global AI Economy — High level of consensus across technical, business, and policy dimensions. The agreement spans both global technology pr…
S56
AI 2.0 The Future of Learning in India — High level of consensus with remarkable alignment across diverse stakeholders (government officials, academics, industry…
S57
Keynote-Jeet Adani — As we all know, under peak load, advanced processors generate extraordinary heat. Systems throttle when power falters an…
S58
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Le Fevre Cervini argues that Europe lacks a unified vision and operates with fragmented country-by-country policies. He …
S59
State of play of major global AI Governance processes — The Hiroshima AI process was introduced in 2022, marking the pursuit of an interoperable governance framework. Interoper…
S60
Keynote-Mukesh Dhirubhai Ambani — The third commitment centres on building India’s sovereign compute infrastructure through three interconnected initiativ…
S61
Skilling and Education in AI — So when I look at the work that we’ve been doing across board and across product areas, and speaking to some of the anno…
S62
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S63
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — And the big mindset shift that’s starting to occur is this notion that, you know, these aren’t just productivity tools. …
S64
Global AI Policy Framework: International Cooperation and Historical Perspectives — So the infrastructure is missing, right? Now, if you’re talking about policies related to compute, you’re talking about …
S65
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-shared-prosperity — And it’s that kind of computing power that is essential. It’s essential for training large AI models. It’s essential for…
S66
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Interoperability as Essential**: Universal agreement that interoperability is crucial for DPGs to function effective…
S67
DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023 — Interoperability is essential in both technical and legal systems. As technologies become increasingly global, it is cru…
S68
WS #271 Data Agency Scaling Next Gen Digital Economy Infrastructure — – **Interoperability as a Core Enabler**: The panelists discussed how interoperability between different protocols and p…
S69
Exploring Emerging PE³Ts for Data Governance with Trust | IGF 2023 Open Forum #161 — Certain barriers, such as low budgets, less technical focus in decision-making teams, and low priority given to smaller …
S70
WSIS Action Line C7: E-Agriculture — – **Youth as agents of change and innovation**: Young people were identified as crucial catalysts for digital agricultur…
S71
Business Engagement Session — Garza highlights the importance of closing the digital divide to ensure sustainable digital transformation. She argues t…
S72
Introduction — Content and services. To increase demand, there needs to be new, locally relevant content and services. Content and Ski…
S73
Open Forum #33 Building an International AI Cooperation Ecosystem — Practical implementation requires comprehensive ecosystems combining government guidance, industry-academia collaboratio…
S74
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — International cooperation and knowledge sharing are essential, requiring interoperable governance frameworks and multi-s…
S75
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — So working in tandem, working in synchronization is the need of the hour. This transformation cannot be driven by indust…
S76
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — “And I think this integration of government support for both the academic piece of this and the industry piece is really…
S77
Panel Discussion Data Sovereignty India AI Impact Summit — Successful sovereignty requires government-industry partnership with governments providing guardrails and policy stabili…
S78
Artificial Intelligence & Emerging Tech — Audience: Thank you, respected moderator, Ms. Jennifer Chang, for giving me such a golden opportunity to place my questio…
S79
Multi-stakeholder Discussion on issues about Generative AI — Amrita Choudhury: Good evening everyone. My name is Amrita Choudhury. I come from India, represent CCUI which is a civil …
S80
Bridging Connectivity Gaps and Harnessing e-Resilience | IGF 2023 Networking Session #104 — The moderator invited participants to continue discussions at their booth. The moderator thanked everyone for joining th…
S81
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — Technology Security and Data Privacy Officer at Vodafone India Limited, with over 20 years of experience in cyber securi…
S82
COUR EUROPÉENNE DES DROITS DE L’HOMME EUROPEAN COURT OF HUMAN RIGHTS — “In their essentials”, stated the Vice-Chancellor, “these contentions seem to me to be sound.” He accepted that, by the …
S83
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — DG of IMD is here and IMD can elaborate on how robust AI based systems can be deployed at population scale in the Indian…
S84
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — He’s coming back. Thank you so much for your excellencies setting the discussion regarding the Global Network for Center…
S85
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Dr. Romesh Ranawana:Of course. I mean, essentially, the problem is, like it’s been mentioned so many times, are the foun…
S86
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Naveen GV: out a long, lengthy form of information for that to be processed much later by another human…
S87
Leaders TalkX: Local Voices, Global Echoes: Preserving Human Legacy, Linguistic Identity and Local Content in a Digital World — NK Goyal, President of the CMAI Association of India, presented a series of strategies for digital empowerment, includin…
S88
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — An interesting fact is that most of the AI models in the world work in English. But your AI model works in Indian langua…
S89
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — ROI doesn’t come from creating a very large model. 95% of the work can happen with models which are 20 billion or 50 bil…
S90
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-data-sovereignty-india-ai-impact-summit — So I think that’s the goal. by having models which are 20 billion to let’s say 100 billion parameters. You don’t need to…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sunil Gupta
6 arguments, 200 words per minute, 2225 words, 665 seconds
Argument 1
GPU scarcity hampers AI scaling
EXPLANATION
Sunil explains that while India has strong demand and talent for AI, the lack of sufficient GPU compute resources is the main bottleneck preventing large‑scale model training and deployment. Without abundant specialized GPUs, AI cannot be brought to the masses.
EVIDENCE
He notes that after the rise of ChatGPT, India possessed talent, data and market size but was missing compute, requiring specialized GPU clusters rather than regular CPUs [54-60]. He further quantifies the gap by stating that current deployments use about 10,000 chips while millions would be needed for nationwide use cases such as UPI across sectors [75-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussions stress the acute shortage of powerful GPUs for AI workloads, citing supply-chain delays and limited allocations as a bottleneck for large-scale model training in India [S2][S16].
MAJOR DISCUSSION POINT
Compute scarcity
AGREED WITH
Kalyan Kumar
DISAGREED WITH
Brandon Mello
Argument 2
Shared, government‑empanelled compute pool as a commodity
EXPLANATION
Sunil describes how the Indian government has created a shared compute facility by empanelling multiple private providers, making GPU capacity available to startups and researchers at market‑driven prices. This pooled approach is intended to turn compute into a commodity that can be scaled up over time.
EVIDENCE
He outlines the empanelment process where providers declare the amount of GPUs they will supply, leading to a shared pool of about 38,000 GPUs, with an additional 20,000 announced by the Prime Minister [224-236]. He emphasizes that this model encourages competition, price reduction and broader access for the ecosystem.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report describes a shared compute facility created from multiple private providers and coordinated by the government, turning GPU capacity into a commodity that can be scaled [S2].
MAJOR DISCUSSION POINT
Shared compute pool
AGREED WITH
Kalyan Kumar
DISAGREED WITH
Kalyan Kumar
Argument 3
AI must serve the masses, remain a tool, not become the product
EXPLANATION
Sunil stresses that AI should be used to solve real problems for the population rather than being developed as a novelty or a product in itself. Human oversight must be retained to ensure AI benefits society and does not dictate outcomes.
EVIDENCE
He recalls a Prime Minister’s admonition to avoid building “toys” and instead focus on AI that benefits the masses, and he calls for keeping humans in the loop and preventing AI from becoming the product [380-386].
MAJOR DISCUSSION POINT
Human‑centric AI
AGREED WITH
Ankit Bose, Ganesh Ramakrishnan, Brandon Mello
Argument 4
Data localisation is a cornerstone of digital sovereignty
EXPLANATION
Sunil points out that while India generates and consumes a fifth of the world’s data, only a tiny fraction is hosted domestically, creating a strategic vulnerability. He stresses that building physical data‑center capacity and keeping data within India are vital to prevent foreign control over the nation’s digital destiny.
EVIDENCE
He states that India creates and consumes 20% of global data but only 3% of that data is hosted in India, highlighting the need for domestic infrastructure and compute resources [254-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The data-sovereignty panel highlights that only a few percent of India’s data is hosted domestically, underscoring localisation as essential for digital independence [S19].
MAJOR DISCUSSION POINT
Data localisation for sovereignty
AGREED WITH
Ganesh Ramakrishnan, Kalyan Kumar
Argument 5
Government should fund the first inference cycle to accelerate AI adoption
EXPLANATION
Sunil urges the government to extend its support beyond model training and subsidise the initial phase of inferencing, allowing applications to reach users, generate revenue, and become self‑sustaining before private investors take over. This early funding is presented as a catalyst for widespread deployment.
EVIDENCE
He requests that the government not limit its role to training models but also support the first phase of inferencing, describing this as a necessary step for adoption and revenue generation [224-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sunil’s call for government-funded inferencing aligns with the panel’s mention of public support for the first inference cycle to jump-start AI deployment [S2].
MAJOR DISCUSSION POINT
Public funding for inference phase
AGREED WITH
Ankit Bose
Argument 6
Voice‑based AI will be the dominant interaction mode in India, leveraging linguistic diversity
EXPLANATION
Sunil contends that because Indian users prefer speaking in native languages or mixed Hindi‑English, AI solutions should prioritize speech and voice interfaces, even for feature‑phone users, to achieve mass adoption.
EVIDENCE
He notes that “India’s AI will be voice-based… people are talking in their own native language or a mixture of Hindi, English… even from feature phones you will be able to talk to an AI model at the back end” [244-248].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of Indian AI use cases point to voice-first interfaces as critical for reaching users in native languages and on feature phones [S20][S22].
MAJOR DISCUSSION POINT
Voice‑centric AI
AGREED WITH
Ganesh Ramakrishnan, Brandon Mello
Kalyan Kumar
4 arguments, 175 words per minute, 1697 words, 579 seconds
Argument 1
Building a unified data stack with vector DBs and edge inference
EXPLANATION
Kalyan argues that beyond compute, a robust data infrastructure—including centralized platforms, vector databases and edge‑ready inference engines—is essential for sovereign AI. Such a stack enables scalable, low‑latency AI services across the country.
EVIDENCE
He details HCL’s acquisition of Actian (original Ingres patent) and a vector engine from CWI, the development of a localized vector AI engine for edge devices, and the broader portfolio of data-centric assets being built for the stack [96-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion emphasizes the need for a centralized data platform, vector databases and edge-ready inference engines as core components of a sovereign AI stack [S2].
MAJOR DISCUSSION POINT
Data stack development
AGREED WITH
Sunil Gupta, Ganesh Ramakrishnan
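The vector databases Kalyan describes store embeddings and answer nearest-neighbour queries. A minimal Python sketch of that core operation (the document ids, vectors and query here are invented for illustration; a production vector database would use an approximate index such as HNSW rather than this brute-force scan):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, index):
    """Return the id of the stored embedding most similar to the query."""
    return max(index, key=lambda doc_id: cosine_similarity(query, index[doc_id]))

# Toy index: document embeddings keyed by id.
index = {
    "doc_hi": [0.9, 0.1, 0.0],
    "doc_mr": [0.1, 0.9, 0.0],
    "doc_te": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]
print(nearest(query, index))  # doc_hi has the highest cosine similarity
```

The same lookup, shrunk to quantized vectors and a compact index, is what an edge-ready inference engine of the kind described would run on-device.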
Argument 2
Shift from services to building proprietary IP; need smarter engineers and quantum research
EXPLANATION
Kalyan explains that HCL is transitioning from a service‑oriented model to creating its own intellectual property, requiring a talent shift toward engineers with systems thinking and research expertise. He also highlights the future need for quantum‑level compute and fundamental science research.
EVIDENCE
He recounts the 2015-16 strategic pivot to build products for HCL itself, the emphasis on hiring smarter engineers rather than just coders, and the call for quantum research to change the computational paradigm, noting the scarcity of physics talent [266-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kalyan’s strategic pivot toward proprietary IP, hiring of systems-thinking engineers and calls for quantum-level compute are documented in the panel overview [S2].
MAJOR DISCUSSION POINT
IP creation & talent shift
AGREED WITH
Ankit Bose
DISAGREED WITH
Ankit Bose
Argument 3
Cross‑border partnerships and asset acquisitions expand capabilities
EXPLANATION
Kalyan points out that strategic acquisitions and international collaborations broaden HCL’s technology base, giving it access to advanced database patents and vector engines, while new joint ventures like India Chips Limited aim to develop domestic semiconductor fabs.
EVIDENCE
He mentions acquiring Actian’s Ingres patent, a vector engine from CWI, and later references the upcoming JV with Foxconn for a 16/32 nm fab called India Chips Limited, slated to be operational in five years [98-103][441-447].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Acquisitions such as Actian’s Ingres patent and collaborations with international research groups are cited as ways to broaden HCL’s technology base [S2].
MAJOR DISCUSSION POINT
Strategic acquisitions & JV
AGREED WITH
Ganesh Ramakrishnan
Argument 4
Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute
EXPLANATION
Kalyan highlights the creation of a joint venture with Foxconn, India Chips Limited, to build a 16 nm/32 nm fab, describing it as patient capital and stressing that the effort must begin now rather than wait five years to secure an indigenous GPU supply chain.
EVIDENCE
He announces the JV with Foxconn for a 16 nm and 32 nm fab, calls it patient capital, and emphasizes the need to start immediately instead of waiting five years [441-448][449-452].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Details on the lengthy chip-fabrication process and the need for early, patient capital investment in a domestic fab are provided [S23][S24].
MAJOR DISCUSSION POINT
Domestic semiconductor capability
AGREED WITH
Sunil Gupta
DISAGREED WITH
Sunil Gupta
Ganesh Ramakrishnan
4 arguments, 157 words per minute, 1464 words, 558 seconds
Argument 1
Interoperability across layers enables participation and multilingual data products
EXPLANATION
Ganesh emphasizes that ensuring interoperability at every stack layer encourages broad participation, allows alternative solutions, and supports multilingual data products tailored to India’s linguistic diversity. Interoperability also facilitates data contracts and cataloguing.
EVIDENCE
He describes how interoperability offers alternatives, supports participation from academia and industry, and cites the need for data products, catalogs and contracts to enable multilingual AI, referencing the PM’s statement on data ownership and the concept of data products [151-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel stresses layered interoperability, data contracts and catalogs as enablers for broad ecosystem participation and multilingual AI products [S2][S19].
MAJOR DISCUSSION POINT
Layered interoperability
AGREED WITH
Kalyan Kumar
Argument 2
Co‑design, academic‑industry consortia, and open collaboration accelerate sovereign models
EXPLANATION
Ganesh outlines the value of co‑design and consortium‑based collaboration, where multiple academic institutions and industry partners jointly develop foundation models that reflect Indian contexts. Open collaboration reduces silos and speeds up innovation.
EVIDENCE
He cites the nine-institution consortium at IIM Indore, the co-design of a healthcare book, the development of a 17-million-parameter model with multilingual experts, and the recent MOU with a US heritage foundation, illustrating how collaborative ecosystems produce sovereign models [193-214].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A nine-institution consortium at IIM Indore developing a multilingual foundation model exemplifies co-design and open collaboration [S2].
MAJOR DISCUSSION POINT
Consortium‑driven co‑design
AGREED WITH
Kalyan Kumar
Argument 3
Provenance, transparency, and education are essential for alignment
EXPLANATION
Ganesh argues that AI systems must retain provenance at every stage—from data aggregation to model performance—so that users can trace origins and trust outcomes. Education on these practices is crucial to prevent AI from becoming a black‑box product.
EVIDENCE
He references his early work on machine-assisted translation, stresses the need for provenance in data, metadata, tokenisation and observability, and highlights the role of education in making models “glass boxes” [371-376].
MAJOR DISCUSSION POINT
Provenance & transparency
AGREED WITH
Sunil Gupta, Ankit Bose, Brandon Mello
Argument 4
Existing GPU hardware is not optimized for AI workloads; India must develop specialized designs and better model‑serving engines to achieve true sovereignty
EXPLANATION
Ganesh argues that the current generation of GPUs was never built for the demands of large AI models and that a redesign of the compute stack—including signal‑integrated‑gate (SIG) designs and more efficient serving architectures—is essential for a sovereign AI ecosystem. He calls for inquisitiveness across the entire stack to identify and implement these innovations.
EVIDENCE
He notes that “GPUs were never designed for building these models” and asks whether a SIG design could be used to create better model-serving engines, emphasizing the need for new hardware and stack innovations [426-434].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists note that current GPUs were never built for large AI models and call for specialized hardware and serving architectures, echoing broader concerns about GPU suitability [S2][S16].
MAJOR DISCUSSION POINT
Hardware and stack innovation for sovereign AI
Brandon Mello
4 arguments, 147 words per minute, 1171 words, 475 seconds
Argument 1
Data trust, compliance friction and need for data contracts
EXPLANATION
Brandon notes that organizations face heavy red‑tape and compliance hurdles when handling data, which hampers AI adoption. Establishing clear data contracts and trust frameworks is necessary to move projects forward.
EVIDENCE
He points to “data and trust and compliance friction” as a major barrier, describing how departmental silos and lack of unified data policies create friction for AI initiatives [129-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The data-sovereignty discussion highlights heavy compliance friction and the necessity of clear data contracts to enable AI projects [S19].
MAJOR DISCUSSION POINT
Data compliance friction
DISAGREED WITH
Sunil Gupta
Argument 2
ROI invisibility, lack of executive sponsorship, and departmental silos block AI projects
EXPLANATION
Brandon identifies three systemic obstacles: difficulty in quantifying ROI, absence of senior executive champions, and fragmented departmental responsibilities, all of which cause AI pilots to stall or fail.
EVIDENCE
He provides statistics that a third of CFOs cannot quantify ROI and only one in ten have tools to measure it, explains how lack of executive sponsorship leads to stalled projects, and illustrates how siloed procurement and IT functions delay implementations [119-143].
MAJOR DISCUSSION POINT
Execution barriers
DISAGREED WITH
Sunil Gupta
Argument 3
Uncertainty in human‑AI interaction requires careful governance
EXPLANATION
Brandon reflects on the nascent understanding of how humans should interact with increasingly capable AI systems, arguing that rapid advances outpace governance frameworks and demand cautious policy development.
EVIDENCE
He shares a personal anecdote about a breakfast conversation where the question of human-AI interaction was raised, noting that the speed of AI evolution makes it hard to establish stable governance or interaction norms [358-366].
MAJOR DISCUSSION POINT
Human‑AI interaction uncertainty
AGREED WITH
Sunil Gupta, Ankit Bose, Ganesh Ramakrishnan
Argument 4
Adoption depends on solving real everyday problems, consolidating fragmented tools, and supporting India’s multilingual landscape
EXPLANATION
Brandon stresses that AI solutions must address concrete use‑cases that improve daily life, reduce the overhead of juggling many separate applications, and be able to operate across the country’s many languages and dialects. These factors together drive meaningful uptake of AI technologies.
EVIDENCE
He cites the need for a real use case that changes people’s lives, the importance of consolidating dozens of tools into a single workflow to avoid context loss, and the challenge of handling India’s linguistic diversity for LLMs [337-351].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voice-first solutions and multilingual capabilities are identified as key drivers for practical AI adoption across India’s diverse linguistic environment [S20][S22].
MAJOR DISCUSSION POINT
Practical adoption drivers
AGREED WITH
Sunil Gupta, Ganesh Ramakrishnan
Ankit Bose
4 arguments, 173 words per minute, 1450 words, 501 seconds
Argument 1
Massive developer up‑skilling program and curriculum overhaul
EXPLANATION
Ankit outlines NASSCOM’s initiative to rapidly up‑skill a large cohort of developers and revamp academic curricula to include AI specializations, aiming to create a workforce ready for sovereign AI development.
EVIDENCE
He states that NASSCOM targets 150,000 developers across India within six months, and mentions collaboration with MIT and the education sector to rewrite BTech, MTech, MCA and BCA curricula with added specializations [312-319].
MAJOR DISCUSSION POINT
Developer up‑skilling
AGREED WITH
Kalyan Kumar
DISAGREED WITH
Kalyan Kumar
Argument 2
Real‑world use cases, tool consolidation, and language support drive adoption
EXPLANATION
Ankit argues that AI adoption will accelerate when solutions address concrete everyday problems, integrate disparate tools into a unified workflow, and support India’s multilingual environment.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of addressing concrete use-cases, unifying tools and supporting multiple languages is reinforced by sector analyses on voice technology and multilingual AI [S20][S22].
MAJOR DISCUSSION POINT
Adoption drivers
Argument 3
A QR‑code feedback mechanism is being used to crowdsource input on sovereign AI policy and foster collaborative participation
EXPLANATION
Ankit announces that a QR code displayed on the digital background gives participants access to the sovereign AI research report and a channel to submit feedback, thereby inviting the broader ecosystem to shape the policy roadmap.
EVIDENCE
He asks the audience to scan the QR code, view the report, and provide inputs, emphasizing collaborative engagement through this digital tool [435-439].
MAJOR DISCUSSION POINT
Collaborative policy feedback via QR code
Argument 4
AI alignment and safety are essential long‑term challenges to prevent hazardous outcomes
EXPLANATION
Ankit warns that without proper alignment, AI could become a dangerous tool, urging the community to mitigate risks and ensure AI remains beneficial to humanity over the coming decades.
EVIDENCE
He states that “the challenge as a humanity we have to mitigate… we don’t align… AI could be hazardous” and asks about the challenge of non-aligned AI in a 30-second closing question [356-357][365-366].
MAJOR DISCUSSION POINT
AI safety & alignment
AGREED WITH
Sunil Gupta, Ganesh Ramakrishnan, Brandon Mello
Speaker 1
1 argument, 55 words per minute, 330 words, 359 seconds
Argument 1
Launch of the Sovereign AI research report and MOU with Amrita Vishwa Vidyapeetham signal policy commitment
EXPLANATION
Speaker 1 announces the publication of a sovereign AI research report and the signing of an MOU with Amrita University, indicating institutional and governmental support for India’s sovereign AI agenda.
EVIDENCE
The opening remarks invite Amrita representatives for the report launch [1], and later the speaker thanks the panel and announces the MOU with Amrita Vishwa Vidyapeetham [454-455].
MAJOR DISCUSSION POINT
Policy & institutional backing
Professor Ganesh Ramakrishnan
2 arguments, 166 words per minute, 292 words, 105 seconds
Argument 1
Data ownership rights and a data‑product framework are foundational for sovereign AI
EXPLANATION
Ganesh stresses that data should remain under the control of its creator and that establishing data products, catalogs and contracts is essential to enable trustworthy, interoperable AI systems that respect sovereignty.
EVIDENCE
He references the Prime Minister’s statement “jiska data uska adhikar” (whoever creates the data holds the rights to it) and explains that a data catalog, data product and data contract are required for interoperability and to protect data owners’ rights [171-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The data-sovereignty panel emphasizes that data creators retain ownership and that data products, catalogs and contracts are essential for trustworthy AI systems [S19].
MAJOR DISCUSSION POINT
Data ownership & productization
AGREED WITH
Sunil Gupta, Ganesh Ramakrishnan, Kalyan Kumar
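The catalog/product/contract triad Ganesh describes amounts to attaching a machine-readable agreement to each dataset. A minimal sketch of what such a contract record might capture, with entirely hypothetical field names and values chosen only to illustrate the idea:

```python
from dataclasses import dataclass, field

@dataclass
class DataContract:
    """Illustrative data contract: creator keeps ownership, uses are explicit."""
    dataset_id: str
    owner: str                   # "jiska data uska adhikar": rights stay with the creator
    schema: dict                 # field name -> type, i.e. the catalog entry
    permitted_uses: list = field(default_factory=list)
    retention_days: int = 365

# A hypothetical contract for a public-health dataset.
contract = DataContract(
    dataset_id="asha-visits-2024",
    owner="state-health-dept",
    schema={"patient_abha_id": "string", "visit_date": "date"},
    permitted_uses=["model-training", "aggregate-analytics"],
)
print(contract.owner)
```

Because the contract travels with the data product, a consuming system can check `permitted_uses` before training on it, which is the interoperability guarantee the argument points to.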
Argument 2
Multilingual AI models constitute a strategic moat for India
EXPLANATION
Ganesh argues that supporting India’s linguistic diversity through AI models that cover many languages is a key competitive advantage and essential for broad adoption across the country.
EVIDENCE
He describes the foundation model that covers 22 Indian languages, using mixture-of-experts where experts for Hindi, Marathi, Telugu and other languages are shared, highlighting the focus on multilingual capability [190-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses point to India’s linguistic diversity as a unique competitive advantage, with multilingual models seen as a strategic moat for the nation’s AI ecosystem [S20][S22].
MAJOR DISCUSSION POINT
Multilingual capability as a moat
AGREED WITH
Sunil Gupta, Ganesh Ramakrishnan, Brandon Mello
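The mixture-of-experts design referenced above routes each input to a small set of language experts chosen by a gating function. A toy sketch of top-k gating (the expert names and gate scores are invented; in a real model the gate is a trained network over the token representation, not hand-supplied numbers):

```python
import math

def softmax(scores):
    """Normalize raw gate scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(gate_scores, experts, top_k=1):
    """Return the top-k (expert, weight) pairs by gate weight."""
    weights = softmax(gate_scores)
    ranked = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    return [(experts[i], weights[i]) for i in ranked[:top_k]]

experts = ["hindi_expert", "marathi_expert", "telugu_expert"]
# Hypothetical gate scores for a Hindi input token.
print(route([2.1, 0.3, -0.5], experts, top_k=2))  # hindi_expert gets the largest weight
```

Because only the selected experts run per token, a 22-language model pays roughly the compute cost of top-k experts rather than all of them, which is why the architecture scales to India's linguistic diversity.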
Agreements
Agreement Points
Compute scarcity and the need for a shared, domestically sourced GPU capacity
Speakers: Sunil Gupta, Kalyan Kumar
GPU scarcity hampers AI scaling Shared, government‑empanelled compute pool as a commodity Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute
Both speakers highlight that insufficient GPU compute is the main bottleneck for scaling AI in India. Sunil describes the lack of specialised GPUs and the gap between the 10,000 chips currently deployed and the millions needed for nationwide use cases, and outlines the government-empanelled shared compute pool as a way to turn compute into a commodity [54-60][75-78][224-236]. Kalyan points to the joint venture with Foxconn to build a 16/32 nm fab, calling it patient capital and urging immediate action rather than waiting five years [441-447][449-452].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s AI roadmap estimates a need for at least 128,000 GPUs for domestic workloads, highlighting the urgency of expanding domestic GPU capacity and shared infrastructure models [S45]. The lack of GPUs and data centres in the Global South drives proposals for shared-infrastructure business models [S46]. Policy discussions warn that concentration of compute resources creates an AI divide, reinforcing the need for sovereign compute capacity [S43]. Recent government commitments include building gigawatt-scale, AI-ready data centres to address this scarcity [S60].
AI must remain a human‑centric tool that serves the masses and stays aligned
Speakers: Sunil Gupta, Ankit Bose, Ganesh Ramakrishnan, Brandon Mello
AI must serve the masses, remain a tool, not become the product AI alignment and safety are essential long‑term challenges to prevent hazardous outcomes Provenance, transparency, and education are essential for alignment Uncertainty in human‑AI interaction requires careful governance
All four speakers stress that AI should be deployed to solve real problems for people, with strong human oversight and alignment. Sunil warns against building “toys” and insists AI stay a tool for the masses [380-386]. Ankit flags the long-term safety challenge of misaligned AI [356-357][365-366]. Ganesh calls for provenance, transparency and education to keep models as “glass boxes” [371-376]. Brandon notes the rapid pace of AI outstripping governance and the need for careful human-AI interaction frameworks [358-366].
POLICY CONTEXT (KNOWLEDGE BASE)
The UN AI Security Council emphasizes inclusive, human-centric AI capacity-building to ensure AI serves societal needs and aligns with ethical standards [S42].
Data sovereignty through localisation, ownership rights and a robust data stack
Speakers: Sunil Gupta, Ganesh Ramakrishnan, Kalyan Kumar
Data localisation is a cornerstone of digital sovereignty Data ownership rights and a data‑product framework are foundational for sovereign AI Building a unified data stack with vector DBs and edge inference
The speakers converge on the need for strong data governance to underpin sovereign AI. Sunil points out that India creates and consumes 20% of global data but only 3% is hosted domestically, highlighting a strategic vulnerability [254-257]. Ganesh stresses that data creators must retain rights and that data catalogs, products and contracts are essential for interoperability [171-176]. Kalyan describes HCL’s work on a unified data platform, vector databases and edge-ready inference engines as critical infrastructure [96-108].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on data for development stress the importance of localisation, ownership rights and robust data stacks to protect national interests in the digital economy [S36].
Interoperability and collaborative ecosystems accelerate sovereign AI development
Speakers: Ganesh Ramakrishnan, Kalyan Kumar
Interoperability across layers enables participation and multilingual data products Co‑design, academic‑industry consortia, and open collaboration accelerate sovereign models Cross‑border partnerships and asset acquisitions expand capabilities
Both speakers argue that open, interoperable frameworks and collaborative consortia are key to building sovereign AI. Ganesh highlights how interoperability encourages participation, alternatives and multilingual data products, and describes a nine-institution consortium that co-designs models for Indian contexts [151-156][193-214]. Kalyan adds that strategic acquisitions (e.g., Actian, vector engine) and partnerships broaden HCL’s technology base, supporting interoperable solutions [98-103].
POLICY CONTEXT (KNOWLEDGE BASE)
European AI governance proposals call for interoperable frameworks that enable collaborative ecosystems while respecting national sovereignty [S58]. The Hiroshima AI process similarly highlights interoperability as a cornerstone for coordinated AI policy [S59].
Multilingual and voice‑first AI as a strategic moat and adoption driver
Speakers: Sunil Gupta, Ganesh Ramakrishnan, Brandon Mello
Voice‑based AI will be the dominant interaction mode in India, leveraging linguistic diversity
Multilingual AI models constitute a strategic moat for India
Adoption depends on solving real everyday problems, consolidating fragmented tools, and supporting India’s multilingual landscape
All three emphasize that catering to India’s linguistic diversity is essential for AI adoption and competitive advantage. Sunil notes that India’s AI will be voice-based, serving users on feature phones in native languages [244-248]. Ganesh describes a foundation model covering 22 Indian languages using mixture-of-experts, positioning multilingual capability as a moat [190-212]. Brandon stresses that addressing multilingual needs and consolidating tools are critical for real-world adoption [347-351].
POLICY CONTEXT (KNOWLEDGE BASE)
Government policy frameworks are urging multilingual internet and AI adoption, treating language accessibility as a core right and encouraging voice-first solutions [S48]. India’s Frontier AI commitments specifically target strengthening multilingual and contextual AI evaluations [S49].
Massive up‑skilling of developers and shift toward building proprietary IP
Speakers: Ankit Bose, Kalyan Kumar
Massive developer up‑skilling program and curriculum overhaul
Shift from services to building proprietary IP; need smarter engineers and quantum research
Both speakers underline the importance of developing a skilled AI workforce and moving from service-based models to proprietary product development. Ankit outlines NASSCOM’s target to train 150,000 developers and revamp curricula with new specialisations within six months [312-319]. Kalyan discusses HCL’s strategic pivot to build its own IP, hiring smarter engineers with systems thinking, and the future need for quantum research to change the compute paradigm [266-304].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s sovereign AI strategy advocates moving from services-centric models to building proprietary IP and up-skilling engineers, coupled with investment in foundational research such as quantum computing [S39]. National AI capacity-building initiatives call for large-scale developer training, learning support and infrastructure to create a skilled AI workforce [S40][S41][S61]. The shift also reflects a broader debate on open-source versus proprietary software models in AI development [S38].
Government should fund the first inference cycle to accelerate AI adoption
Speakers: Sunil Gupta, Ankit Bose
Government should fund the first inference cycle to accelerate AI adoption
Sunil calls for public funding not only for model training but also for the initial inference phase, arguing this will jump-start adoption and allow revenue-generating use cases to become self-sustaining. He describes the existing shared compute facility and the need for government support for the first inference cycle [224-236]. Ankit’s questioning of how compute can become a shared commodity reflects alignment with this view [215-222].
POLICY CONTEXT (KNOWLEDGE BASE)
The recent keynote outlined a three-pronged plan where the government will underwrite the initial inference layer of sovereign AI models, leveraging newly built AI-ready data centres to accelerate adoption [S60]. Complementary policy notes stress the need for secure, resilient infrastructure to support early-stage AI services [S61].
Similar Viewpoints
Both identify compute scarcity as a critical barrier and propose domestic solutions—shared compute pools and indigenous chip manufacturing—to achieve sovereign AI capacity [54-60][75-78][224-236][441-447][449-452].
Speakers: Sunil Gupta, Kalyan Kumar
GPU scarcity hampers AI scaling
Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute
Both stress that a robust, interoperable data infrastructure—including vector databases and edge inference—is essential for scalable sovereign AI ecosystems [151-156][96-108].
Speakers: Ganesh Ramakrishnan, Kalyan Kumar
Interoperability across layers enables participation and multilingual data products
Building a unified data stack with vector DBs and edge inference
Both argue that AI adoption hinges on delivering concrete, everyday solutions that benefit the broader population, rather than building AI for its own sake [380-386][337-351].
Speakers: Brandon Mello, Sunil Gupta
Adoption depends on solving real everyday problems, consolidating fragmented tools, and supporting India’s multilingual landscape
AI must serve the masses, remain a tool, not become the product
Both highlight that addressing real use‑cases, reducing tool fragmentation, and supporting multilingual contexts are key to scaling AI uptake [312-319][337-351].
Speakers: Ankit Bose, Brandon Mello
Real‑world use cases, tool consolidation, and language support drive adoption
Adoption depends on solving real everyday problems, consolidating fragmented tools, and supporting India’s multilingual landscape
Unexpected Consensus
Agreement between an academic researcher and an infrastructure provider on voice‑first, multilingual AI as India’s strategic moat
Speakers: Sunil Gupta, Ganesh Ramakrishnan
Voice‑based AI will be the dominant interaction mode in India, leveraging linguistic diversity
Multilingual AI models constitute a strategic moat for India
Despite coming from different sectors (Sunil from a sovereign cloud provider, Ganesh from an academic-industry consortium), they both view voice-centric, multilingual AI as a unique competitive advantage for India, a convergence not obvious given their distinct focus areas [244-248][190-212].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy statements highlight the strategic importance of multilingual, voice-first AI for national inclusion, with regulatory guidance encouraging such collaborations [S48][S49].
Overall Assessment

The panel shows strong consensus on four pillars: (1) addressing compute scarcity through shared facilities and domestic chip manufacturing; (2) ensuring AI remains human‑centric, aligned and serves the masses; (3) establishing robust data sovereignty via localisation, ownership rights and interoperable data stacks; (4) fostering collaborative, interoperable ecosystems and multilingual/voice‑first AI to drive adoption. There is also broad agreement on the need for massive up‑skilling and government support for the inference phase.

High consensus across industry, academia and startups, indicating a unified direction for India’s sovereign AI strategy. This alignment suggests that policy measures, public‑private partnerships and capacity‑building initiatives are likely to receive coordinated support, accelerating progress toward a sovereign, inclusive AI ecosystem.

Differences
Different Viewpoints
How to resolve India’s compute scarcity for sovereign AI
Speakers: Sunil Gupta, Kalyan Kumar
GPU scarcity hampers AI scaling
Shared, government‑empanelled compute pool as a commodity
Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute
Shift from services to building proprietary IP; need smarter engineers and quantum research
Sunil argues that the main bottleneck is a lack of GPUs and proposes expanding a shared, government-empanelled pool of GPU capacity as a commodity to be subsidised and scaled ([54-60][75-78][224-236]). Kalyan counters that relying on external GPU allocations is insufficient; instead, India must develop its own semiconductor fab and invest in quantum-level research and smarter engineering talent to create indigenous compute resources ([441-448][449-452][286-290][298-304]).
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses quantify the GPU shortfall and propose shared-infrastructure models and government-led data centre construction as solutions to sovereign compute scarcity [S45][S46][S60].
What constitutes the primary barrier to AI adoption in India
Speakers: Sunil Gupta, Brandon Mello
GPU scarcity hampers AI scaling
ROI invisibility, lack of executive sponsorship, and departmental silos block AI projects
Data trust, compliance friction and need for data contracts
Sunil maintains that insufficient compute resources are the chief obstacle to scaling AI for the masses ([54-60][75-78]). Brandon, however, emphasizes organisational and financial hurdles: CFOs cannot quantify ROI, executives do not champion projects, and data-compliance red tape stalls implementation ([119-124][129-136]). Thus they disagree on whether the bottleneck is technical (compute) or organisational/financial.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders identify the concentration of compute resources and energy constraints as key barriers, warning that limited access deepens the AI divide and hampers adoption [S43][S44][S57].
Approach to building AI talent and skills in the country
Speakers: Ankit Bose, Kalyan Kumar
Massive developer up‑skilling program and curriculum overhaul
Shift from services to building proprietary IP; need smarter engineers and quantum research
Ankit outlines a rapid up-skilling drive targeting 150,000 developers and a curriculum rewrite to create AI-ready graduates ([312-319]). Kalyan argues that merely training large numbers of coders is insufficient; the focus should be on hiring fewer but smarter engineers with systems-thinking and research capabilities, plus long-term quantum research ([286-290][298-304]). Both seek a skilled workforce but differ on scale and depth of training.
POLICY CONTEXT (KNOWLEDGE BASE)
National AI capacity-building programs emphasize up-skilling developers through training infrastructure, industry partnerships and education initiatives to create a robust talent pipeline [S40][S41][S61].
Unexpected Differences
Hardware solution: more GPUs vs new AI‑specific hardware designs
Speakers: Sunil Gupta, Ganesh Ramakrishnan
GPU scarcity hampers AI scaling
Existing GPU hardware is not optimized for AI workloads; need specialized designs and better model‑serving engines
While both acknowledge a hardware bottleneck, Sunil proposes scaling the existing GPU pool through government-empanelled procurement ([54-60][75-78][224-236]), whereas Ganesh argues that the current generation of GPUs was never built for AI and that India must develop new hardware architectures such as ASIC designs and improved serving engines ([426-434]). The divergence between scaling existing GPUs and redesigning hardware was not anticipated given their shared focus on sovereignty.
POLICY CONTEXT (KNOWLEDGE BASE)
India’s compute roadmap quantifies the GPU deficit, suggesting immediate scaling of GPU inventories as the primary hardware response, while longer-term research into AI-specific chips remains under discussion [S45].
Emphasis on quantum and physics research versus immediate compute expansion
Speakers: Kalyan Kumar, Sunil Gupta
Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute
Shift from services to building proprietary IP; need smarter engineers and quantum research
GPU scarcity hampers AI scaling
Kalyan highlights long-term quantum research and fundamental physics as essential for future compute paradigms ([298-304]), a focus that is surprisingly absent from Sunil’s more immediate, short-term strategy of expanding GPU capacity and government-funded inference ([54-60][75-78][224-236]). The contrast between a futuristic research agenda and a near-term deployment plan was not evident earlier in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs note a growing focus on quantum computing as an immediate strategic priority, potentially diverting attention from near-term compute expansion needed for AI deployment [S39][S50].
Overall Assessment

The panel shows strong consensus on the need for a sovereign AI ecosystem that benefits the Indian population and the Global South. However, significant disagreements arise around the primary bottleneck (compute vs organisational/financial barriers), the optimal path to resolve compute scarcity (shared GPU pools vs indigenous chip fabrication and quantum research), and the preferred model for talent development (mass up‑skilling vs elite, research‑oriented engineering). These divergences reflect differing strategic horizons—short‑term deployment versus long‑term technological independence.

Moderate to high. While the overarching goal is shared, the contrasting views on technical, policy and capacity‑building approaches could lead to fragmented initiatives unless a coordinated roadmap reconciles these perspectives. The implications are that without alignment, efforts may duplicate, compete for resources, or stall, potentially slowing India’s progress toward AI sovereignty.

Partial Agreements
All four speakers agree that India must build a sovereign AI ecosystem that serves the nation and the Global South. However, Sunil pushes a market‑driven shared GPU pool funded by the government ([224-236]), Kalyan stresses building indigenous chip fabs and quantum research ([441-452]), Ganesh calls for layered interoperability and consortium‑driven co‑design ([151-176]), while Ankit focuses on massive developer up‑skilling and curriculum changes ([312-319]). The shared goal is clear, but the pathways diverge.
Speakers: Sunil Gupta, Kalyan Kumar, Ganesh Ramakrishnan, Ankit Bose
Shared, government‑empanelled compute pool as a commodity
Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute
Interoperability across layers enables participation and multilingual data products
Massive developer up‑skilling program and curriculum overhaul
Both agree that AI must reach the masses, but Sunil sees compute as the decisive factor, whereas Brandon sees organisational and financial structures as the key to unlocking adoption. Their end‑goal aligns, but the means differ.
Speakers: Sunil Gupta, Brandon Mello
GPU scarcity hampers AI scaling
ROI invisibility, lack of executive sponsorship, and departmental silos block AI projects
Takeaways
Key takeaways
Compute scarcity, especially GPUs, is the primary bottleneck for scaling sovereign AI in India; a shared, government‑empanelled compute pool is being built to address this.
A robust data infrastructure (unified data stacks, vector databases, edge inference, and interoperable layers) is essential for AI adoption and for enabling multilingual, trustworthy data products.
Talent development must shift from service‑oriented coding to building proprietary IP, with large‑scale up‑skilling programs, curriculum redesign, and a focus on smarter engineers and quantum research.
Adoption barriers are dominated by ROI invisibility, lack of executive sponsorship, departmental silos, and language/data‑trust challenges; real‑world, high‑impact use cases and tool consolidation are needed to overcome them.
Collaboration across academia, industry, and government (consortia, co‑design, cross‑border partnerships) accelerates model development and reduces duplication.
AI must remain a human‑centric tool that serves the masses; provenance, transparency, and alignment education are critical to avoid AI becoming a product itself.
Policy momentum is evident through the launch of the Sovereign AI research report, an MOU with Amrita Vishwa Vidyapeetham, and government commitments to expand shared GPU capacity.
Resolutions and action items
Release of the Sovereign AI research report by Amrita Vishwa Vidyapeetham.
Signing of an MOU between NASSCOM and Amrita Vishwa Vidyapeetham.
Government to continue empanelling GPU providers and add 20,000 GPUs to the shared compute pool; target to scale to 50,000–60,000 GPUs in the near term.
NASSCOM to launch a developer up‑skilling initiative targeting 150,000 developers over the next six months and to revise technical curricula (B.Tech, M.Tech, MCA, etc.) with specialisation tracks.
Participants invited to scan the QR code on the digital background to provide feedback on the report and contribute to the forthcoming sovereign AI policy document.
HCL announced a joint venture with Foxconn (India Chips Limited) to develop 16/32 nm fabs, signaling a long‑term hardware roadmap.
Call for government funding to support the first inferencing cycle of sovereign models to enable early adoption and revenue generation.
Unresolved issues
How to scale GPU availability from the current tens of thousands to the millions required for nationwide inferencing across sectors.
Establishing standardized data contracts, monetization models, and governance frameworks for multilingual data products.
Developing reliable ROI measurement tools and processes for AI projects within enterprises.
Defining concrete timelines and responsibilities for the transition from shared compute for training to shared compute for inferencing.
Addressing quantum‑compute research needs and integrating such capabilities into the sovereign AI stack.
Ensuring consistent AI alignment and safety across diverse applications without a unified regulatory mechanism.
Suggested compromises
Adopt a shared‑commodity model for compute where multiple providers contribute GPUs and compete on price, while the government sets baseline access terms.
Balance scaling‑up (larger centralized clusters) with scaling‑out (edge and distributed inference) to meet both latency and coverage requirements.
Provide a choice of infrastructure providers (hyperscalers, Yotta, Sify, etc.) to give enterprises flexibility while maintaining sovereignty.
Co‑design approach that combines academic research, industry implementation, and government policy to align incentives and share risk.
Thought Provoking Comments
The biggest problem for taking AI to the masses in India is how to make compute available in an abundant way – we need millions of GPUs, not just a few thousand, and the government is creating a shared compute facility to address this.
He identified compute scarcity as the fundamental bottleneck for sovereign AI, shifting focus from software and data to the physical infrastructure needed at scale.
Set the agenda for the rest of the discussion on hardware infrastructure; prompted other panelists (e.g., Kalyan and Ganesh) to talk about data platforms and interoperability as complementary layers, and led to a deeper dive into government‑led shared GPU pools.
Speaker: Sunil Gupta
Beyond infrastructure and models, the data layer is critical – we need centralized yet edge‑distributed vector databases, data contracts, and catalogs to ensure high‑quality, usable data for AI at scale.
He introduced the often‑overlooked importance of a robust data stack, linking it to edge inference and the need for data‑centric products.
Expanded the conversation from raw compute to data architecture, prompting Ganesh to discuss interoperability and prompting the group to consider how data sovereignty ties into the overall AI ecosystem.
Speaker: Kalyan Kumar
95 % of AI pilots never reach production because of ROI invisibility, data‑trust/compliance friction, and the champion problem – lack of executive sponsorship and clear metrics stall adoption.
He highlighted systemic organizational barriers rather than technical ones, reframing the challenge as a business‑process issue.
Shifted the tone from technology‑centric to adoption‑centric, leading Ankit and others to ask about concrete steps for scaling AI in enterprises and prompting discussion on executive buy‑in and measurement.
Speaker: Brandon Mello
Interoperability at every layer encourages participation, offers alternatives, and enables scaling out; it also requires data products, catalogs, and contracts to make data a tradable asset.
He introduced a unifying principle—interoperability—that connects compute, data, and model layers, and linked it to economic models of data ownership.
Created a turning point where the panel moved from isolated challenges to a holistic framework; other speakers referenced interoperability when discussing shared compute and data platforms.
Speaker: Ganesh Ramakrishnan
Collaboration and co‑design are essential – we built a multilingual speech‑to‑text model using mixture‑of‑experts where experts for Hindi and Marathi were shared, and Telugu leveraged collaboration between Hindi and Tamil experts.
He provided a concrete example of how interdisciplinary collaboration yields technical breakthroughs, emphasizing empathy between linguists and engineers.
Deepened the technical discussion, illustrating how collaborative design can overcome language diversity challenges; reinforced the earlier call for interoperability and data sharing.
Speaker: Ganesh Ramakrishnan
The government’s empaneling of GPU providers has created a shared compute facility of 38,000 GPUs, with plans to add 20,000 more; we need to extend this model to inferencing and subsidise the first cycle of usage to drive adoption.
He detailed a concrete policy mechanism that turns the abstract compute problem into an actionable program, and highlighted the need for ongoing support beyond training.
Validated Sunil’s earlier compute argument with policy evidence, steering the conversation toward implementation timelines and the role of public funding in scaling AI services.
Speaker: Sunil Gupta
India must pivot from a service‑oriented model to building its own IP; we need smarter engineers, focus on quantum/computational research, and invest in long‑term talent rather than short‑term hiring.
He challenged the status quo of Indian tech firms, urging a strategic shift toward indigenous product development and fundamental science.
Prompted a broader reflection on skill development and long‑term competitiveness, influencing later remarks about developer upskilling and curriculum redesign.
Speaker: Kalyan Kumar
If AI is not aligned, we become products rather than consumers; provenance at every stack level—data aggregation, metadata, tokenisation, observability—is essential to keep humans in control.
He raised the ethical dimension of AI sovereignty, linking technical provenance to societal control and preventing AI from dictating human behavior.
Shifted the final segment toward ethical considerations, reinforcing Sunil’s call for AI to serve the masses and prompting the panel to conclude on responsible AI governance.
Speaker: Ganesh Ramakrishnan
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved the conversation from identifying a single bottleneck (compute) to constructing a comprehensive sovereign AI ecosystem. Sunil Gupta’s emphasis on GPU scarcity anchored the need for hardware, Kalyan Kumar broadened the view with a data‑centric stack, and Brandon Mello reframed the challenge as organizational adoption. Ganesh Ramakrishnan’s calls for interoperability, collaborative model design, and provenance tied these technical and business strands together, while his ethical warning capped the dialogue. Together, these comments redirected the panel from isolated problems to an integrated strategy encompassing infrastructure, data, talent, policy, and responsible use, ultimately defining a roadmap for India’s sovereign AI ambition.

Follow-up Questions
How can India scale GPU compute to the millions needed for nationwide AI deployment?
Sunil highlighted the current shortfall of GPUs and the need for a massive increase to support training and inferencing for billions of users, indicating a critical infrastructure gap.
Speaker: Sunil Gupta
What strategies are needed to build a robust, distributed data stack (including vector databases and edge inference) for sovereign AI?
Kalyan emphasized the importance of data platforms, vector DBs, and edge capabilities as a foundational layer beyond compute, suggesting further development and research.
Speaker: Kalyan Kumar
How can organizations reliably quantify AI ROI to overcome ‘ROI invisibility’ and secure budget approval?
Brandon noted that a third of CFOs cannot measure AI ROI, leading to stalled pilots; developing metrics and tools is a research and practice need.
Speaker: Brandon Mello
What mechanisms can reduce data trust, compliance friction, and departmental silos that impede AI project progress?
He identified bureaucratic hurdles across IT, procurement, etc., as a barrier, calling for streamlined processes and governance models.
Speaker: Brandon Mello
How can companies ensure strong executive sponsorship (the ‘champion problem’) for AI initiatives?
Lack of senior leadership support leads to project abandonment; identifying effective sponsorship models is an open issue.
Speaker: Brandon Mello
What standards and frameworks are required to achieve interoperability across AI layers (data, models, applications) in India?
Ganesh advocated for interoperability to enable participation, alternatives, and scaling, implying need for common protocols and contracts.
Speaker: Ganesh Ramakrishnan
How should data products, catalogs, and contracts be designed to enforce data ownership and sovereignty?
He discussed the concept of data catalogs and contracts as essential for interoperable, sovereign data ecosystems.
Speaker: Ganesh Ramakrishnan
What research is needed in quantum computing and fundamental physics to address future compute demands for AI?
Kalyan suggested that breakthroughs in quantum and physics could reshape compute paradigms, indicating a long‑term research direction.
Speaker: Kalyan Kumar
How can AI systems be kept aligned with human values and maintain provenance throughout the stack?
He warned that without alignment and provenance, AI becomes a product rather than a tool, highlighting a need for governance and transparency research.
Speaker: Ganesh Ramakrishnan
What approaches can improve AI adoption by consolidating tools, supporting multilingual contexts, and ensuring data security?
Brandon identified tool fragmentation, language diversity, and data privacy as adoption barriers that require targeted solutions.
Speaker: Brandon Mello
How should government policies support a shared compute commodity and fund the first cycle of AI inferencing?
Sunil described the shared compute facility model and called for government backing of early inferencing phases to drive adoption.
Speaker: Sunil Gupta
What curriculum changes and developer upskilling programs are needed to prepare 150,000 AI‑ready developers in six months?
Ankit mentioned upcoming initiatives to rewrite technical curricula and massive developer training, indicating a need for educational design research.
Speaker: Ankit Bose
How can Indian software firms transition from service‑oriented models to building proprietary AI IP?
Kalyan highlighted the strategic pivot required for sovereignty, suggesting research into product development, IP creation, and market entry.
Speaker: Kalyan Kumar
What frameworks are needed to manage AI deployment across consumer, enterprise, government, and critical national infrastructure sectors?
He broke AI impact into four domains, each with distinct requirements, calling for sector‑specific governance and technical standards.
Speaker: Kalyan Kumar
How can AI alignment challenges be addressed to prevent AI systems from becoming mere products without human control?
Sunil stressed the importance of human‑in‑the‑loop and avoiding AI as a product, pointing to ethical and control research needs.
Speaker: Sunil Gupta

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote: 2030 – The Rise of an AI Storytelling Civilization | India AI Impact Summit

Session at a glanceSummary, keypoints, and speakers overview

Summary

Speaker 1 opened by describing a shift from the past 15 years of “streaming or consumption”, a period of passive video viewing dominated by platforms that simply repackaged studio content, to a new “era of creation” powered by rapidly advancing video-generation AI that collapses production costs and cycles to hours [3-5][10-13]. He noted that short-form “augmented” stories of 30-60 seconds are evolving into complete narratives as AI tools become more capable [11-13]. The speaker outlined four pillars of an AI storytelling civilization: every creator now functions as a studio, automatic translation makes every language globally accessible, stories are becoming participatory with branching narratives, and cultural myths and folklore can be exported in novel ways [15-23]. He explained that “creative intelligence systems” and generative engines now automate camera work, lighting, and narrative design, providing immersive interfaces for multi-path storytelling [31-35]. Drawing on his gaming background, he contrasted traditional multi-year production cycles with a “live-ops” model where micro-dramas are continuously generated and refined based on real-time audience feedback, enabling rapid episode creation [36-40]. He described micro-dramas as the first truly digital format, compressing character development into seconds and fitting a “cube” model where audio, video, games, live and extended reality blend seamlessly across platforms [44-53]. Turning to India, he highlighted the nation’s demographic energy, linguistic complexity, five-to-six-thousand-year storytelling heritage, and vibrant startup ecosystem as unique strengths for leading this AI-driven storytelling wave [61-66][71-78]. He projected that by 2030 India could host ten million AI-assisted creators, regional studios, real-time cinematic production and immersive cultural platforms, positioning the country at the forefront of the emerging media landscape [79]. 
He warned that moving from finite to infinite content will demand new business models that shift focus from advertising and subscription to commerce integrated with community engagement [94-96]. Emphasizing that civilizations are defined by the stories they tell, he concluded that while AI technology will be built everywhere, the next storytelling civilization could arise in India [97-101].


After his remarks, Speaker 2 thanked him and introduced the next keynote speaker, Naveen Tiwari, founder and CEO of InMobi [102-104]. Naveen Tiwari greeted the audience and congratulated the AI Impact Center for organizing the event [105]. The discussion therefore underscored a vision of AI-enabled, culturally rich, continuously generated storytelling and positioned India as a potential global leader in this emerging domain [79][97-101].


Keypoints


Shift from passive consumption to AI-driven creation – The speaker contrasts the last 15-20 years of “passive consumption” streaming with today’s emerging “era of creation,” driven by short-form video and rapidly improving video-generation models that collapse production costs and cycles to hours, positioning India as a “creation civilization.” [3-5][10-14]


Four pillars of an AI storytelling civilization – He outlines that every creator now functions as a studio, language barriers dissolve through auto-translation, stories become participatory with branching narratives powered by conversational AI, and cultural heritage can be exported at scale. [15-23]


New production paradigm enabled by generative AI – The discussion highlights “creative intelligence systems,” autonomous agents handling everything from camera work to lighting, and the rise of “micro-dramas” that allow live-ops-style, iterative content creation where new episodes are generated on-the-fly based on audience feedback. [32-40]


India’s strategic advantages to lead the AI storytelling wave – Demographic energy, linguistic diversity, millennia of storytelling tradition, and a vibrant startup ecosystem are presented as the foundation for India to become a global hub for AI-assisted creators and immersive cultural platforms by 2030. [61-78]


Re-imagining business models for an infinite-content future – The speaker warns that traditional finite content production and revenue models (advertising, subscriptions) will be unsustainable in a generative-AI world, urging a shift toward commerce-centric, community-driven ecosystems. [91-96]


Overall purpose/goal


The speaker aims to persuade the audience that AI is about to overhaul the storytelling industry, creating a new “AI storytelling civilization,” and that India possesses the cultural, linguistic, and entrepreneurial assets to lead this transformation and shape global media by 2030.


Overall tone


The tone is largely visionary and enthusiastic, celebrating technological breakthroughs and India’s potential. Mid-presentation it becomes more assertive and proud when citing national strengths, and toward the end it adopts a cautionary, pragmatic tone, highlighting sustainability challenges and the need to redesign business models. The shift from optimism to a sober call for re-thinking reflects a nuanced, forward-looking discourse.


Speakers

Speaker 1


– Role/Title: Keynote speaker (delivered the address on AI storytelling) [S6][S7][S8]


– Area of Expertise: AI storytelling, media-entertainment, future of content creation


Speaker 2


– Role/Title: Moderator / chair (introduced the next keynote) [S1][S2][S3]


– Area of Expertise: (not specified)


Naveen Tiwari


– Role/Title: Founder and CEO, InMobi [S4][S5]


– Area of Expertise: AI-driven storytelling, mobile platforms


Additional speakers:


– None


Full session report: Comprehensive analysis and detailed insights

Speaker 1 opened by recalling a recent panel and framing his talk as a view into a new media paradigm. He argued that the past fifteen-to-twenty years have been dominated by a “streaming or consumption” era, characterised by passive viewing of repackaged studio content; even the rise of services such as Netflix and the shift of prime-time to on-demand platforms did little to change the format [1-9]. He then described the emerging “era of creation”, already visible in short-form video, where advances in video-generation AI are collapsing production costs and shrinking cycles to a matter of hours [10-14]. He noted that short-form video operates as an “augmented” arena of 30- to 60-second stories that combine music and background elements, moving from incomplete “stories” toward fully-fledged narratives as generative models improve [11-13].


He outlined four pillars of an AI storytelling civilization. First, every creator now functions as a self-contained studio, able to generate output simply by speaking to an AI system [15-18]. Second, automatic translation removes language barriers, allowing a single piece of content to be understood globally in any language [19-22]. Third, stories are becoming participatory, with branching narratives enabled by conversational AI embedded in characters-a development already seen in gaming and now spilling into broader media [23-27]. Fourth, India’s rich mythological and folklore heritage can be exported at scale through these new tools, turning cultural assets into global storytelling commodities [28-31][44-45].


The technical infrastructure supporting this civilization consists of “creative intelligence systems” that integrate generative engines with autonomous agents handling camera work, lighting, and narrative design, while layered narrative engines and immersive interfaces enable multi-path experiences [31-35].


Drawing on his gaming experience, he contrasted the conventional 2-3-year game development plus a seven-year live-ops cycle with a new “live-ops” model for micro-dramas, where a handful of shots are prepared in advance and subsequent episodes are generated in real-time based on audience feedback and conversion data [36-40]. He described micro-dramas as the first truly digital format, compressing character development into seconds and resonating with a generation that consumes content non-linearly [44-49]. He summed up the future formula as storytelling that is premium, spectacular, and experiential [44-49].


Projecting to 2030, he envisioned a “cube” model of media consumption in which audio, video, games, live events and extended reality blend seamlessly; platforms will be embedded inside stories, enabling fluid transitions across media types [50-53]. He warned that the shift from a finite content universe-measured in thousands of films, TV channels and print publications-to an infinite, generative-AI-driven one will render traditional advertising or subscription revenue models unsustainable [91-95]. Instead, he called for a re-imagined ecosystem where commerce is tightly integrated with community engagement, allowing sustainable monetisation of endless content streams [92-96].


He highlighted India’s strategic advantages. A youthful, energetic demographic provides a large pool of potential creators [61-63]. The country’s linguistic complexity-hundreds of languages and dialects-offers a built-in testbed for multilingual AI translation [64-73]. He illustrated this complexity with an anecdote about an American delegation, noting that even India’s AI models have been trained on the country’s chaotic, nuanced linguistic landscape [68-73]. Millennia of storytelling tradition, from mythological epics to folk tales, supplies a deep cultural reservoir to be digitised and exported [74-75]. Finally, a vibrant startup ecosystem and entrepreneurial culture furnish the organisational capacity to build and scale the required AI tools [76-78].


In his concluding remarks, he reiterated that civilizations are defined not by the tools they wield but by the stories they tell; while AI technology will be built worldwide, the next storytelling civilization could arise in India, where the nation not only scales AI but also narrates its own future [97-101].


After Speaker 1’s extensive address, Speaker 2 thanked him for his remarks and introduced the next keynote, naming Mr Naveen Tiwari, founder and CEO of InMobi, as the forthcoming speaker [102-104]. Naveen Tiwari then greeted the audience, expressed congratulations to the AI Impact Center for organising the event, and signalled the transition to his own presentation [105-106].


Session transcript: Complete transcript of the session
Speaker 1

I think even in the panel before, there was a conversation around that. And I’m going to, over the next couple of slides, just take you through why we see this as a window. Last, about, say, 15 years has really been, you know, the streaming or the consumption era as we know it. It was predominantly, you know, passive consumption. About 20 years back, a bunch of companies like Netflix, etc., they got content from studios, from broadcasters. And prime time basically became my time. There was search, there was recommendations, etc. But format hasn’t changed. Because seven years later, when they did their first original show, they pretty much did what HBO was already doing. So we haven’t really seen much change in format for almost now 20 years.

Cut to now. We’re seeing the era of creation, and that had already commenced in the short video space. But the short video space was what I call as an augmented space, 30, 60 second stories, augmented with music, augmented with background. You can call them stories, but they were not really complete as it were. I believe with the manner in which video generation models are developing, creation costs are collapsing, production cycles are now within hours, and AI will make India a creation civilization. Why do I say so? So when I look at, you know, what I term as the four pillars of an AI storytelling civilization, it starts with the fact that every creator is already a studio.

That is the reality now. I mean, you could literally be able to speak, and there is a component around, you know, an output that will happen as far as. Second, every language is global. We don’t need to. We have platforms where there is already auto translation, and this will continue to progress even more. Even the ones that you wear, I will be speaking in English, you will listen to me in French, you can reply back in Spanish and I’ll still comprehend. Our stories are becoming participatory. We’re beginning to see branching of narratives. I’ve been on forums like this for the last 15 years. Many a times spoken about terms like, which are more often than used, abused, which is convergence.

But today it is truly beginning to happen because we have conversational AI within characters. It’s already happened within gaming and it’s beginning to happen in this. And lastly, to me, culture is truly an opportunity of export in a very different way. I think from our stories, whether they’re mythological or folklore, we have an ability to extend these. And why do I say so? Because I think the technology stack… which is getting laid out, will make this a possibility. From production pipelines, we’re getting into what we call as creative intelligence systems. We already have generative engines. There is an autonomous sort of creative cycle and agents which are doing this from camera work to the kind of manner of lighting, etc.

We have the layer of narrative engines. You will have more interfaces for immersiveness, etc. And that leads to multi-path and components which we’ve seen parts of. But I think where we are heading is I come from a gaming world as well. We used to take two, three years, make a game, and then do seven years of live ops. I believe with categories like micro dramas, etc., for the first time, we are in for a live op scenario where I will make 10 shots. They’re ready, ingested. By the time you’re watching the fourth, you know, basis that feedback, basis that conversion sort of consumption pattern, your 11th and your 12th and your 13th episode is getting created.

So a vision which was typically one to million, that of a director, scriptwriter, etc., is now heading for a million to million kind of interaction and interface. So why 2030? I believe the camera will no longer be the primary tool of storytelling. Intelligence will. And why do I say that? You see, we are already seeing parts of this. You know, one of the first most significant, in fact, I believe in 20 years, micro dramas are the first truly digital format that have emerged. When we made films, a filmmaker could take four minutes, five minutes in setting up a character. When I did original shows, you know, which could be extending to four hours, I could take eight hours to show this person as an alcoholic, very elderly person, sort of, you know, whatever.

And by the seventh minute, you’re finding that he’s a genius as well. Here, in 15 seconds, they are unabashed, they don’t care, they will put, they’ll show you a face, it’s a billionaire playboy and there will be a little thing coming around and in one stroke, that’s the kind of, so it’s a format. And it is a format of narration which a generation who hasn’t seen things in horizontal is embracing at a pace which is unprecedented. We’re seeing that and I feel where we are heading is a world which I like to describe just from a visual point of view like a cube of sorts. Up till now, we’ve all consumed content as, you know, audio, video, game, live, extended reality.

We’re going to kind of move from one to the other seamlessly and that is why it is exciting. From multi platforms to stories which will no longer live on platforms but platforms will live inside these stories. And why do I say that? You know, because as I said, the creator explosion has already commenced. This is what typically it looked like. If I really wanted to be very, very generous, let’s say for every author who’s been living and has been published, for every lyricist who has written a song, for every singer, for every director, every filmmaker in any form, anyone from a literary sense, if I were to, out of eight billion people right now, my sense is that number or whatever would probably be about 10 million. But if I took the entire creator economy of what’s happening, I’ll probably jump a little more. We are heading from that world to potentially billions of sort of creators across this entire space, and that is the reason why I feel the next Disney, our own YRF or Marvel, may not be a company but it could very well be a community which is coming. And therefore, you know, let me… no talk is incomplete on media entertainment without, you know, some perspective on our most visual form, which is still the most sort of, you know, expansive form of theaters.

I don’t believe, you know, that we are going to see the end of that. What we are going to see is more eventized immersive screenings, more mixed reality environments, and hopefully interactive participation. So the formula one is storytelling, premium, spectacular, and experiential. And with this in mind, I feel now coming to the last two slides of why India can lead. In an era where cultural depth becomes a comparative advantage. It’s important, and I really hope that, you know, this is something, you know, we’re a nation with so much of history. The first is the fact that we have demographic energy. We all know this, right? We have linguistic complexity. I often say, you know, we were hosting the American delegation three days back, and there were 120 of us.

We were the first group of them. And I said to them, I said, you know, this is probably the most somber American delegation I have seen across industries. And I asked them, I said, you know, why is it? I mean, is it because of whatever is happening in the traffic, et cetera? Or is somebody actually sort of, you know, concerned or has been using the T word with you all? So I said, you know, as far as I know, intelligence is still pretty much duty free. But having said that, the point I was making to them was I said, listen, even our models here in India have been trained on chaos. And complexity of language and nuances have a huge opportunity.

I think we have massive, you know, cultural depth. Five, six thousand years of storytelling experience. And finally, we’re a nation of startups, you know. We’re an entrepreneurial ecosystem across every sector. And that is one of the reasons why I feel this is certainly a category where India can lead and show the world what’s possible. So with this in mind, my sense is by 2030, I believe 10 million AI-assisted creators, regional studios, real-time cinematic production, immersive, devotional, cultural platforms, and leading to mainstream sort of events. There’s a thought I’ll leave for you. With all of this, it looks very good. But there is, you know, nothing looks sort of just hunky-dory, as it were. And the thought is we’re also moving from a world of finite.

If I look at content today, in whichever platform it is, right? We make 1,500 films. Hollywood makes 250 films. We have 900 TV channels. We produce so many hours across the world. It is this. We have so many radio networks. We have 2,500 print publications. It’s all finite. In a gen AI leading to… an AGI world, we will move from finite to infinite. Now, no industry is in a position to, if I’m doing cement and I have 30 million tons of cement capacity, I know India is growing in a particular way, I add a couple of million tons and that’s fine. But if I go in that category and start adding 30 million tons all over again, you know, it’s not sustainable.

So there is a thought, there have to be reimagination of, you know, business models. And to me, the biggest reimagination of this is no more linkages to just advertising and the traditional subscription, etc. This ecosystem is made for commerce. We need to get and engage into that world from community to commerce is an integral part of leading this way. Which is why I feel that civilizations are not defined by the tools they use. They are defined by the stories they tell. Thank you. And artificial intelligence will be built everywhere, but the next storytelling civilization… can rise right here. By 2030, let it be said that India just did not scale AI. We narrated it. Thank you very much.

Speaker 2

Thank you, sir, for your wonderful remarks. For our next keynote, we have Mr. Naveen Tiwari, founder and CEO, InMobi. We welcome you to the stage, sir.

Naveen Tiwari

So good to see everybody here. Thank you. Firstly, I must congratulate the event organizers, the AI Impact Center.

Related Resources: Knowledge base sources related to the discussion topics (13)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Advances in video‑generation AI are collapsing production costs and shrinking cycles to a matter of hours, heralding an emerging “era of creation” visible in short‑form video.”

The knowledge base notes that AI is transforming content creation by reducing costs and increasing efficiency, supporting the claim that production cycles can be dramatically shortened [S54].

Confirmed (high)

“Automatic translation removes language barriers, allowing a single piece of content to be understood globally in any language.”

Simultaneous translation of content is highlighted as a way to boost discoverability across languages, confirming the role of translation in reaching global audiences [S63].

Confirmed (high)

“Stories are becoming participatory, with branching narratives enabled by conversational AI embedded in characters—a development already seen in gaming and now spilling into broader media.”

The source explicitly states that conversational AI within characters has already happened in gaming and is beginning to appear in other media [S9].

Additional Context (medium)

“The past fifteen‑to‑twenty years have been dominated by a “streaming or consumption” era, characterised by passive viewing of repackaged studio content despite the rise of services such as Netflix and the shift of prime‑time to on‑demand platforms.”

A decade-old discussion notes that streaming services were just emerging ten years ago, providing background on how the streaming era has developed over the last decade [S38].

Additional Context (low)

“Short‑form video operates as an “augmented” arena of 30‑ to 60‑second stories that combine music and background elements.”

Industry observations describe short-form video as the dominant trend, though they do not specify the 30-60 second format; this adds nuance about the overall prominence of short videos [S55].

Additional Context (medium)

“Every creator now functions as a self‑contained studio, able to generate output simply by speaking to an AI system.”

Reports on AI-enabled content creation highlight reduced costs and new business models, indicating a move toward more autonomous creation, but do not specifically mention voice-driven generation [S54].

Additional Context (medium)

“India’s rich mythological and folklore heritage can be exported at scale through these new tools, turning cultural assets into global storytelling commodities.”

India’s strategic focus on AI development is noted, suggesting a national capacity to leverage AI for cultural export, though the source does not detail mythological content specifically [S57].

External Sources (69)
S1
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S2
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S3
S4
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Naveen Tiwari: Founder and CEO of Mobi (mentioned as “in Mobi” in the transcript). Area of expertise not detailed in th…
S5
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — No disagreements identified in the transcript This appears to be a keynote presentation rather than an interactive disc…
S6
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S7
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S8
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S9
https://dig.watch/event/india-ai-impact-summit-2026/keynote_-2030-the-rise-of-an-ai-storytelling-civilization-_-india-ai-impact-summit — And by the seventh minute, you’re finding that he’s a genius as well. Here, in 15 seconds, they are unabashed, they don’…
S10
WS #155 Digital Leap- Enhancing Connectivity in the Offline World — Speaker 1: Yeah, I think two things, I’d make two simple points. One is, of course, that as far as service provision …
S11
We are the AI Generation — Doreen Bogdan Martin: Thank you. Good morning and welcome to Geneva for the AI for Good Global Summit 2025. I want to th…
S12
High Level Session 3: AI & the Future of Work — Joseph Gordon-Levitt: I get to go next. Cool. Thank you. Thanks for having me. Well, I’ll talk about, you asked, what ar…
S13
Open Forum #47 Demystifying WSis+20 — Kurtis Lindqvist: We often talk about the IGF and the WSIS 20-plus and what has been achieved. Hidden in that I think we…
S14
Conversation: 01 — Artificial intelligence
S15
Conversational AI in low income & resource settings | IGF 2023 — Olabisi Ogunbase: Okay. Greetings to everybody. I work in a general hospital, a maternal and child center, and we see chi…
S16
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S17
Panel Discussion: 01 — in building this healthy and fair ecosystem to boost the innovation on artificial intelligence.
S18
Empowering the Ethical Supply Chain: steps to responsible sourcing and circular economy (Lenovo) — A global agreement involving key players and setting loose principles is proposed as a means to achieve a better circula…
S19
Business Engagement Session — David Okpatuma: So, ladies and gentlemen, permit me to introduce firstly Dr. Mohamed Alsourf, the founder and presiden…
S20
Masterclass#1 — Gratitude was expressed towards both presenters and participants for engaging in the dialogue. The speaker expressed gr…
S21
[Parliamentary Session Closing] Closing remarks — 6. Appreciation and Closing Remarks: Audience: Thank you. Aileen from the Cuban Parliament. I underline everything that was mentioned …
S22
Keynote-Rajesh Subramanian — This shifts the narrative from passive adoption to active creation and responsibility. It challenges organizations to mo…
S23
AI That Empowers Safety Growth and Social Inclusion in Action — Tammsaar outlines four pillars that member states prioritize: trustworthy AI, closing capacity gaps, interoperability, a…
S24
Global AI Policy Framework: International Cooperation and Historical Perspectives — So I think that today’s problem, as well as the IP policies, that how to facilitate those creation based on the IP mater…
S25
Workshop 7: Generative AI and Freedom of Expression: mutual reinforcement or forced exclusion? — Alexandra Borchardt: Yeah, thank you so much, Giulia. And thanks everyone for being in the audience. We have almost full…
S26
Panel Discussion AI and the Creative Economy — And that has huge implication in terms of these models and whether or not they enhance creativity or whether or not they…
S27
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S28
Building the AI-Ready Future From Infrastructure to Skills — The emphasis on open ecosystems, linguistic diversity, human oversight, and broad adoption provides a framework balancin…
S29
AI Infrastructure and Future Development: A Panel Discussion — And of course, Sora, because now we have multimodal. So the product platform is multidimensional. And then finally, the …
S30
Balancing act: advocacy with big tech in restrictive regimes | IGF 2023 — The current focus on data mining and using data for profit is seen as impeding the effectiveness of human rights policie…
S31
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 2 formally welcomes the next presenter, thanks the current speaker for his remarks, and introduces Mr. Naveen Ti…
S32
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — No consensus analysis possible – single speaker presentation format with only procedural interjections from event modera…
S33
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — The conversation maintained a consistently pragmatic and candid tone throughout. Both panelists were refreshingly honest…
S34
Masterclass#1 — Gratitude was expressed towards both presenters and participants for engaging in the dialogue. The speaker expressed gr…
S35
Presentation of outcomes to the plenary — The speaker expresses both astonishment and gratitude for the unexpected success of their initiative, much surpassing in…
S36
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — And by the seventh minute, you’re finding that he’s a genius as well. Here, in 15 seconds, they are unabashed, they don’…
S37
Keynote-Rajesh Subramanian — This shifts the narrative from passive adoption to active creation and responsibility. It challenges organizations to mo…
S38
A Decade Later-Content creation, access to open information | IGF 2023 WS #108 — Efforts to improve the internet for efficient content creation and consumption have been ongoing. Users now demand more …
S39
AI That Empowers Safety Growth and Social Inclusion in Action — Tammsaar outlines four pillars that member states prioritize: trustworthy AI, closing capacity gaps, interoperability, a…
S40
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Creativity, cognition, and culture are key pillars that define human beings and will remain crucial differentiators
S41
Workshop 7: Generative AI and Freedom of Expression: mutual reinforcement or forced exclusion? — Alexandra Borchardt: Yeah, thank you so much, Giulia. And thanks everyone for being in the audience. We have almost full…
S42
Lights, Camera, Deception? Sides of Generative AI | IGF 2023 WS #57 — Generative AI advancements brought opportunities for more applications and use, so are there applications that can suppo…
S43
Panel Discussion AI and the Creative Economy — And that has huge implication in terms of these models and whether or not they enhance creativity or whether or not they…
S44
https://dig.watch/event/india-ai-impact-summit-2026/keynote_-2030-the-rise-of-an-ai-storytelling-civilization-_-india-ai-impact-summit — But today it is truly beginning to happen because we have conversational AI within characters. It’s already happened wit…
S45
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S46
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S47
AI Infrastructure and Future Development: A Panel Discussion — And of course, Sora, because now we have multimodal. So the product platform is multidimensional. And then finally, the …
S48
WIPO Conference on the Global Digital Content Market — As in other content industries, the delivery, production and consumption models for publishing are changing. Emerging tec…
S49
Global South Solidarities for Global Digital Governance | IGF 2023 Networking Session #110 — This initiative aims to reduce inequalities and promote sustainable development in line with SDG 10: Reduced Inequalitie…
S50
Briefing on the Global Digital Compact- GDC (UNCTAD) — In this analysis, several important points are raised by the speakers. The first speaker argues that the power of corpor…
S51
New Technologies and the Impact on Human Rights — Pablo Hinojosa: Please welcome to the stage the moderators Allison Gilwald, ICT Research Africa, Civil Society Africa, a…
S52
Net neutrality &amp; Covid-19: trends in LAC and Asia Pacific | IGF 2023 — Most of the internet traffic is heavily concentrated on streaming services.
S53
IGF leadership panel explores future of digital governance — As the Internet Governance Forum (IGF) prepares to mark its 20th anniversary, members of the IGF Leadership Panel gathered…
S54
Sticking with Start-ups / DAVOS 2025 — Bhatnagar explains how AI is transforming content creation and enabling new business models. He highlights the reduced c…
S55
Geneva Engage Awards 2024: Digital Diplomacy in the Age of Artificial Intelligence — Video content, especially short-form video, is becoming the dominant trend, while longer formats build trust and engagem…
S56
New Gemini AI tool animates photos into short video clips — Google has rolled out a new feature for Gemini AI that transforms still photos into short, animated eight-second videos wi…
S57
Secure Finance Risk-Based AI Policy for the Banking Sector — Ajay Kumar Chaudhary opened by highlighting India’s opportunity to lead in AI development while managing associated risk…
S58
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Garg also referenced observations about the contrast between current AI systems requiring gigawatts of power and hum…
S59
The intellectual property saga: The age of AI-generated content | Part 1 — The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2 The intellectual property saga: app…
S60
Challenging the status quo of AI security — Babak Hodjat: Thank you very much, Sounil. Yeah, we came out here for two reasons, as cognizant, one, to get people invo…
S61
Harnessing Collective AI for India’s Social and Economic Development — Moderator: sci-fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S62
Importance of Professional standards for AI development and testing — Moira De Roche: test review the output to make sure it’s relevant, to make sure that it hasn’t gone off on its own littl…
S63
La découvrabilité des contenus numérique: un facteur de diversité culturelle et de développement (Délégation Wallonie-Bruxelles, Belgian Mission to the UN in Geneva) — Simultaneous translation of some content, if well used, can boost the discoverability of minor languages content.
S64
Leaders TalkX: Looking Ahead: Emerging tech for building sustainable futures — Dr. Sharon Weinblum: Thank you very much for giving me this opportunity to speak on such an important subject and to be a…
S65
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Ryan Budish: I’m coming from Boston, Massachusetts, where it is quite late at night. So I’m going to try not to speak to…
S66
Harnessing digital public goods and fostering digital cooperation: a multi-disciplinary contribution to WSIS+20 review — Mary-Ruth Mendel: Hello, everybody. Now you can hear me. This is the topic I’d like to address tonight. And it’s about. …
S67
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — The shift in conversation has coincided with advancements in Artificial Intelligence
S68
Open Forum #64 Women in Games and Apps: Innovation, Creativity and IP — Julio Raffo: you, Christine. Thank you, Richard. Thank you to the IGF for organizing this very important session and e…
S69
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — Rafik Hadfi: So thank you, Professor Park, for the invitation, and thank you everyone for being here at this early time o…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
9 arguments · 157 words per minute · 1771 words · 673 seconds
Argument 1
Streaming era was passive consumption with unchanged formats for two decades (Speaker 1)
EXPLANATION
The speaker describes the past fifteen to twenty years of streaming as a period dominated by passive consumption, where the underlying content formats have remained largely static despite the rise of platforms like Netflix. He notes that even original productions followed existing models rather than innovating new formats.
EVIDENCE
He points to the timeline of the streaming era, stating that it has been about fifteen years of consumption-focused streaming [3], that the consumption was predominantly passive [4], and that early services simply sourced content from studios and broadcasters without altering the format [5-7]. He further observes that when original shows emerged, they merely replicated what traditional networks like HBO were already doing, indicating a lack of format change over roughly two decades [8].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote notes that the last 15-20 years of streaming were dominated by passive consumption and that content formats have not fundamentally changed in two decades [S4].
MAJOR DISCUSSION POINT
Passive consumption vs. format innovation
Argument 2
Short‑video “augmented stories” introduced a new creation paradigm (Speaker 1)
EXPLANATION
The speaker highlights the emergence of short‑video platforms as the first wave of a creator‑centric era, where 30‑60 second clips are enriched with music and visual backgrounds, creating an “augmented” storytelling format. This marks a shift from merely consuming content to actively producing it.
EVIDENCE
He transitions to the present by noting that creation is now evident in the short-video space [10], describing these clips as “augmented” stories that combine brief video with music and background elements [11], and acknowledging that while they are called stories, they were not yet fully complete formats [12]. He links this development to the rapid advancement of video generation models that are collapsing creation costs [13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker describes short-video clips as 30-60 second “augmented” stories with music and background elements, a point highlighted in the keynote transcript [S4].
MAJOR DISCUSSION POINT
Rise of augmented short‑video creation
Argument 3
Every creator now functions as a studio, enabled by instant AI output (Speaker 1)
EXPLANATION
The speaker argues that AI tools have democratized production, allowing individual creators to act as full‑fledged studios that can generate content instantly from simple inputs such as spoken words. This reduces the need for traditional production infrastructure.
EVIDENCE
He introduces the concept of a four-pillar AI storytelling civilization, beginning with the claim that “every creator is already a studio” [15] and confirming that this is the current reality [16]. He illustrates the immediacy of AI output by stating that one could simply speak and an output would be generated automatically [17].
MAJOR DISCUSSION POINT
Creator‑as‑studio model
Argument 4
Automatic multilingual translation makes stories globally accessible and participatory (Speaker 1)
EXPLANATION
The speaker emphasizes that AI‑driven automatic translation enables content to be understood across languages, turning stories into participatory experiences that transcend linguistic borders. This facilitates a truly global audience for locally produced narratives.
EVIDENCE
He notes that “every language is global” and that platforms already provide auto-translation, which will continue to improve [18-20]. He gives a concrete example where a speaker could speak English, listeners hear French, and replies could be in Spanish, all while maintaining comprehension [21]. He then links this capability to the participatory nature of modern stories [22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote emphasizes that “every language is global” through advanced auto-translation, enabling real-time multilingual communication [S4].
MAJOR DISCUSSION POINT
AI‑powered multilingual storytelling
Argument 5
Conversational AI allows branching narratives and interactive characters (Speaker 1)
EXPLANATION
The speaker claims that conversational AI embedded within characters enables dynamic, branching storylines, turning narratives into interactive experiences. He cites early adoption in gaming as evidence that this technology is moving into broader media.
EVIDENCE
He mentions the emergence of narrative branching [23] and references his long-standing involvement in related forums [24-26]. He explains that conversational AI within characters is now making these branching narratives possible [26], and that similar capabilities have already been realized in gaming and are beginning to appear in other formats [27-28].
MAJOR DISCUSSION POINT
Interactive, AI‑driven narratives
Argument 6
Demographic energy, linguistic complexity, and deep cultural heritage give India a comparative edge (Speaker 1)
EXPLANATION
The speaker outlines India’s strategic advantages: a large, youthful population; a multitude of languages; and a rich, millennia‑old storytelling tradition. These factors together create a fertile ground for AI‑enabled storytelling to flourish.
EVIDENCE
He cites demographic energy as a key factor [61-62] and highlights India’s linguistic complexity, illustrated by an anecdote about an American delegation encountering the country’s language diversity [63-66]. He further argues that Indian AI models have been trained on this linguistic chaos, providing a huge opportunity [71-74], and references five to six thousand years of cultural storytelling depth [74-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s demographic vigor, linguistic diversity, and millennia-old storytelling tradition are cited as strategic advantages in the keynote [S4].
MAJOR DISCUSSION POINT
India’s cultural and demographic strengths
Argument 7
A vibrant startup and entrepreneurial ecosystem supports rapid AI‑driven media innovation (Speaker 1)
EXPLANATION
The speaker points to India’s robust startup culture and entrepreneurial ecosystem across sectors as a catalyst that can accelerate AI‑powered media creation and help the country lead globally in this domain.
EVIDENCE
He notes that India is “a nation of startups” and describes an entrepreneurial ecosystem that spans every sector [75-76], concluding that these conditions make India well-positioned to lead in AI storytelling [77-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote highlights India as “a nation of startups” with an entrepreneurial ecosystem across sectors, supporting AI-powered media creation [S4].
MAJOR DISCUSSION POINT
Startup ecosystem as an enabler
Argument 8
Move from finite content production to infinite AI‑generated content demands new revenue models beyond ads and subscriptions (Speaker 1)
EXPLANATION
The speaker warns that the shift to AI‑generated, potentially infinite content will render traditional finite production models unsustainable, necessitating new business models that go beyond advertising and subscription revenue streams.
EVIDENCE
He contrasts the current finite media landscape (citing counts of films, TV channels, radio networks, and print publications) with an envisioned infinite AI-driven world, stating that we will move “from finite to infinite” [91]. He argues that existing capacity-based models (e.g., cement production) do not apply to this new paradigm and that business models must be reimagined [92-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker contrasts today’s finite media output with an envisioned infinite AI-generated world and calls for new business models beyond advertising and subscriptions; both the keynote and a separate remark discuss this shift [S4][S9].
MAJOR DISCUSSION POINT
Need for new monetisation models
Argument 9
Integrating commerce with community engagement is essential for a sustainable ecosystem (Speaker 1)
EXPLANATION
The speaker asserts that future media ecosystems should intertwine commerce directly with community interaction, moving away from reliance solely on advertising or subscription models, to create a sustainable economic loop.
EVIDENCE
He describes the biggest reimagination as having “no more linkages to just advertising and the traditional subscription” and positioning the ecosystem as “made for commerce” [95], followed by a call to make community-to-commerce engagement an integral part of leading the way [96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to move away from pure advertising and subscription models toward community-driven commerce is mentioned in the discussion on reimagining business models [S9].
MAJOR DISCUSSION POINT
Community‑driven commerce model
Speaker 2
2 arguments · 110 words per minute · 28 words · 15 seconds
Argument 1
Expression of gratitude to the previous speaker and acknowledgment of his remarks (Speaker 2)
EXPLANATION
Speaker 2 thanks the preceding presenter for his remarks, signalling a transition to the next segment of the event.
EVIDENCE
He says, “Thank you, sir, for your wonderful remarks” [102].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The transcript records Speaker 2 thanking the prior presenter for his remarks before the next segment [S4].
MAJOR DISCUSSION POINT
Acknowledgement of prior speaker
AGREED WITH
Naveen Tiwari
Argument 2
Introduction of Naveen Tiwari as the next keynote speaker (Speaker 2)
EXPLANATION
Speaker 2 announces that Naveen Tiwari, founder and CEO of InMobi, will deliver the next keynote address.
EVIDENCE
He introduces the next keynote by stating, “For our next keynote, we have Mr. Naveen Tiwari, founder and CEO, InMobi” and then welcomes him to the stage [103-104].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Speaker 2 formally introduces Naveen Tiwari as the upcoming keynote in the event agenda [S4].
MAJOR DISCUSSION POINT
Keynote handover
AGREED WITH
Naveen Tiwari
Naveen Tiwari
1 argument · 90 words per minute · 19 words · 12 seconds
Argument 1
Naveen Tiwari congratulates the AI Impact Center organizers (Naveen Tiwari)
EXPLANATION
Naveen Tiwari thanks the audience and offers congratulations to the AI Impact Center for organizing the event.
EVIDENCE
He opens by saying, “So good to see everybody here. Thank you. Firstly, I must congratulate the event organizers, the AI Impact Center” [105-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Naveen Tiwari opens his address by thanking the audience and congratulating the AI Impact Center organizers [S4].
MAJOR DISCUSSION POINT
Opening remarks and gratitude
AGREED WITH
Speaker 2
Agreements
Agreement Points
Both Speaker 2 and Naveen Tiwari express gratitude and commendation toward the event organizers and the preceding speaker
Speakers: Speaker 2, Naveen Tiwari
Expression of gratitude to the previous speaker and acknowledgment of his remarks (Speaker 2)
Naveen Tiwari congratulates the AI Impact Center organizers (Naveen Tiwari)
Speaker 2 thanks the previous presenter for his remarks [102] and introduces the next keynote, while Naveen Tiwari opens his address by thanking the audience and congratulating the AI Impact Center organizers [105-106]; both share a common appreciative stance toward the event and its organizers.
POLICY CONTEXT (KNOWLEDGE BASE)
Expressing gratitude to organizers and prior speakers is a customary protocol in conference settings, reflected in other summit sessions such as the Masterclass#1 where presenters thanked fellow participants [S34], and reinforced by the event-level etiquette described for the AI Impact Summit transition [S31].
Speaker 2 formally hands over the stage to Naveen Tiwari, aligning with the event’s structured transition
Speakers: Speaker 2, Naveen Tiwari
Introduction of Naveen Tiwari as the next keynote speaker (Speaker 2)
Naveen Tiwari acknowledges the audience and begins his remarks (Naveen Tiwari)
Speaker 2 announces the next keynote and welcomes Naveen Tiwari to the stage [103-104]; Naveen Tiwari then takes the stage and begins his address [105-106], reflecting a coordinated hand-over.
POLICY CONTEXT (KNOWLEDGE BASE)
The formal handover follows the structured transition guidelines of the AI Impact Summit, as documented in the keynote handover description where Speaker 2 introduces Naveen Tiwari and passes the stage in an orderly manner [S31].
Similar Viewpoints
Both speakers convey a positive, appreciative tone toward the event organizers and the preceding presenter, underscoring a shared view that the gathering is valuable and worthy of commendation [102][105-106].
Speakers: Speaker 2, Naveen Tiwari
Expression of gratitude to the previous speaker and acknowledgment of his remarks (Speaker 2)
Naveen Tiwari congratulates the AI Impact Center organizers (Naveen Tiwari)
Unexpected Consensus
Both a moderator (Speaker 2) and the next keynote (Naveen Tiwari) independently emphasize gratitude toward the AI Impact Center, despite their distinct roles
Speakers: Speaker 2, Naveen Tiwari
Expression of gratitude to the previous speaker and acknowledgment of his remarks (Speaker 2)
Naveen Tiwari congratulates the AI Impact Center organizers (Naveen Tiwari)
It is noteworthy that gratitude is voiced not only by the moderator but also by the incoming speaker, reinforcing a unified appreciative stance across different participants in the session [102][105-106].
Overall Assessment

The transcript shows limited substantive overlap among the speakers; the primary points of agreement are procedural and courteous—both Speaker 2 and Naveen Tiwari express appreciation for the event and its organizers, and they cooperate in a smooth hand‑over. No deep thematic consensus on AI‑driven storytelling or economic models emerges because those arguments are presented solely by Speaker 1.

Low substantive consensus; agreement is confined to procedural courtesy, indicating that while the participants share a common respect for the forum, divergent substantive positions are not evident in the excerpt.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The three speakers largely operate in separate conversational roles: Speaker 1 delivers an extensive vision on AI‑driven storytelling and India’s strategic advantages; Speaker 2 merely thanks Speaker 1 and introduces the next keynote; Naveen Tiwari offers brief opening remarks and congratulates the organizers. No opposing viewpoints or contested claims are presented across the transcript, indicating an overall consensus or at least an absence of direct conflict.

Very low – the discussion is sequential rather than argumentative, so there are no substantive disagreements that could affect the topics under consideration.

Takeaways
Key takeaways
The media landscape is moving from a passive consumption era to an AI-enabled creation era, with short-form “augmented stories” as a precursor.
AI is turning every creator into a self-contained studio, enabling instant, low-cost production and global multilingual reach.
Conversational AI and narrative engines are introducing participatory, branching storylines and interactive characters.
India possesses strategic advantages (large, youthful demographics, linguistic diversity, deep cultural heritage, and a vibrant startup ecosystem) to lead the emerging AI storytelling civilization.
The shift to potentially infinite AI-generated content requires reimagining business models beyond traditional advertising and subscription, integrating commerce with community engagement.
Future storytelling will be less about camera tools and more about intelligence, with immersive, multi-platform experiences where platforms exist inside stories.
Resolutions and action items
None identified
Unresolved issues
Specific strategies for developing sustainable revenue models in an infinite-content environment were discussed but not defined.
How to operationalize the proposed AI storytelling ecosystem (technology stack, talent development, regulatory considerations) remains unanswered.
Details on scaling AI-assisted creators to the projected 10 million by 2030 were not addressed.
Mechanisms for integrating commerce seamlessly into AI-driven narrative experiences were mentioned as necessary but not concretized.
Suggested compromises
None identified
Thought Provoking Comments
We haven’t really seen much change in format for almost now 20 years… we are seeing the era of creation, and that had already commenced in the short‑video space. With video‑generation models collapsing creation costs, AI will make India a ‘creation civilization’.
This reframes the media landscape from a passive consumption era to an active creation era, highlighting a structural shift driven by generative AI rather than incremental platform changes.
It serves as the opening pivot of the talk, moving the conversation from a historical recap of streaming to a forward‑looking thesis. It sets up the need to discuss new production pipelines, creator economics, and the cultural implications that follow.
Speaker: Speaker 1
Every creator is already a studio; every language is global thanks to auto‑translation; our stories are becoming participatory with branching narratives; conversational AI within characters is now possible.
These four pillars synthesize technological trends into a concrete framework, linking AI capabilities (translation, generative engines, narrative engines) with a democratized creative ecosystem.
The enumeration of pillars creates a roadmap for the rest of the monologue. It prompts the speaker to dive deeper into production pipelines, live‑ops, and the cultural export potential, thereby expanding the scope of the discussion.
Speaker: Speaker 1
The camera will no longer be the primary tool of storytelling – intelligence will.
It directly challenges the long‑standing assumption that visual capture devices are the core of media creation, proposing AI as the new creative instrument.
This bold claim shifts the tone from descriptive to speculative, leading to the discussion of “micro dramas”, real‑time cinematic production, and the eventual need to rethink business models.
Speaker: Speaker 1
We are moving from a world of finite content to infinite content with generative AI; consequently, traditional revenue models tied to advertising or subscription are unsustainable – we must re‑imagine commerce‑centric ecosystems.
It raises a systemic economic concern, connecting the technical possibility of endless content to the practical limits of existing monetisation structures.
This comment acts as a turning point that transitions the talk from technological optimism to a critical examination of sustainability, prompting the audience to consider how the industry must adapt.
Speaker: Speaker 1
India can lead because of demographic energy, linguistic complexity, five‑to‑six‑thousand‑year storytelling heritage, and a vibrant startup ecosystem.
It contextualises the global AI‑storytelling narrative within a specific national advantage, turning a generic forecast into a strategic call‑to‑action for Indian stakeholders.
By tying the earlier abstract ideas to concrete Indian strengths, the speaker galvanises regional interest and frames the concluding vision of a “next storytelling civilization” rooted in India.
Speaker: Speaker 1
Micro dramas are the first truly digital format – 15‑second stories that can convey complex characters instantly, a format embraced by a generation that doesn’t read horizontally.
Identifies a nascent content format that exemplifies the shift to bite‑sized, AI‑generated narratives, illustrating the practical manifestation of the earlier “creation civilization” concept.
Provides a tangible example that bridges the high‑level theory of AI‑driven storytelling with an observable market trend, reinforcing the argument that new formats are already emerging.
Speaker: Speaker 1
Overall Assessment

The discussion is driven almost entirely by Speaker 1’s expansive vision of an AI‑powered storytelling civilization. Key comments act as structural anchors—first redefining the media era, then outlining four foundational pillars, followed by a provocative claim that intelligence, not the camera, will become the primary storytelling tool. Subsequent remarks on infinite content and the need for new commerce models introduce critical tension, while the emphasis on India’s unique cultural and entrepreneurial assets grounds the vision in a concrete geopolitical context. Together, these moments steer the monologue from historical recap to speculative future, from technological optimism to economic realism, and finally to a rallying call for Indian leadership, shaping the overall narrative arc of the session.

Follow-up Questions
How will the collapse of creation costs and hour‑long production cycles driven by video generation models reshape the creator economy?
Assessing cost dynamics is essential to predict how AI will enable billions of creators and alter traditional production pipelines.
Speaker: Speaker 1
What technical and linguistic challenges must be solved to achieve seamless, real‑time multilingual translation and cross‑language interaction in AI‑generated stories?
Global participation hinges on reliable auto‑translation; research is needed on accuracy, latency, and cultural nuance.
Speaker: Speaker 1
How can conversational AI be integrated into characters to enable branching, participatory narratives at scale?
Designing interactive story engines requires advances in dialogue management, personality modeling, and real‑time adaptation.
Speaker: Speaker 1
What strategies can leverage India’s mythological and folklore heritage to export culture through AI‑driven storytelling?
Identifying pathways to monetize cultural assets globally could give India a comparative advantage in the emerging market.
Speaker: Speaker 1
What is the roadmap for building ‘creative intelligence systems’ that combine generative engines, autonomous creative cycles, and narrative engines?
A technical blueprint is needed to move from isolated AI tools to end‑to‑end production pipelines for video, lighting, camera work, etc.
Speaker: Speaker 1
How should business models be re‑imagined for an infinite‑content world, moving beyond advertising and traditional subscriptions to community‑driven commerce?
Sustainable monetisation will determine the viability of AI‑generated media at massive scale.
Speaker: Speaker 1
Which specific Indian strengths—demographic energy, linguistic complexity, startup ecosystem—can be mobilised to lead the AI storytelling civilization?
Empirical studies are required to quantify these advantages and translate them into actionable policy or investment plans.
Speaker: Speaker 1
What are the design principles and audience impact of ‘micro dramas’ and live‑ops storytelling formats?
Understanding how ultra‑short, continuously updated narratives affect engagement will guide content strategy.
Speaker: Speaker 1
How will immersive, mixed‑reality and eventised screenings evolve when platforms become embedded inside stories rather than hosting them?
Research into hardware, UX, and distribution models is needed to realize seamless transitions across audio, video, games, and XR.
Speaker: Speaker 1
What milestones and metrics are required to reach 10 million AI‑assisted creators in India by 2030?
Defining measurable targets will help track progress and allocate resources effectively toward the envisioned ecosystem.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit


Session at a glance: summary, keypoints, and speakers overview

Summary

Speaker 1 opened the session by framing the talk around AI sovereignty, stressing its strategic importance for countries such as India that aim to lead in artificial intelligence [1-3]. He defined sovereignty as the capacity to own, control, and flexibly manage data and AI models while maintaining security, without excessive dependence on third-party providers [4-10]. Fujitsu leveraged its 90-year heritage and recent milestones, including the world’s first two-nanometre ARM-based servers and a quantum roadmap, to position itself as a provider of sovereign AI solutions [14-22][20-21]. The speaker identified three essential pillars for any AI platform (software, compute, and networking) and argued that true sovereignty requires independence across all three [26-29]. Fujitsu highlighted its Japanese-made hardware, notably the Monaka two-nanometre chip powering a planned 20-exaflop AI supercomputer and an upcoming 1.4-nanometre processor with an integrated NPU for inferencing, both featuring hardware-level confidential computing [35-41][42-48][55-58]. He emphasized that the accompanying software stack is fully open source, preventing vendor lock-in and allowing customers to fine-tune models for specific domains [44-52][84-86]. In the quantum domain, Fujitsu claimed a top-three global position, targeting a 250-logical-qubit system by 2030 and a 10,000-qubit machine within three years, to be combined with high-performance computing for mission-critical AI workloads [62-71][69-73]. Its networking strategy includes a 1.6-terabit photonic switch with low-latency, long-reach capabilities and an open-RAN orchestration layer to move AI workloads efficiently across data centres and edge sites [75-76]. The Takane large-language-model platform and Kozuchi AI-agent framework were presented as tools for building domain-specific, secure, and customizable AI applications in sectors such as defense, healthcare, and finance [77-84].
These technologies are intended to converge on edge devices, including robots, drones, and medical equipment, enabling autonomous operation while preserving data sovereignty [87-90][91-93]. Fujitsu’s go-to-market approach relies on integrated solutions and partnerships with companies like AMD, Lockheed Martin, and Supermicro rather than selling isolated components [95-99]. The session concluded with Speaker 2 announcing the next fireside chat featuring executives from CDAT and Intel and requesting speakers and the audience to clear the stage [100-104]. Overall, the discussion underscored Fujitsu’s strategy of combining proprietary Japanese hardware, open software, quantum advances, and network innovations to deliver sovereign AI capabilities for nations seeking secure, flexible, and independent AI infrastructures.


Keypoints


Major discussion points


AI sovereignty as a strategic priority – The speaker frames sovereignty as “being flexible and secure,” stressing the need for countries (e.g., India) to own and control their data and AI models without over-reliance on third parties [1-9][10-11].


Fujitsu’s sovereign-focused hardware portfolio – Highlights the upcoming two-nanometre ARM-based “Monaka” servers, a future 1.4 nm chip with 256-core + 128-core CPUs and an NPU for inferencing, and a planned 20-exaflop AI supercomputer built on the Fujitsu Monaka chip with confidential-computing features [20-23][36-43][44-48][55-58][60-62].


Open software stack and AI platform – Emphasizes that the software stack is completely open with no lock-in, incorporating the Takane large-language-model platform and the Kozuchi AI-agent technology to let customers fine-tune domain-specific, secure models for sectors such as defense, healthcare, and finance [44-49][78-84][85-86].


Quantum-HPC integration for mission-critical AI – Announces a roadmap to 250 logical qubits by 2030, a 1,000-qubit machine going live next month, and a 10,000-qubit system in three years, positioning quantum together with high-performance computing as a driver for advanced AI workloads [62-70][71-73].


Advanced networking and photonics – Describes a 1.6 Tb/s (future 3.2 Tb/s) low-power optical switch for long-distance, low-latency transmission, and the use of open-RAN orchestration to move AI workloads efficiently across data-centre and edge environments [76].


Overall purpose / goal


The discussion is a promotional briefing by Fujitsu aimed at positioning the company as a one-stop provider of “sovereign AI” solutions. By outlining its end-to-end capabilities (secure, open-source software; cutting-edge compute, including AI-optimized CPUs/NPUs and exascale supercomputers; quantum accelerators; and high-performance networking), the speaker seeks to convince governments and enterprises (particularly in India and Europe) that Fujitsu can deliver independent, privacy-preserving AI infrastructures without reliance on foreign cloud vendors.


Overall tone


The tone is confident, forward-looking, and technically detailed, consistently emphasizing innovation, security, and independence. It remains upbeat throughout the technical sections, shifting only at the very end when Speaker 2 takes over to announce the next session, where the tone becomes procedural and courteous. No major emotional or argumentative shift occurs within the main presentation.


Speakers

Speaker 1


– Area of expertise: Artificial intelligence, quantum computing, high-performance computing, networking, sovereign AI solutions


– Role: Presenter / speaker representing Fujitsu


– Title:


Speaker 2


– Area of expertise: Moderation / event facilitation


– Role: Moderator for the fireside-chat session [S1][S2][S3]


– Title:


Additional speakers:


– None


Full session report: comprehensive analysis and detailed insights

The presentation opened by defining AI sovereignty as the ability to be both flexible and secure: to own and control data, to fine-tune AI models, and to do so without excessive reliance on third-party providers [1-9][10-11]. This framing linked sovereignty directly to national security and economic independence.


Fujitsu background – The speaker highlighted Fujitsu’s 90-year legacy, from early mainframes through pioneering DRAM to the world’s first two-nanometre ARM-based servers, and noted the recent launch of a U.S. brand that aggregates all Fujitsu solutions for customers [14-23][20-22][15-19].


Three-pillar framework – Sovereign AI was positioned as requiring three independent layers: software, compute, and networking [26-29]. Fujitsu, as a provider of Japanese-made technology, was presented as an alternative to dominant American vendors, giving governments and enterprises a genuine choice for sensitive sectors such as defence, healthcare and finance [30-33].


Compute pillar

Fujitsu announced the imminent shipment of “Monaka” servers powered by its proprietary Monaka two-nanometre ARM chip, which embeds hardware-level confidential computing to protect data [35-41][60-62]. Building on this, the company disclosed a plan to deliver a 20-exaflop AI supercomputer within two years, also based on the Monaka chip [36-38]. The next-generation processor, a 1.4-nanometre device featuring a 256-core CPU, a 128-core CPU and an integrated NPU for AI inferencing, was described as the world’s first of its kind [42-43][55-58]. These chips are optimized for data-centre efficiency and include confidential-computing capabilities, reinforcing the security aspect of sovereignty [40-41].


For inference, the speaker distinguished between workloads: smaller- and medium-sized models can run on-premise using the NPU, whereas very large models may still require GPU-based or hybrid architectures [??].


Quantum pillar

Fujitsu outlined an ambitious quantum roadmap that the speaker described as a leading effort: a 250-logical-qubit system by 2030, a 1,000-qubit machine scheduled to go live next month in Kawasaki, and a 10,000-qubit machine expected within three years [62-71][65-68][70-71]. He also noted that Fujitsu designs its own quantum control electronics and is investing in advanced cryogenic cooling technology to support these systems [??]. The convergence of quantum computing with HPC was presented as a way to enable mission-critical AI workloads, shifting quantum from a standalone technology to an integral component of a hybrid compute ecosystem [69-73][S25][S39].
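For rough context on how logical and physical qubit counts relate (an illustrative assumption, not Fujitsu's published error-correction design): under a surface code of distance d, one logical qubit typically costs on the order of 2·d² physical qubits.

```python
# Back-of-envelope sketch only: assumes a surface code where one logical
# qubit needs roughly 2 * d**2 physical qubits (d = code distance).
# This is an illustrative rule of thumb, not Fujitsu's architecture.

def physical_qubits_needed(logical_qubits: int, code_distance: int) -> int:
    """Approximate physical-qubit budget under the 2*d^2 rule of thumb."""
    return logical_qubits * 2 * code_distance ** 2

# 250 logical qubits at a modest code distance of 5 already lands near the
# 10,000-physical-qubit scale of the roadmap's largest announced machine.
print(physical_qubits_needed(250, 5))  # -> 12500
```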


Networking pillar

Fujitsu unveiled a 1.6-terabit photonic switch (with a future 3.2-terabit version) that delivers low-power, long-reach (up to a thousand kilometres) optical transmission with low latency [75-76]. The switch is coupled with an open-RAN orchestration layer that can dynamically move AI workloads across data-centre and edge sites [75-76]. Fujitsu is one of only a few companies that offers both high-performance photonic switching and wireless solutions, enabling end-to-end AI-ready connectivity [??].
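The thousand-kilometre reach puts a physical floor under latency: light in fibre travels at roughly c/1.5. The estimate below is basic physics for illustration, not a Fujitsu latency figure.

```python
# Rough one-way propagation delay over optical fibre, assuming a
# refractive index of ~1.5 (illustrative, not a vendor specification).

C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s
FIBRE_INDEX = 1.5

def one_way_delay_ms(distance_km: float) -> float:
    """Propagation delay in milliseconds for a given fibre distance."""
    return distance_km / (C_VACUUM_KM_S / FIBRE_INDEX) * 1000

print(round(one_way_delay_ms(1000), 1))  # -> 5.0 (ms, one way)
```

So even an ideal 1,000 km link carries about 5 ms of one-way propagation delay before any switching or queuing, which is why "low latency" claims at this reach refer to the equipment's added overhead.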


Software pillar

The software stack was portrayed as completely open source, eliminating vendor lock-in and allowing customers to fine-tune models to their specific needs [44-52]. It is optimised for AI, data-centre workloads and high-performance computing [44-48][49-51]. Domain-specific platforms sit atop Fujitsu’s proprietary security layer: Takane, a large-language-model platform, and Kozuchi, an AI-agent framework [77-84][85-86].


Edge and physical AI vision

Fujitsu described the “Kozuchi physical OS” as an operating system that embeds intelligence directly into robots, drones, medical devices and other edge equipment, allowing them to retain memory and operate autonomously [86-93][87-90][91-93]. This edge-centric approach unifies compute, networking and AI software into a single consumable platform, extending sovereign AI capabilities to the network periphery.


Services and ecosystem

The speaker emphasized that Fujitsu’s large services organization integrates hardware, software, networking and consulting to deliver a turnkey sovereign AI platform for customers [??]. Partnerships with AMD, Lockheed Martin, Supermicro and various robotics manufacturers were cited as ways to broaden the ecosystem, ensure interoperability and accelerate delivery of “physical AI” solutions across multiple industries [95-99][97-99].


Transition

The session concluded with the speaker handing the stage to Speaker 2, who announced an upcoming fireside chat featuring executives from CDAT and Intel and asked the audience to clear the stage [100-104].


Overall, the talk positioned Fujitsu as a one-stop provider of sovereign AI infrastructure, combining Japanese-made, security-focused hardware, an open software stack, a quantum roadmap, high-capacity photonic networking and a robust services ecosystem to deliver flexible, independent AI solutions for critical national and industrial applications.


Session transcript: Complete transcript of the session
Speaker 1

AI commerce. What I’m going to talk about is something that was discussed in the plenary session yesterday as well about sovereignty. And I believe something like sovereignty is very, very important for countries like India, which are trying to eke out a path in leading AI and being dominant in AI. Now, what is sovereignty, first of all? For us, it is being flexible. And being secure, right? So you want ownership of your data. You want to control that data. But you also want to have flexibility to manage that data, create models that meet your needs, that doesn’t have to be reliant on third party overwhelmingly. And you can modify and tune that data, right? Modify and tune those models.

So Fujitsu is on a path to – we’ve always been an innovative company, and we have a long history, and I’ll talk briefly. But how do we make that sovereign? And that’s what I’m going to talk about today. So Fujitsu, some of you might not know it. I mean, we have a 90-year-old history, right? So we are a pretty old company. We have our roots all the way in technology. And if you look at some of the things that are demonstrated here, one megabit DRAM, for example, right? Of course, we were one of the pioneers of the mainframe business along with IBM. In the recent past, we’ve announced, which we will be shipping very shortly, the world’s first two nanometer servers, ARM-based servers.

We announced for quantum, if you are not aware, which I will be talking about shortly as well, we have the world’s leading quantum roadmap here that we are going to deliver. Same on networks. And our U.S. brand that we created in 2021, that effectively brings all of Fujitsu’s solutions to be consumed by our customers. Now, how does this work in the context of AI? And why is this relevant in the context of AI? And that’s what I’m going to talk about. So, to effectively drive artificial intelligence, you need three key components, right? You obviously need software, you need compute, and you need networks, right? If you don’t have those three, you can’t really build an AI platform that will suit your enterprise needs.

And our focus on sovereignty here is really being independent in all of these three areas and give customers a choice. We are a Japanese company. Our technology is made in Japan and that’s where we find ourselves at a very interesting point because we are a choice to a lot of American companies as an alternative. So if you’re looking for leading edge computing technology, leading edge quantum technology, leading edge network technology, leading edge AI software technology, agentic technology, and there’s an end user application on which you can build an AI platform in the area such as defense, government, healthcare, manufacturing, finance, where you do care about privacy, this becomes very, very important. Now how do we actually drive that?

Some of the speakers talked about commerce, which are big, but at the end of the day, if you don’t have a platform that helps you deliver that, you’re never going to be sovereign, you’re never going to control the AI business. Fujitsu has a couple of areas that we are focused on, as I mentioned, computing. If you think about CPUs, you think about AMD, you think about Intel. Until two years ago, we had the fastest supercomputer in the world for five years running. And we announced that we will be building a 20-exaflop AI supercomputer in about two years from now, which will be driving pretty much AI applications, AI workloads. This will be powered by our Fujitsu Monaca chip, which is a two nanometer chip.

It’s built in Japan, and it is completely ARM-based, highly power efficient, focused on data centers to reduce power consumption. Okay, and it has confidential computing built in at the hardware level to drive security. Now, this comes out, the servers come out in about two months from now, the test servers. It’s ARM-based. The follow-up of this is a 1.4 nanometer, and that will also be the world’s first 1.4 nanometer, which has two versions, a 256-core CPU plus a 128-core CPU plus an NPU, to drive exactly what India needs, sovereign AI models focused on inferencing. And this is something that I believe will drive a lot of value in countries like India as well as Europe. I’m not going to go into this in detail, but this stack is a completely open software stack.

I just want you to remember, it’s a completely open software stack. There’s nothing locked in. You don’t get locked into a Fujitsu stack. All the software that you see here is completely open. It’s focused on AI. It’s focused on data centers, and it’s focused on HPC. This is what you need for AI, right? All the key areas are a lot of open source software that we have fine-tuned to work on this processor. This can help you drive your AI workloads today on the Monaco servers. As I mentioned, what’s coming? There are two versions, the 256-core CPU plus the 128-core CPU with the NPU on it. The NPU is focused on AI inferencing. You will see a lot of work going into inferencing moving forward.

And especially when you talk about sovereign, this will become extremely important, especially with small language models and medium language models. So you can contain that in a private or a semi-private environment that you can choose. Obviously, if you want large language models, you can choose what is going on on GPUs. And you can obviously choose the Monaco GPU hybrid architecture as well. Now, for those of you who might not be aware, we are extremely highly invested in quantum. We need quantum in Japan. I would say we are probably one of the top three players in quantum worldwide. We have announced a 250 logical qubit roadmap by the end of 2030, which is ahead of any other company in the world that I know of.

We make our own control systems. We are going to focus on driving the cooling systems as well. And this is going to become extremely important as you go ahead. Quantum plus HPC together driving mission-critical AI workloads. The 10,000-qubit machine will go live in about three years from now. Next month, the 1,000-qubit machine goes live in Kawasaki, Japan. As I mentioned, you would have HPC and quantum working together to drive AI workloads. This is how computing will be consumed moving forward. And the software stack that we are working on will make it transparent for end users to consume compute, and the workload can be optimized for whichever computer you want. Now, I’ll briefly talk about the networks because that’s the other part of the puzzle.

And finally, I’ll talk about the software for AI. Photonics and wireless: we are probably one of only two companies that does both. And we are doing a 1.6-terabit switch that is highly power efficient, that drives about a thousand kilometers of distance on long-range transmission, low latency, low power consumption. That’s the beauty of the switch, right? And we will go on to 3.2 terabit. This has very strong implications for the data centers that are being built in India, as those would be highly, highly power hungry, and you would need to connect them through optical fibers, and the same with the wireless mobile systems. Okay, now what we do is we also connect with open RAN and the network orchestration stack to bring the AI workloads, move them in a highly efficient manner, and we’re going to do that by using the RAN. This is the third part that really brings everything together, the AI software stack.

Fujitsu, as I mentioned at the very beginning, we are focused on sovereign. When we talk about sovereign, it’s got to be domain specific, something for defense. If you’re making nuclear plants or submarines or healthcare, this is not the data you want to put on public cloud. You want to define and build these domain specific models. Second, you need to have flexibility. You, as a company, should be able to fine tune these models to your own benefit, to your own needs. They need to be highly secure. These are the three key areas that we are focused on using what we call our Takane, the large language model platform, as well as our AI agentic model, Kozuchi, powered by the security platform that we have built within our own research teams.

It’s a complete platform that you can use to build your own applications. Thank you. Again, I won’t go into details on this, but this is a platform that also uses third-party tools, and you see on the extreme right where we have the government, manufacturing, healthcare and finance applications. So Fujitsu has a fairly large business in services, which brings all of this together. So we are not just selling you pieces of technology; we are selling you a total solution, or we are asking you to use a total solution here, from compute all the way to networks and the application stack together. And this is our vision that we want to continue to build on, continue to bring this to the end customers as well as users. Now, where are we headed, right? We see all this converge in the physical AI platform space, and what we are building is Kozuchi physical OS, which will have the intelligence, based on the brain, intelligence for the robot. And what that means is robots tend to forget. And what we are working on is some intelligence work and research so that robots can continue to remember.

But then this technology, the compute networks, as well as the AI platform stack, comes together in edge devices. Robots are one example, but even drones or medical devices or your healthcare on your iPhones. That’s where it will all come together. And that’s the world we are aiming for. That will bring together the AI agentic platform together. That will bring the security platform together in the complete platform that could be consumed for our end users, our companies. And you can choose to play in a part that is comfortable for you. And we are obviously going to partner with a lot of different companies on this. So, as I mentioned, the software, compute, network, the three pillars.

And we are going to be able to do that. We announced in October last year, our CEO Tokita and Jensen were on stage together announcing a huge partnership on physical AI, where we’re partnering with different robotics manufacturers. So it’s working with AMD, working in defense with Lockheed Martin, Supermicro. So this is something

Speaker 2

Thank you. Thank you so much. For the next session, we have a fireside chat between Mr. Vivek Kaneja, Executive Director, CDAT, Mr. Nitin Bajaj, Director, Sales and Marketing, Intel, and the session will be moderated by Mr. Aman Khanna, Vice President of the Asia Group. May I request all the speakers to join us on the stage, please? I also request everybody to please clear the pathway. May I request the audience to please clear the pathway?

Related Resources: Knowledge base sources related to the discussion topics (15)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“AI sovereignty is defined as the ability to be flexible and secure—own and control data, fine‑tune AI models, and reduce reliance on third‑party providers.”

The knowledge base explicitly describes AI sovereignty in terms of flexibility, security and reduced third-party dependence, matching the report’s definition [S8] and also notes the focus on control over data, models and security measures [S45].

Confirmed (high)

“Sovereign AI requires three independent layers – software, compute, and networking – and Fujitsu, as a Japanese‑made provider, offers a genuine alternative to dominant American vendors.”

A Fujitsu representative states that sovereignty means independence in three areas and highlights the company’s Japanese origin as a choice against American suppliers [S26].

Additional Context (medium)

“Linking AI sovereignty to national security and economic independence.”

The broader discussion connects technological sovereignty with economic independence and strategic security, emphasizing the risk of “renting intelligence” from foreign providers [S46].

Additional Context (medium)

“Emphasis on reducing third‑party dependence as a core aspect of AI sovereignty.”

The knowledge base highlights a nuanced framework that balances selective control with collaboration, underscoring the importance of limiting reliance on external vendors [S21].

Additional Context (low)

“Incorporation of hardware‑level confidential computing to protect data in Fujitsu’s compute solutions.”

Fujitsu’s security-by-design approach, integrating security into software and hardware development, provides additional context for the confidential-computing claim [S17].

External Sources (53)
S1
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S2
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S3
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 4 — Mozambique: Thank you, Mr. Chair, for giving me the floor. Mr. Chair, Mozambique aligned itself with statement delive…
S8
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — This comment reframes AI sovereignty from a purely nationalistic concept to a practical business and security imperative…
S9
Global Internet Governance Academic Network Annual Symposium | Part 3 | IGF 2023 Day 0 Event #112 — Adio Adet Dinika:All right. Wonderful. Thanks for that. So, quickly moving on to the Crimean postcolonial critique, basi…
S10
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — Announcer: Please welcome Mr. Takahito Tokita, the President and CEO of Fujitsu. Takahito Toki…
S11
What policy levers can bridge the AI divide? — Ebtesam Almazrouei: Good afternoon, everyone. It’s our pleasure to have you here today with us again and discussing a ve…
S12
UNSC meeting: Scientific developments, peace and security — In this address to the Security Council, Guyana’s representative emphasised the critical need to adapt to rapidly advanc…
S13
Steering the future of AI — Nicholas Thompson: All right, Jann, you ready to be information-dense? That was a good introduction. How are you? I’m pr…
S14
9821st meeting — Secretary-General – Antonio Guterres:Mr. President, Excellencies, I thank the United States for convening the Meeting on…
S15
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S16
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Basma Ammari: That’s a very good question. I think it’s a natural continuation of what my friend Ivana was just speak…
S17
Open/secure 5G and supplier diversification — From a vendor’s perspective, Fujitsu is concentrating on integrating security into their software development process. T…
S18
Fireside Conversation: 02 — -Moderator: Role – Event moderator/host (introducing speakers and facilitating the event) Absolutely. So the relationsh…
S19
Fireside chat with Dr Matthew Meselson — Ljupčo Gjorgjinski: Ah, all right, well, good morning, good afternoon, good evening, wherever you are in the world. …
S20
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S21
Panel Discussion Data Sovereignty India AI Impact Summit — This comment reframes the entire sovereignty debate by distinguishing between isolation and strategic control. It moves …
S22
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S23
Secure Finance Risk-Based AI Policy for the Banking Sector — “Three dominate cloud capacity and a handful command foundation models threatening financial stability and economic sove…
S24
Global Perspectives on Openness and Trust in AI — And then exclusive partnerships and the systems being opaque. So those were the things identified in the market study. A…
S25
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Georges Olivier Reymond It shifted the discussion from viewing quantum computing as a standalone technology to seeing i…
S26
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-vivek-mahajan-cto-fujitsu-india-ai-impact-summit — We have announced a 250 logical qubit roadmap by the end of 2030, which is ahead of any other company in the world that …
S27
The Virtual Worlds we want: Governance of the future web | IGF 2023 Open Forum #45 — Masahisa Kawashima:And also I want to just make one comment quickly. So to achieve high bandwidth, low latency radio com…
S28
Future Network System as Open Platform in Beyond 5G/6G Era | IGF 2023 Day 0 Event #201 — Tony Quek:Okay, I’m Tony. So I’m a faculty at a university, but I’m also serving as a director of the Future Communities…
S29
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S30
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — This comment reframes AI sovereignty from a purely nationalistic concept to a practical business and security imperative…
S31
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — This discussion features an 8-year-old prodigy presenting their perspective on global AI development and India’s strateg…
S32
Panel Discussion Data Sovereignty India AI Impact Summit — Both speakers agree that sovereignty should involve strategic partnerships and collaboration rather than complete self-r…
S33
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — This comment provides a philosophical and ethical framework for the entire biotech sovereignty agenda, showing how India…
S34
Open/secure 5G and supplier diversification — Anil Umesh:Thank you. So also from energy efficiency, we see a big part from two aspects. One is from the hardware persp…
S35
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — This discussion features Takahito Tokita, President and CEO of Fujitsu, presenting the company’s vision for artificial i…
S36
WS #208 Democratising Access to AI with Open Source LLMs — Daniele Turra: Thank you so much, Ahitha, for presenting me today. I’m so glad to be here to discuss this very importa…
S37
Global Perspectives on Openness and Trust in AI — And then exclusive partnerships and the systems being opaque. So those were the things identified in the market study. A…
S38
Democratizing AI: Open foundations and shared resources for global impact — The Swiss-made LLM represents the largest truly open language model designed to serve society, with complete transparenc…
S39
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Georges Olivier Reymond It shifted the discussion from viewing quantum computing as a standalone technology to seeing i…
S40
Fujitsu and RIKEN expand quantum computing with 256 qubits — Fujitsu and RIKEN, a prominent Japanese research institute,have unveileda new 256-qubit superconducting quantum computer…
S41
Quantum and supercomputing converge in IBM-AMD initiative — IBM has announced plans todevelop next-generation computing architecturesby integrating quantum computers with high-perf…
S42
The Virtual Worlds we want: Governance of the future web | IGF 2023 Open Forum #45 — Masahisa Kawashima:And also I want to just make one comment quickly. So to achieve high bandwidth, low latency radio com…
S43
Researchers develop high-frequency, low-power switch to revolutionise 6G communications — Researchers atUAB, theUniversity of Texas at Austinand theUniversity of Lilledeveloped atelecommunications switchthat op…
S44
Panel #3: « Gouverner les données : entre souveraineté, éthique et sécurité à l’ère de l’interconnexion » — Drudeisha Madhub Merci beaucoup de m’avoir invitée à l’OIF. C’est vraiment un joli atelier depuis hier, c’est une belle …
S45
Building Sovereign and Responsible AI Beyond Proof of Concepts — Sovereignty dimension focuses on control over data, models, and security measures
S46
Keynote-Mukesh Dhirubhai Ambani — This is a profound strategic insight that connects technological sovereignty with economic independence. The metaphor of…
S47
Discussion Report: Sovereign AI in Defence and National Security — This comment addresses a key concern about AI sovereignty leading to fragmentation, instead positioning it as a foundati…
S49
MahaAI Building Safe Secure & Smart Governance — Unexpected framing of data monetization not just as economic opportunity but as preventing exploitation by foreign entit…
S50
Regional Leaders Discuss AI-Ready Digital Infrastructure — “So these three S were introduced yesterday by ITU’s head, the three S of solutions, standards, and skills”[19]. “So whe…
S51
Building the AI-Ready Future From Infrastructure to Skills — And we delivered that first exascale system, Valfrontier at Utrich, for less than 20 megawatts. Everybody thought it was…
S52
Exploring AI developments: From brain implants to ChatGPT Enterprise — Next Monday is going to be an exciting day! Tesla is making a supercomputer that is going to be one of the most powerful…
S53
Quantum leap: The future of computing — If AI was the buzzword for 2023 and 2024,quantum computinglooks set to claim the spotlight in the years ahead. Despite g…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
7 arguments, 157 words per minute, 1953 words, 741 seconds
Argument 1
Definition of sovereignty as flexibility, security, and data ownership (Speaker 1)
EXPLANATION
Speaker 1 defines AI sovereignty as the ability of a nation to retain control over its data and AI models while maintaining flexibility to adapt them. It combines data ownership, security, and the capacity to modify and tune AI systems without dependence on external providers.
EVIDENCE
Speaker 1 states that sovereignty means being flexible and secure, requiring ownership and control of data, and the ability to modify and tune models without heavy reliance on third parties [4-8][9-11].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vivek Mahajan’s keynote explicitly defines AI sovereignty in terms of flexibility, security, and reduced third-party dependence, aligning with the speaker’s definition [S8].
MAJOR DISCUSSION POINT
Definition of AI sovereignty
Argument 2
Fujitsu’s 2 nm ARM‑based Monaca chip and upcoming 1.4 nm CPUs with NPU for AI inferencing, using an open software stack (Speaker 1)
EXPLANATION
Speaker 1 highlights Fujitsu’s development of a 2‑nm ARM‑based Monaca processor and forthcoming 1.4‑nm CPUs equipped with NPUs for AI inference, emphasizing that these are delivered with an open software stack. This infrastructure aims to give customers sovereign control over compute resources.
EVIDENCE
He mentions the Monaca chip as a two-nanometer ARM-based processor built in Japan with confidential computing at the hardware level [38-40], and describes upcoming 1.4-nm CPUs featuring 256-core and 128-core configurations plus an NPU for AI inference [42]. He also stresses that the software stack is completely open with no lock-in [45-48].
MAJOR DISCUSSION POINT
Open hardware and software for sovereign AI compute
Argument 3
Plan to build a 20 exaflop AI supercomputer powered by the Monaca chip (Speaker 1)
EXPLANATION
Speaker 1 announces Fujitsu’s intention to construct a 20‑exaflop AI supercomputer powered by the Monaca chip, positioning it as a national‑scale resource for AI workloads. The project builds on Fujitsu’s history of operating world‑leading supercomputers.
EVIDENCE
Speaker 1 notes that Fujitsu previously operated the world’s fastest supercomputer for five years and now plans to build a 20-exaflop AI supercomputer within two years, powered by the Monaca chip [36-38].
MAJOR DISCUSSION POINT
Planned exascale AI supercomputer
Argument 4
Fujitsu’s roadmap to 250 logical qubits by 2030, launch of a 1 000‑qubit machine, and coupling quantum with HPC for mission‑critical AI workloads (Speaker 1)
EXPLANATION
Speaker 1 outlines Fujitsu’s quantum ambitions, targeting 250 logical qubits by 2030 and the imminent deployment of a 1,000‑qubit machine, while emphasizing the integration of quantum processors with high‑performance computing to support mission‑critical AI. This strategy positions quantum as a complementary pillar to classical AI infrastructure.
EVIDENCE
He outlines Fujitsu’s quantum roadmap, including a target of 250 logical qubits by 2030 [65], the launch of a 1,000-qubit machine next month in Kawasaki [71], and a 10,000-qubit system expected in three years, emphasizing the coupling of quantum with HPC for AI workloads [62-70].
MAJOR DISCUSSION POINT
Quantum roadmap for AI
Argument 5
Development of a 1.6 Tbps photonic switch (future 3.2 Tbps), low‑latency optical links, and open RAN orchestration to move AI workloads efficiently (Speaker 1)
EXPLANATION
Speaker 1 describes the creation of a 1.6‑Tbps photonic switch (with a planned 3.2‑Tbps version) that provides low‑latency, power‑efficient optical links, and mentions the use of open RAN orchestration to efficiently transport AI workloads across networks. These networking advances are presented as essential for sovereign AI deployments.
EVIDENCE
Speaker 1 describes a 1.6-Tbps photonic switch that is highly power-efficient and supports low-latency, long-distance optical transmission, with plans to upgrade to 3.2 Tbps, and mentions the use of open RAN and network orchestration to move AI workloads efficiently [76].
MAJOR DISCUSSION POINT
High‑capacity photonic networking
Argument 6
Takane large‑language‑model platform and Kozuchi AI‑agent built on Fujitsu’s proprietary security layer, enabling domain‑specific, fine‑tuned, secure AI models (Speaker 1)
EXPLANATION
Speaker 1 presents Fujitsu’s Takane large‑language‑model platform and Kozuchi AI‑agent, both built on a proprietary security layer, enabling the development of domain‑specific, fine‑tuned, and secure AI models for sectors such as defense, nuclear, and healthcare. The emphasis is on sovereignty through security and customization.
EVIDENCE
He explains that sovereign AI requires domain-specific models for sectors like defense, nuclear, healthcare, and that Fujitsu’s Takane LLM platform and Kozuchi AI-agent, built on a proprietary security layer, enable fine-tuning and secure deployment of such models [78-84].
MAJOR DISCUSSION POINT
Secure domain‑specific AI platforms
Argument 7
Fujitsu offers a total solution from compute to applications, partnering with AMD, Lockheed Martin, Supermicro and others for physical AI, robotics, and industry use cases (Speaker 1)
EXPLANATION
Speaker 1 emphasizes that Fujitsu delivers a complete, end‑to‑end AI solution covering compute, networking, and applications, and cites partnerships with AMD, Lockheed Martin, Supermicro and others to advance physical AI, robotics, and industry use cases. The message is that customers can obtain a sovereign AI stack without piecemeal integration.
EVIDENCE
Speaker 1 states that Fujitsu provides a total solution from compute to applications, leveraging its own technologies and partnerships with AMD, Lockheed Martin, Supermicro and others for physical AI, robotics and industry use cases [86-99].
MAJOR DISCUSSION POINT
End‑to‑end AI solution and partnerships
Speaker 2
1 argument · 102 words per minute · 78 words · 45 seconds
Argument 1
Announcement of the upcoming fireside chat and request for speakers and audience to clear the pathway (Speaker 2)
EXPLANATION
Speaker 2 introduces the next session, announcing a fireside chat with senior executives from CDAT and Intel, and asks both speakers and the audience to clear the stage. The remarks serve to transition the program and manage logistics.
EVIDENCE
Speaker 2 thanks the audience, announces a fireside chat with Mr. Vivek Kaneja and Mr. Nitin Bajaj, and requests the speakers and the audience to clear the pathway [100-104].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The program schedule mentions a fireside chat with senior executives from CDAT and Intel, confirming the announced session and its participants [S8]; additional references describe the fireside conversation format [S18].
MAJOR DISCUSSION POINT
Session handover and logistics
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Overall Assessment

The transcript contains a single substantive contribution (Speaker 1) that outlines Fujitsu’s vision for AI sovereignty, emphasizing data ownership, open hardware/software, quantum‑HPC integration, high‑capacity photonic networking, and end‑to‑end AI solutions. Speaker 2 only provides a procedural hand‑over to the next session and does not present any substantive policy or technical arguments. Consequently, there is no demonstrable overlap or shared stance between the two speakers on any of the listed arguments.

Convergence level: Very low – the only points of convergence are procedural (both participants are part of the same event). The lack of substantive agreement limits any immediate implications for policy or technical coordination on AI sovereignty.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains a detailed presentation by Speaker 1 on Fujitsu’s sovereign AI strategy and a brief logistical hand‑over by Speaker 2. No substantive policy or technical positions are offered by Speaker 2 that could be compared with Speaker 1, resulting in an absence of identifiable disagreement or partial agreement between the speakers.

Disagreement level: Very low – the two speakers do not present conflicting viewpoints, so the discussion proceeds without contention, implying a smooth transition but offering no debate on the AI sovereignty topics.

Takeaways
Key takeaways
AI sovereignty is defined as the combination of flexibility, security, and ownership/control of data.
Fujitsu is developing sovereign AI infrastructure across compute, network, and software layers.
Compute: the 2‑nm ARM‑based Monaca chip, upcoming 1.4‑nm CPUs with an integrated NPU for AI inferencing, and a planned 20‑exaflop AI supercomputer.
All compute solutions use a completely open software stack, avoiding vendor lock‑in.
Quantum: a roadmap to 250 logical qubits by 2030, the launch of a 1,000‑qubit machine, and integration of quantum with HPC for mission‑critical AI workloads.
Network: a 1.6 Tbps (future 3.2 Tbps) photonic switch, low‑latency long‑range optical links, and open RAN orchestration to move AI workloads efficiently.
AI software: the Takane large‑language‑model platform and the Kozuchi AI‑agent, built on Fujitsu’s proprietary security layer, enabling domain‑specific, fine‑tuned, secure models.
Fujitsu offers an end‑to‑end, total solution—from compute and networking to applications—and is partnering with companies such as AMD, Lockheed Martin, and Supermicro for physical AI and robotics use cases.
The session concluded with a transition announcement for a fireside chat featuring Intel and CDAT executives.
Resolutions and action items
None identified
Unresolved issues
None identified
Suggested compromises
None identified
Thought Provoking Comments
For us, sovereignty is being flexible and being secure – you want ownership of your data, control over it, and the ability to manage and tune models without overwhelming reliance on third‑party providers.
Sets a clear, nuanced definition of AI sovereignty that goes beyond simple data ownership, framing it as a balance of flexibility and security.
Establishes the conceptual foundation for the rest of the talk, prompting the speaker to structure the presentation around three independent pillars (software, compute, networks) and influencing listeners to view sovereignty as a multi‑dimensional requirement.
Speaker: Speaker 1
To effectively drive artificial intelligence, you need three key components: software, compute, and networks. Our focus on sovereignty is being independent in all three areas and giving customers a choice.
Introduces a concrete three‑pillar framework that organizes the technical discussion and highlights independence as the core of sovereignty.
Creates a turning point that shifts the narrative from abstract definition to concrete technology domains, leading to separate deep‑dives into compute, quantum, and networking later in the talk.
Speaker: Speaker 1
We will be building a 20‑exaflop AI supercomputer in about two years, powered by our Fujitsu Monaca 2‑nm ARM‑based chip with confidential computing built in at the hardware level.
Announces a tangible, cutting‑edge hardware roadmap that directly addresses the compute pillar and ties hardware design to security (confidential computing).
Introduces a new topic—high‑performance, secure compute—prompting the audience to consider how such hardware can underpin sovereign AI workloads and setting up later references to NPUs and inference‑focused designs.
Speaker: Speaker 1
The software stack we are delivering is completely open – there is nothing locked in, no vendor lock‑in.
Challenges the common perception that enterprise AI platforms are proprietary, positioning Fujitsu’s offering as uniquely flexible and interoperable.
Deepens the conversation about sovereignty by linking it to software openness, reinforcing the earlier flexibility theme and preparing listeners for the later discussion of domain‑specific models.
Speaker: Speaker 1
We have announced a 250‑logical‑qubit roadmap by the end of 2030, with a 1,000‑qubit machine going live next month and a 10,000‑qubit machine in three years – quantum plus HPC together will drive mission‑critical AI workloads.
Introduces quantum computing as a strategic pillar, positioning Fujitsu as a top‑three global player and tying quantum progress to AI workload acceleration.
Creates a major shift in the discussion, expanding the scope from classical compute to quantum, and adds complexity by suggesting a future where quantum‑HPC hybrid systems become part of sovereign AI infrastructure.
Speaker: Speaker 1
We are developing a 1.6 Tbps (soon 3.2 Tbps) photonic switch that delivers low‑latency, low‑power transmission over thousand‑kilometre distances, and we integrate it with open RAN for AI‑driven network orchestration.
Presents a novel networking solution that directly addresses the third pillar, emphasizing ultra‑high bandwidth and energy efficiency for data‑center scale deployments.
Shifts the conversation to the networking layer, showing how Fujitsu’s hardware complements the compute and software stacks, and reinforces the end‑to‑end sovereignty narrative.
Speaker: Speaker 1
Sovereign AI must be domain‑specific – for defense, nuclear, healthcare, finance you cannot use public clouds; you need to build and fine‑tune models that are secure and tailored to each sector.
Deepens the sovereignty concept by linking it to regulatory and risk considerations across critical sectors, moving beyond technology to real‑world application constraints.
Guides the audience to think about use‑case driven deployments, setting the stage for the later mention of the Takane LLM platform and AI agent tech as tools for building such domain‑specific solutions.
Speaker: Speaker 1
Our vision is the Kozuchi physical OS – a physical AI platform that gives robots (and other edge devices like drones or medical equipment) a persistent memory and intelligence, merging compute, network, and AI software into edge‑ready solutions.
Introduces a forward‑looking, integrated edge‑AI concept that unifies all three pillars into a single physical operating system, highlighting a future direction beyond data‑center centric AI.
Acts as a concluding turning point, tying together compute, networking, and software into a cohesive product vision, and signals upcoming partnerships and ecosystem development.
Speaker: Speaker 1
Overall Assessment

The discussion was driven by a series of strategically placed, thought‑provoking statements that each expanded the notion of AI sovereignty from a high‑level definition to concrete, technology‑specific commitments. By first framing sovereignty as flexibility plus security, the speaker set a lens through which every subsequent claim—whether about open software, exascale compute, quantum roadmaps, ultra‑high‑bandwidth networking, domain‑specific model requirements, or an integrated edge OS—was interpreted as a step toward independent, secure AI capability. Each turning point introduced a new pillar or application layer, progressively deepening the conversation and reinforcing Fujitsu’s narrative of offering a complete, sovereign AI stack. This layered approach kept the audience engaged, shifted focus smoothly across topics, and culminated in a holistic vision that ties all components together, thereby shaping the overall discourse around a unified, sovereign AI ecosystem.

Follow-up Questions
Can you provide detailed specifications and roadmap for the open software stack that Fujitsu claims is completely open and fine‑tuned for AI, compute, and HPC?
Understanding the components, licensing, and integration points of the open stack is crucial for customers to assess compatibility and avoid vendor lock‑in.
Speaker: Speaker 1
What are the exact performance characteristics, availability timeline, and deployment plans for the 1.4 nm chip featuring 256‑core CPU, 128‑core CPU, and an NPU for AI inferencing?
Clarifying these details will help stakeholders evaluate the chip’s suitability for sovereign AI workloads, especially in markets like India and Europe.
Speaker: Speaker 1
How is confidential computing implemented at the hardware level in the Fujitsu Monaca chip, and what security guarantees does it provide?
Security is a core element of sovereignty; explicit information on hardware‑based confidentiality is needed to build trust.
Speaker: Speaker 1
What is the detailed quantum roadmap, including milestones for the 250 logical qubits by 2030, the 1,000‑qubit machine launching next month, and the 10,000‑qubit system expected in three years?
Stakeholders need a clear timeline and technical roadmap to plan integration of quantum resources with AI workloads.
Speaker: Speaker 1
In what ways will quantum computing and high‑performance computing (HPC) be integrated to drive mission‑critical AI workloads, and what software abstractions will support this integration?
Understanding the coupling of quantum and HPC is essential for designing hybrid workloads and assessing performance benefits.
Speaker: Speaker 1
What are the performance metrics, power‑efficiency figures, and scalability plans for the 1.6 Tbps photonic switch and its future 3.2 Tbps version?
Network bandwidth and latency are critical for data‑center and edge AI; detailed specs are needed for capacity planning.
Speaker: Speaker 1
How will the open RAN and network orchestration stack be leveraged to move AI workloads efficiently across the infrastructure?
Clarifying orchestration mechanisms will help operators optimize AI workload placement and ensure low‑latency delivery.
Speaker: Speaker 1
Can you elaborate on the Takane large language model platform and the Kozuchi AI agent technology, particularly regarding their security architecture and customization capabilities?
Domain‑specific, secure, and fine‑tunable models are central to sovereign AI; detailed information is required for adoption.
Speaker: Speaker 1
What processes and governance frameworks are envisioned for building domain‑specific sovereign AI models for sectors such as defense, nuclear, healthcare, and finance?
These sectors have strict data‑privacy and regulatory requirements; guidance on model development and control is necessary.
Speaker: Speaker 1
What are the specifics of Fujitsu’s partnerships with AMD, Lockheed Martin, Supermicro, and robotics manufacturers for physical AI, and how will these collaborations materialize in products?
Partnership details will indicate ecosystem support, interoperability, and market reach of the physical AI solutions.
Speaker: Speaker 1
How will the compute, network, and AI software stack be integrated into edge devices such as robots, drones, and medical equipment, and what are the expected use‑cases?
Edge deployment is a key frontier; understanding integration pathways and applications will guide developers and OEMs.
Speaker: Speaker 1
What concrete deployment strategies and value propositions does Fujitsu propose for sovereign AI in India and Europe, considering local regulatory and infrastructure contexts?
Tailored approaches are needed to address regional sovereignty concerns, market needs, and policy environments.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.