India’s AI Future: Sovereign Infrastructure and Innovation at Scale

20 Feb 2026 16:00h - 17:00h


Session at a glance: summary, key points, and speakers overview

Summary

The panel opened with the launch of the “Sovereign AI” research report by Amrita Vishwa Vidyapeetham and introduced a diverse group of industry and academic leaders to discuss how India can build sovereign AI capabilities [1-4]. Moderator Ankit Bose then asked each panelist to name the single most important factor for India to achieve AI leadership, both for the country and for the Global South [44-45][92-95].


Sunil Gupta argued that India’s principal bottleneck is the lack of abundant GPU compute, noting that only a few thousand GPUs are currently available while millions will be needed for large-scale inference and training [54-58][70-78]. He described how the government’s “shared compute facility” has pooled roughly 38,000 GPUs from providers such as Yotta and is adding another 20,000, creating a low-cost resource for startups and research [224-236][237]. Gupta urged that this shared infrastructure be extended beyond model training to support the first wave of inference for sectoral use cases, with government subsidies for the initial cycle [240-247][250-254].


Kalyan Kumar highlighted that sovereign AI also requires a robust data layer, including localized vector databases, data catalogs and data contracts, to enable distributed edge inference and high-quality data products [96-108]. He explained that HCL’s acquisitions of Actian and CWI assets give it control over core database patents and a vector AI engine slated for release, which will underpin this data-centric approach [98-103]. Kumar stressed that without such data infrastructure, even abundant compute cannot deliver scalable AI solutions [105-108].


Brandon Mello identified three systemic barriers to AI adoption in Indian enterprises: difficulty quantifying ROI, fragmented departmental ownership of AI projects, and the lack of executive sponsorship [119-124][129-143]. Ganesh Ramakrishnan added that ensuring interoperability across the AI stack, from models to data contracts, will foster participation, enable alternative solutions, and support a collaborative ecosystem of academia and industry [151-162]. He also emphasized co-design and a nine-institution academic consortium that is building multilingual foundation models tailored to Indian contexts [163-170][188-194].


The panelists converged on the view that building sovereign AI requires coordinated investment in compute, data infrastructure, skilled talent, and open collaboration between government, startups and research institutions [215-223][454-455]. They announced ongoing actions such as NASSCOM’s policy draft, a new MOU with Amrita, and a QR-code-driven feedback mechanism to shape India’s AI roadmap [414-420][435-438].


Key points

Major discussion points


Compute infrastructure is the bottleneck for sovereign AI.


Sunil Gupta emphasized that the lack of abundant GPU compute has been the core obstacle and described how Yotta’s Sovereign Cloud has built a shared pool of ≈10,000 GPUs, with the government now aggregating ≈38,000 GPUs and planning to add 20,000 more [46-78][224-236].


A robust data stack and interoperability are essential layers.


Kalyan Kumar highlighted the need for centralized data platforms, vector-DBs, edge inference and data contracts to ensure high-quality, shareable data [96-108]. Ganesh Ramakrishnan added that interoperability across models, datasets and institutions enables participation, scaling and the creation of data products [151-168].


Adoption hurdles stem from ROI uncertainty, organisational friction and lack of executive sponsorship.


Brandon Mello identified “ROI invisibility,” siloed departmental processes and the “champion problem” as reasons why 95% of AI pilots never reach production [115-142]. He later stressed the importance of solving real-world use cases, consolidating tools and handling India’s multilingual data to drive adoption [335-351].


Skill development and a shift from services to product/IP creation are required.


Kalyan Kumar argued that India must pivot from a service-only model to building its own IP, investing in smarter engineers, research talent and new semiconductor capabilities [266-310]. Ankit Bose noted NASSCOM’s initiative to up-skill 150,000 developers and revamp curricula to produce specialised AI talent [312-319].


Collaboration between government, academia and industry is the backbone of the sovereign AI ecosystem.


Ganesh stressed the need for interoperable standards, consortium-based research (nine academic institutions) and co-design of models and data contracts [151-170][193-205]. Sunil described the government’s “shared compute facility” that empanels multiple providers, creating a public-private partnership for scaling AI resources [224-236].


Overall purpose / goal


The session was convened to launch the Sovereign AI research report and to surface concrete actions that India, and the broader Global South, must take to build a self-reliant AI ecosystem. Panelists were asked to pinpoint the single most critical step for achieving sovereign capability, covering infrastructure, data, talent, adoption and collaborative governance.


Overall tone


The discussion began with a formal, celebratory tone (report launch, introductions) and quickly shifted to a technical, problem-focused dialogue about compute shortages and data challenges. Mid-session the tone became solution-oriented and collaborative, with panelists proposing concrete initiatives, partnerships and skill-building programs. It concluded on an optimistic, call-to-action note, urging participants to join the consortium, contribute to the QR-coded roadmap and continue the partnership.


Speakers

Speakers (from the provided list)


Ankit Bose – Head of AI, NASSCOM (National Association of Software and Service Companies) – Moderator of the panel and expert on AI ecosystem development and developer enablement. [S4]


Sunil Gupta – Co-founder, Managing Director & CEO, Yotta (Yotta Data Services) – Builder of Sovereign Cloud infrastructure and large-scale GPU compute facilities in India. [S4]


Kalyan Kumar – Executive Vice President, Head of Software Product Business, HCL Software – Leader in enterprise software products, data platforms, and sovereign-by-design solutions. [S6]


Ganesh Ramakrishnan – Professor, Indian Institute of Technology Bombay – Researcher in AI foundations, interoperability, and large-scale language models. [S9]




Brandon Mello – Founding GTM Executive, GenSpark.ai – Entrepreneur driving agentic AI solutions for knowledge-workers and enterprise adoption. [S12]


Speaker 1 – Event moderator/host – Introduced the session, announced report launch and MOU, and facilitated the panel discussion.


Additional speakers (not in the provided names list)


Dr. Manisha V. Ramesh – Pro Vice-Chancellor, Amrita Vishwa Vidyapeetham – Representative for the launch of the Sovereign AI research report.


Dr. Shiva Ramakrishnan – Head, AI Safety Research Lab, Amrita Vishwa Vidyapeetham – Co-speaker for the report launch.


Professor Suresh – Academic representative (specific affiliation not stated) – Invited to the stage for the report launch.


Bharat Jain – Listed as a panelist (affiliation not specified), but most likely a mis-transcription of BharatGen, the nine-institution sovereign foundation-model consortium referenced throughout by Ganesh Ramakrishnan.


Bhaskar Gorti – Executive Vice President, Tata Communications – Panelist discussing telecom and communications aspects of sovereign AI.


Brenno – likely a mis-transcription of Brandon Mello – referenced in the transcript but covered under Brandon Mello above.


Other unnamed panelists – The transcript mentions “Mr. …” and “Ms. …” without full names; these are not listed due to insufficient information.


Full session report: comprehensive analysis and detailed insights

Opening & report launch – The session began with the formal launch of the Sovereign AI research report produced by Amrita Vishwa Vidyapeetham. The moderator thanked the audience, invited Pro-Vice-Chancellor Dr Manisha V. Ramesh and AI-Safety Lab head Dr Shiva Ramakrishnan to the stage, and then introduced the panel (Prof Ganesh Ramakrishnan, IIT Bombay, representing the BharatGen consortium that includes IIM Indore; Sunil Gupta, co-founder, MD & CEO of Yotta; Bhaskar Gorti, Tata Communications; Kalyan Kumar, CPO, HCL Software; Brandon Mello, GenSpark) [1-5].


Key “single-most-critical-factor” answers


* Sunil Gupta – Compute scarcity – Gupta identified the shortage of specialised GPU compute as the decisive bottleneck. He noted that when large-scale generative models emerged, India had strong software, services and talent, but “what India was not having at that time was compute” [54-58]. He added that the Indian language platform Bhashini was recently migrated from a hyperscale cloud to Yotta’s Sovereign Cloud [54-58]. Gupta quantified the gap: the shared compute pool currently holds ~38,000 GPUs, with an additional 20,000 announced, yet “millions of GPUs” will be required for nationwide inference across sectors [70-78][224-236][237]. He argued that 95% of the country’s use cases can be served by a 20-to-100-billion-parameter model, underscoring the urgency of scaling [70-78]. Only 3% of India-generated data is hosted in-country while India creates/consumes 20% of global data [54-58]; therefore he called for government-funded subsidies for the first inference cycle to jump-start sectoral adoption [240-247][250-254][260-267].


* Kalyan Kumar – Interoperable data stack – Kumar stressed that compute alone is insufficient without a robust, interoperable data layer. HCL’s unified platform combines centrally managed vector-DBs, edge-ready AI engines, and patents acquired from Actian and CWI Netherlands [96-108][98-103]. The platform emphasizes “data products, contracts and catalogs” to ensure quality, accessibility and provenance as inference moves to the edge [105-108][171-176].


* Ganesh Ramakrishnan – Interoperability & data ownership – Ganesh highlighted the need for layer-wise interoperability to encourage participation, offer alternatives and balance fidelity-latency trade-offs [151-156]. He cited the nine-institution BharatGen consortium, which includes IIM Indore and co-designs multilingual foundation models for 22 Indian languages using mixture-of-experts architectures [163-170][188-212]. To protect creators, he invoked the principle “jiska data uska adhikar” (“whose data, their right”) and referenced his book Samanway (meaning “bringing all languages together”) [166-170]. He also mentioned his recently published book Informatics and AI for Healthcare [112-115] and advocated “glass-box” models that expose provenance and enable trustworthy AI [151-156].


* Brandon Mello – Adoption barriers – Mello shifted the focus to organisational frictions that keep AI pilots in sandbox mode. He identified “ROI invisibility” – the inability of CFOs to quantify returns – as a key blocker, noting that only one in ten executives has tools to measure AI ROI [119-124]. He added “data-trust and compliance friction” from siloed departmental ownership and the “champion problem” where lack of executive sponsorship stalls projects [129-143]. Successful adoption, he argued, requires solving real-world use cases, consolidating fragmented tooling, and supporting India’s multilingual landscape [335-351].


Deep dive on compute infrastructure – Building on Gupta’s points, the panel described the government empanelment process that lets multiple providers contribute GPUs at market-determined price points, creating a low-cost commodity for startups and research [224-236]. The current pool of ~38,000 GPUs (plus the announced 20,000) is a first step; the panel urged public funding not only for model training but also for the inference phase, arguing that subsidised early usage will generate revenue-producing use cases and later attract private investment [260-267][239-254].


Talent & IP strategy (Kalyan Kumar) – Kumar argued that India must pivot from a service-oriented model to building proprietary IP. He recalled HCL’s 2015-16 decision to “build products for ourselves” and the subsequent acquisition of talent and assets [266-283]. In his words, “you need fewer people, smarter people” [286-290]. He called for investment in fundamental physics and quantum research to reshape future compute paradigms [286-304][298-304], and highlighted the HCL-Foxconn joint venture as a path to domestic semiconductor fab capacity [441-452].


NASSCOM up-skilling & curriculum reform (Ankit Bose) – Bose outlined a complementary programme targeting 150,000 developers over the next six months, together with a curriculum overhaul (BTech, MTech, MCA, BCA) in partnership with MIT and industry bodies to create specialised AI tracks [312-319][326-329].


Sector-specific perspectives (Kalyan Kumar) – Kumar outlined four stakeholder lenses:


1. Consumer AI – data-control mechanisms, regulator-led data-rights frameworks.


2. Enterprise AI – metadata-first approaches, data-product marketplaces.


3. Government services – sovereign platforms for citizen services and public-sector AI.


4. Critical national infrastructure – air-gapped deployment, defence-grade security, and the need for choice of infrastructure and human-centric AI [96-108][266-283].


Closing – The panel invited participants to scan the QR code displayed on the digital backdrop to provide feedback on the report and contribute to the forthcoming Sovereign AI policy document [435-438]. The session concluded with the signing of an MOU between NASSCOM and Amrita Vishwa Vidyapeetham, a group photo, and a reaffirmation of the commitment to advance India’s AI capabilities for both national and Global South impact [414-420][454-455].


Session transcript: complete transcript of the session
Speaker 1

Thank you. Thank you. Hello and good afternoon, everyone. Thank you for joining us for this session on sovereign AI for India. Before we begin the panel discussion, we are happy to announce that there will be a launch of the sovereign AI research report by Amrita Vishwa Vidyapeetham. May I invite the following representatives to kindly join us on stage for the release of the report. From Amrita, we would like to invite Pro Vice Chancellor Dr. Manisha V. Ramesh and, if available, head of the AI safety research lab Dr. Shiva Ramakrishnan, and any other representatives from Amrita Vishwa Vidyapeetham that you would like to invite on stage, sir. Alright. Alright, Professor Suresh, if we could please have you on stage. I would like to invite Mr.

Ankit Bose, Head of NASSCOM AI, on stage as well. We will… Thank you so much. Yeah, yeah, absolutely. You can take a seat, sir, if you want. Thank you. Thank you. Thank you, everyone. We now move into the panel discussion. To guide this conversation, we are joined by Mr. Ankit Bose, head of NASSCOM AI. Joining him today are our distinguished panelists: Professor Ganesh Ramakrishnan from IIT Bombay and BharatGen, Mr. Sunil Gupta, co-founder, MD, and CEO of Yotta, Mr. Bhaskar Gorti, EVP, Tata Communications, Mr. Kalyan Kumar, CPO, HCL Software, and Mr. Brenno Mello, founding GTM executive, GenSpark. Ankit, over to you. Professor Ganesh will be shortly joining us in two minutes. Thank you.

Ankit Bose

So hi everyone, I think we had a good launch and we have a very strong panel. So Ganesh was on the way and he is still stuck in the traffic; he is walking in. So meanwhile we start the discussion, I think, you know, happy to have a very strong panel. So why don’t we do this, we start with the introductions, right? I think Kalyan, we can start with your quick introduction. So Sunil and then Brandon.

Kalyan Kumar

Yeah, hi, Kalyan Kumar, call me KK. I run the software product business for HCL, HCL Software. We are the largest India-headquartered enterprise B2B software company with about 10,000 customers and about 1.5 billion dollars of revenue. And very intricately involved in building software products which are sovereign by design.

Sunil Gupta

Hello, good afternoon. Good afternoon. Good afternoon. My name is Sunil Gupta. I am co-founder and CEO of Yotta. So we run data center campuses. We have built Sovereign Cloud in India, which is running a whole lot of mission-critical Government of India applications. Recently, we migrated Bhashini from a hyperscale cloud to our Sovereign Cloud. Our claim to fame in the last two years is that we have got thousands of NVIDIA GPU chips in India. And all the models which you are hearing getting launched in this summit, be it the Sarvam model, IIT Bombay’s BharatGen model or the Soket model, they all have been trained on our GPU clusters, and now they are being made available for public use.

Thank you.

Brandon Mello

Hello. Good afternoon. My name is Brandon Mello. I work for Genspark.ai, a Palo Alto-based company. We have been around for about 10 months. We are the fastest-growing AI company right now in the world. We just broke $200 million in ARR. Our solution has been incredibly well received and adopted in the market. India is our third-largest market, and our solution is to drive adoption from the bottom up by bringing agentic AI to the knowledge worker. Thanks for letting me be here.

Ankit Bose

Great, great, great. And hi, folks. I’m Ankit Bose. I head AI for NASSCOM. So, whatever NASSCOM does in AI, I support that, I lead that, right? And we will be joined by Ganesh, who is from BharatGen. He’s leading the, you know, sovereign AI model-building effort in the country, right? So, meanwhile, until he joins, let’s start. I think, Sunil, let me start with you, right? The first question I think I would want to ask after five days of immense brainstorming around, you know, AI for the country, AI for the world, right? You know, what is the top thing you say which, you know, India has to do, right, to build its sovereign capability, not only for the country, plus for the Global South?

Sunil Gupta

Yeah. Ankit, if I take everybody just two years or maybe two and a half years down the line, when ChatGPT got on the world scene, basically AI capability came into consumer hands. A big debate happened in India’s, obviously, government circles, industry circles, telecom circles, technology circles, everywhere: that while India has got everything which is needed to succeed in AI, like we have been software and services leaders for the last three decades, we have a startup ecosystem, on the skill-set index of mathematics, science, engineering we are always the best, and as a market we are literally close to 1 billion people carrying smartphones, creating and consuming content. AI ultimately resulted, in most of the cases, you know, in some apps which will be giving some productivity to us.

So both on the demand side and the supply side, including data sets, like India will have the best data sets available. So everything India has, but what India was not having at that time was compute. Because AI does not run on regular data centers or regular CPU compute; it required specialized GPU compute. So I would say that was the biggest problem, and of course you have to take care of the entire stack: models, data sets, applications, everything. But the core problem to solve for taking AI to the masses was: how do you make compute available in an abundant way so that we don’t think of that? That should become just a hygiene which is always available.

And that’s the problem we tried to solve. You know, way back at that time, Jensen was in India. I happened to get to meet him, and he said, we as NVIDIA are fully committed to India. We can extend your priority allocation. We can give you engineering support, everything. But somebody has to take a step forward of not only putting in your data centers and power and everything; you also need to put in chips, and we will give you everything. And from there to now, today we are running almost 10,000 chips. You know, as I said, the majority of the sovereign models which you are hearing getting launched in India, they have been trained on our GPUs.

But the real thing, I would say, is starting now. Many of these models are great; you must have heard of Sarvam’s model beating Gemini and ChatGPT on many of the benchmarks. And they are making them absolutely for India use cases, like OCR, you know, the handwritten notes and all that thing, how do you convert them and all that stuff. So these are real India purpose-built use cases and models. When they start scaling, when they start getting adopted by the masses. We have seen one UPI change our lives. Imagine we have UPI in 50 different sectors in the country; a 50-UPI movement will come into India. At that time, the number of GPUs required will be millions. Today we are happy as a country that we have X thousand GPUs.

But if a single company like SpaceX or like Meta can have 1 million GPUs, India as a country requires multiple million GPUs. So while we are working on all the upper layers of the stack, and Indians are very good at that: models, data sets, applications. We need to solve this issue. We are taking care of infrastructure problems; we are taking care of railways and roadways and airports. We also need to create this digital infrastructure, take care of that, make it available abundantly to every startup, every, I would say, academic community. We make it available at a very low price. The government’s IndiaAI Mission is doing a yeoman’s role. On one side, they have asked people like us, incentivized us, to invest into the GPUs.

But they are taking GPUs from us, putting in their own money, putting in their own subsidy, and then giving it to the Sarvams and IITs and Sokets of the world. And they say, now you don’t have to bother about money; just go and make India’s model. And the result is there to see: in two years, India has come a long way, and we have a long way to go. The compute problem has to be solved.

Ankit Bose

Great. Thank you. Thank you, Sunil. Same question to you, KK. You know, what is the one thing you feel can add the edge, right? The whole.

Kalyan Kumar

When you look at sovereign, I think the Minister of Electronics and IT, Vaishnaw ji, he was mentioning the five-layer stack, right? And that’s where, what Sunil mentioned, in an easier way I use the word infrastructure, which can combine energy, power, cooling, the whole stack. So that’s providing that layer, and then the whole model piece. I think as you train and when you start to deploy at scale, a couple of things become very interesting. You need to start to also build a data stack: data platforms, vector DBs, edge vectors. I personally think you can do as much centralization as you want, but the way the data consumption model is going, it is going to get highly distributed, going down into the edge, correct? So you need a very different kind of inferencing and those capabilities. So you need a data layer. Something which we are doing is very interesting: outside of Oracle and IBM, the only other company which has all the patents for databases is us, because we acquired Actian.

So Actian owns the original patent of Ingres. And every derivative today, whether it is Postgres or every one of them, is basically an Ingres query processor derivative, including SQL Server and others. Like that, we also acquired an asset from CWI in the Netherlands. So we have a VectorDB, the original vector engine. So we’ve been building a lot of that asset portfolio, HDB, and in April we’re going to release a localized vector AI engine, which again can run on the edge, because as the AI PCs become more and more common, the edge becomes more and more important; so, building that. And building the data disciplines. I think that’s a very important layer. A lot of times what happens is we worry about infrastructure, and then we think about model, and then app.

The data platform is going to become very important, because as we’re building the data platform, the enterprise will only scale if you get your data-centric approach: data products, data contracts, data catalogs and those kinds of things. Because finally the AI use case is going to be built on how good the quality of your data is. Yeah.

Ankit Bose

Great point. I think compute, data, the data stack for the country, I think very important. Let me come to Brandon. Again, the same question, right? If India has to build sovereign AI for the country and the Global South, what’s the top one thing you will say which will help the whole cause?

Brandon Mello

Yeah, so it’s interesting. MIT last year ran a big report and they said 95% of AI pilots actually never made it to real production, right? So in my point of view, this is never really a tech problem. It’s really a production problem, right? So in my point of view, actually, like, when I look at our solution, right, like we are able to deploy over thousands of companies in only eight weeks, right? So when I look at that, it really comes down to three reasons why this is happening in the industry, right? And the first one is what I call ROI invisibility, right? So when you look at companies right now, it’s really easy to get a budget for a pilot, right?

But what comes to the reality is can they get a budget to get the project done, right? So the data that I have to share with you guys, which is astonishing, is a third of CFOs really nowadays, they cannot quantify ROI inside of their organizations, right? And only one out of ten can actually have tools that can actually measure ROI, right? So. What ended up happening is whenever you talk to those organizations. right? Companies, and you ask, like, how are you actually going to measure productivity gains or how are you going to, like, they don’t have the answer, right? So it ends up, like, what’s the baseline? Like, they don’t have the answer, right? So whenever you bring to, like, the CFO to get that project approval, ends up on the project never getting approved and ends up on that cycle of, like, it ends up getting stuck into a pilot, right?

So when you look at what, you know, number two is, like, I think it’s data and trust and compliance friction, right? I think there’s a huge red tape in terms of what happens inside of organizations, right? I think that it’s very departmentalized, where, like, each part of the organization is trying to solve for each part of the department, right? So when I look at IT, it’s trying to solve for IT. Procurement is trying to solve for procurement. Because no one’s really trying to solve that as an organization, the project ends up stalling. So something that can essentially take a few months to resolve ends up taking six months to a year.

And like I say in sales, time kills every deal. Last but not least, I think my third point is the champion problem. I think there’s a severe issue within organizations nowadays is there’s really no executive sponsorship. And whenever you don’t have executive sponsorship, especially for AI opportunities, deals never get approved. And people, especially at the bottom tier, they don’t understand what’s going on. And when there’s no clear alignment within the middle tier management, deals never get approved.

Ankit Bose

Great. I think let me summarize probably the three points: that, you know, you need closely collaborated teams, right, with a single point of view, with executive sponsorship. I think that will solve the adoption piece at least, right? Let me come to you, Professor Ganesh, right? Ganesh, I think what we are discussing is, we have discussed a lot on AI for the last five days, for India, for the globe, you know, and then we had three points of view. I asked them, give me one top thing. You heard probably from Brandon and KK and then from, you know, Sunil. What is your top one take which India should do so that we can lead the sovereign race for the country and the globe?

Ganesh Ramakrishnan

I would suggest interoperability at every layer. I think it was also alluded to by the earlier panelists. Interoperability encourages participation, and in the words of the PSA, the vision is meaningful participation, right? Interoperability also helps you present alternatives, because there is no one-size-fits-all, and you need to also ensure that in the trade-off between fidelity and latency, or between sensitivity and specificity, you are able to find the right sweet spot which is suitable for you; you can pick something that is appropriate. Just on a lighter note, I was driving from the PSA office and there was such a traffic jam, which most of you experienced, so I exercised my sovereignty and I started walking. So you find alternatives when you think sovereign. Three kilometers; that’s why I was late. So there are alternatives, and also provisions for human participation, which is much better. There could be places where AI could be substitutional, but many other places where you may want it to be just supplementary or complementary.

So alternatives is another thing that interoperability provides for. And I think the very key is scale-out. I mean, if just by scaling up we could cater to everyone, great; I would say that at least matches one checkbox, which is people being catered to. But even we are not there. Scaling up is not going to cater; the capabilities are not there. But even if it were, hypothetically, I think participation would also ensure that people are part of the process, that it’s informed. I mean BharatGen, I take pride in one of our consortium members at IIM Indore. We are a consortium of nine academic institutions. And the Institute of Management, what are they doing? They do a fabulous job in going to many of the second-tier cities, going to people who have data, and engaging in conversations and education.

That data is an asset and you could actually transform that asset into IP generation, and not just source data. So the dialogue, right, and informed decision making is where participation is encouraged when you have interoperability.

Kalyan Kumar

I just want to add to what he said. He made a very interesting point. How do you monetize data, correct? And this is something which needs a very different approach, because today what happens is you are sourcing data. And I think the PM yesterday made a very amazing statement, correct? He’s saying, jiska data uska adhikar (“whose data, their right”), correct? Very interesting. But if you look at what he’s saying, the creator of the data, the producer of the data, the consent provider for the use, all have a role to play, and that’s why I’ve been using this word called data product or a data catalog.

So you need a catalog first. You need to build a data product and then set up a data contract, which is fundamental for interoperability. Because if that gets solved, I can choose my own personal data and say, from my data catalog, you can have five things to access. I think India has proven that in an amazing way with identity and payments. So I think we can actually set up an environment where you can really build this. And the data benefactor is also the same person.

Ankit Bose

So great point, Professor. I think it probably means definitely removing or optimizing the various layers and taking it to the last person in the rank. And it will help scale to the 1.4 billion that we need.

I think, thank you for that. Let me ask you again a second question. I think this is a very, very direct question. As a country, we are building our foundation models. You are one of the persons who is building foundation models for the country. And at large, we have built sub-500-billion-parameter models, and globally they are going to 5 trillion or plus. The comparison is so huge, right? What do you think India’s moat can be when we are really, you know, in such a situation where we are at a disadvantage, though we have to aggressively, you know, handle that?

Ganesh Ramakrishnan

Yeah, so the other important takeaway, which probably, you know, addresses some part of what you’re asking, is cooperation, right?

Collaboration. Collaboration, honestly, is not just a transactional process. It begins here: the will to understand the other side. I just published a book, Informatics and AI for Healthcare, with my colleague Shetha Jadhav. What we did across the entire book was empathize with the whole life cycle of a healthcare practitioner, and we tried to map every ML and informatics example to healthcare, and vice versa; there was reciprocation from the other side as well. It was a very interesting exercise, and that is how co-design happens. So collaboration is actually how you do innovation, and China has shown in many ways, in contrast to the US ecosystem, that co-design can lead to very innovative ideas. Co-design is often lacking even at the level of algorithms and infrastructure, where new algorithms can come up, all the way to the application layers. Collaboration also comes by creating an ecosystem where people can participate. Since you alluded again to BharatGen: we have a consortium of nine academic institutions, and the whole collaboration runs through a Section 8 not-for-profit company, which engages with for-profit entities as well as the academic institutions; 60 full-time employees work with 100-plus researchers and master's students. It has been a very profound exercise in a very short span of time. We may say we are late, since you brought up the landscape outside, which is at trillion-plus parameters; that is also our North Star, and at least from the IndiaAI vision the goal is to get to at least 1 trillion parameters. But even for the 17-million-parameter model we have released, a lot of research due diligence went into the architecture choice, and we are very proud of whatever model we released. If you have two shared experts, one of them catering to languages and mixed code and the other catering to domain, that due diligence was done based on the Indian context. The fact that we covered 22 languages in our speech model and our text-to-speech model: again, we explicitly captured the common phonetic vocabulary of Indian languages. And that is only possible through this process of empathy.

A linguist has to empathize with the computer scientist and vice versa. If we do that, we can actually create magic. Believe me, you can create magic. We just have to break our silos, and the biggest silos are sitting right here. In fact, an endorsement of this came when we built our LLM-enabled speech-to-text model. We had a projector layer that projected from speech to text, and we used a mixture of experts for the projection. It was very interesting: the experts for Hindi and Marathi performed very similarly; they were in fact the same expert, shared between the two. Whereas for Telugu, there was collaboration between the Hindi and Tamil experts. So data and domain knowledge are all reinforcing each other.
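The expert-sharing behaviour described here (two languages landing on the same expert while a third borrows from others) can be illustrated with a toy mixture-of-experts gate. This is a pure-Python sketch; the gate weights and the two-dimensional "phonetic" feature vectors are invented for illustration and have nothing to do with the consortium's actual model:

```python
import math

# Toy mixture-of-experts routing. Inputs are scored against per-expert gate
# weights, a softmax turns scores into probabilities, and the top-k experts
# win. Inspecting which expert wins for which input is how one observes
# "expert sharing" across related languages.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def route(x, gate_weights, top_k=1):
    """Return the indices of the top_k experts for input vector x."""
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in gate_weights]
    probs = softmax(scores)
    ranked = sorted(range(len(probs)), key=lambda i: -probs[i])
    return ranked[:top_k]

# Three experts over invented 2-dim features; each row is one expert's gate.
gate = [[2.0, 0.1],   # expert 0
        [1.9, 0.2],   # expert 1 (gate very close to expert 0)
        [0.1, 2.0]]   # expert 2

hindi   = [1.0, 0.0]
marathi = [0.9, 0.1]
telugu  = [0.1, 1.0]

# Hindi and Marathi route to the same expert; Telugu routes elsewhere.
print(route(hindi, gate), route(marathi, gate), route(telugu, gate))
```

In a trained model the gate weights are learned rather than hand-set, but the diagnostic is the same: log the winning experts per input and look for languages that consistently share one.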

So this is actually a time when we can break the language barrier. In my interaction with him on 8th January, I gifted him a book from our consortium called Samanway. Samanway stands for bringing all languages together, and he said: we need to use AI also to show the strength of India; it is not just AI for India, but AI by India.

Great, great. The point about collaboration, and the story we have all heard, that a single stick breaks easily but a bundle of sticks does not, is very true, and that is the moat for India: collaboration, building that collaborative effort between different universities. Bringing nine different universities together to work is gigantic, and what you have created is amazing. We are also very happy that three days back we announced an MOU with a heritage foundation based in the US; we got a lot of support from people in the Bay Area. Once you open up for collaboration, you will find there is support from around the world, and that is the most important thing. Great, great, great.

Thank you, thank you, Professor Ganesh.

Ankit Bose

So let me come to you, Sunil. I think we all agree that compute is one of the biggest pillars, and the government is doing its bit. But in terms of compute for the country, for sovereignty: can it be a shared commodity? Can different actors in the ecosystem come together and build it? How do we solve that problem? Because, as you rightly said, it is a few thousand versus a few lakhs; the gap is very high.

Sunil Gupta

Number one, the government said: you all come and empanel with us at the right price point and the right quality, and you declare how many GPUs you can give. They were not forcing us; they said, you decide how much you want to give. We all got empaneled and contributed GPUs, which were made available to startups. Then the government said: every quarter we will come back and encourage new providers to join the facility, and existing players can also top up their capacities. And each time, because of market forces, as quantities and supply increase, pricing starts reducing; the government says a new player can come in and reduce the price.

Existing players then have to match, and the government keeps empaneling more and more capacity. That is what has resulted in the 38,000 GPUs the government talks about: the shared compute facility, which is nothing but a combination of the compute capacity created by multiple providers like us. And yesterday the Prime Minister announced that 20,000 more are being added to this facility. So I would say both concerns are addressed: the last 18 months have proven this is doable, and the technology supports it. Technically, and Ganeshji can speak very authoritatively on this subject, the same model can be trained across multiple different clusters, and inferencing, of course, can be done in multiple different places. But even without that, the government has acted very democratically: IIT, we will place you with this service provider; Sarvam, with this one; Gan, with that one. The government is making sure it encourages industry to invest in creating this capability, and because we are getting business, we are scaling up and investing more and more. And they are making it available to people, because India needs its own models. We may use frontier models for certain purposes, but as the minister was saying, 95 percent of the country's use cases can very well be served by a 20-billion- to 100-billion-parameter model. Of course, Ganeshji also carries a mandate to create a trillion-parameter model for whatever else the country requires. BharatGen's success and Sarvam's success have proven that India can do it. So I would say the shared compute framework is proven; we just need to scale it up. And my request to the government, which I think they are acting on, is: don't limit it only to the training of models. Training is one step, now done; these models will next go to the masses for adoption, and that requires millions of GPUs. I am repeating myself, but this is where the government needs to fund the first cycle of inferencing on these models. When users start adopting, say, an agriculture use case, a healthcare use case, or an education use case, and multiple UPI-like use cases will come up, it will take time for users to start adopting them, accepting them, and making them part of their lives. Only then will users be happy to pay 10 paisa per transaction, or maybe a 50-rupees-per-month subscription; at that point these models and use cases will become self-sufficient in generating revenue. Until then they will need government support. So at least for the first cycle of inferencing, maybe one or two years, the government should fund not only the training of the models but also the first phase of inferencing on them, so that adoption happens and revenue models emerge. After that, the government can let the private sector invest and go back to its original role of regulator.
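Gupta's case for subsidising the first inference cycle is, at bottom, a unit-economics calculation. Here is a back-of-envelope sketch in which every number except the quoted 10-paisa price point is an assumed placeholder, not a real price or throughput figure:

```python
# Back-of-envelope inference unit economics. All inputs below are invented
# placeholders for illustration; only the 10-paisa-per-transaction revenue
# figure comes from the discussion. The point is the shape of the sum,
# not the specific values.

gpu_cost_per_hour_inr = 150.0    # assumed GPU-hour rate (placeholder)
tokens_per_second = 2000         # assumed serving throughput (placeholder)
tokens_per_transaction = 800     # assumed size of one voice/chat query

seconds_per_txn = tokens_per_transaction / tokens_per_second
cost_per_txn = gpu_cost_per_hour_inr / 3600 * seconds_per_txn
revenue_per_txn = 0.10           # the quoted "10 paisa per transaction"

print(f"cost per transaction:   INR {cost_per_txn:.4f}")
print(f"margin per transaction: INR {revenue_per_txn - cost_per_txn:.4f}")
# If cost per transaction exceeds INR 0.10 at real (unsubsidised) rates,
# that gap is what a first-cycle subsidy would have to cover until volume
# drives prices down.
```

Plugging in real GPU-hour rates and measured throughput would show how large a per-transaction gap the proposed subsidy needs to bridge.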

Ankit Bose

Great. So let me augment that with a few thoughts. The IndiaAI Mission has really lit a single fire, and this fire is going to every state in the country: all 28 states and all eight union territories are building AI CoEs, and the mandate for each CoE is to provide compute. Like a small wildfire it will spread all across the country, which will be phenomenal. But at the same time we have to keep up the pace; pace is the one thing.

Sunil Gupta

Absolutely, Ankit. Two years back, when I said I was putting in 8,000 GPUs, everybody started laughing, because we were starting from a base where India had no GPUs. Today we comfortably say India will be going to 50,000-60,000 GPUs, but even today I can tell you India requires millions of GPUs. In the US, just three or four deep-tech companies collectively own millions of GPUs. India has 1.4 billion people, of whom 1 billion carry smartphones, creating and consuming content every single minute. And, as Ganeshji will tell you, it is all heading towards voice-based AI, because India's AI will be voice-based: people talking in their own native language, or a mixture of Hindi, English, everything.

And they will be comfortable doing that, rather than typing in their native language on a screen, which is not so easy. Innovations are being done so that even from a feature phone or a regular telephone line, without a smartphone, you will be able to talk to an AI model at the back end. When you are talking about 1.4 billion people coming into the AI fold for multiple use cases, just imagine the number of GPUs that will be needed for inferencing, and how many for training the various sectoral models. So you are right, Ankit: what we have done in the last two years is kudos to the whole ecosystem, the government, and all of us.

But we need to keep on building for the next 7, 8, 10 years. Just to give one or two more data points: India is creating and consuming 20 percent of the world's data, one-fifth of the world's total, yet only 3 percent of that data is hosted in India. That shows the scope of the infrastructure India needs to build, both at the physical data-center level and at the compute or GPU level. Because we don't want any single country or any single company to start dictating our digital destiny; we need to be as sovereign as possible.

Ankit Bose

Thank you, Sunil. Kalyan, let me come to you. I think one big base for sovereignty is the skill set: to research, develop, deploy, and do all of that responsibly. HCL is one of the companies that has done that over the last two or three years. What would be your nuggets for how other companies and other countries can do the same?

Kalyan Kumar

So, look at what India is known for. India is known, historically, for capability: NASSCOM, right? But that capability was, historically and for the majority of the business, capability for hire. You build capability to build things for others, and that has been the core business. If another country thinks about sovereignty today, 50 percent of the world's tech, engineering services, development, and operations talent is sitting out of India; you see how the GCCs are growing. But where is the pivot? The pivot, as the Professor was saying, is that you have to pivot towards build. We have always leaned towards service. So: build, research, develop, own your own IP, and make India for the world.

That is very important, and it is what our journey has been. What we did in 2015-16, and we have one advantage here, being a company run by a single majority shareholder: Mr. Nadar had a very ambitious vision. He said, we are building products for others; we should start building for ourselves. That was 2015, a very conscious strategy, and he realized that if you want to play in the global market, you need market permission and market access, because people will only buy from you if you are a software product company. Hence the whole idea of acquiring intellectual property into India. Because if you really look underneath these pieces, you could build on open source and other stacks, but suddenly some of these open-source companies are getting acquired and becoming closed source.

This is becoming a very interesting pattern, and suddenly some of these technologies are getting classified as dual use. They will say: oh, this is dual-use tech, so I can only release this much. So from a skills standpoint, and I am making a controversial statement here, you need fewer, smarter people. You need engineers more than coders. What is happening is that we are producing coders; you need engineers, people who apply systems thinking, people with a research bent. I meet MBA students and ask them, what did you do before? "I did engineering." Then why did you waste four years of your life if you only wanted to do an MBA? Why are you not going deeper?

Why don't you specialize in a domain? Those are fundamental things, I would say. The big leap, I think, is that India can solve something very interesting, and, as he was referring to the PSA, that is quantum. Given the kind of compute needs we have, and the energy GPUs consume, you could completely change the computational paradigm. But that needs fundamental science: research, physics. And no one wants to study physics; go back 20 years in this country and everyone wanted to do coding. Those are the fundamental skills. So what we are doing, in a very small way, is acquiring and building talent and research pools.

So 50 percent of HCL's software product engineering is in India, but my second-largest engineering center is in Rome, the third is in Israel, and then I am in Perth, Austin, and Chelmsford outside Boston. Why? Because if global companies can come to India, acquire talent to build and research, and then take the IP to the US, I am doing the reverse. For AppScan, a code-security product, the security heuristics are built in Israel and the SaaS UX is built in Boston, but the core engineering is in Bangalore and the IP is registered in India. That is the very different way we are moving: we are now tapping global talent to build for us. We are still only a billion and a half, we are not big, but we are in 130 countries. It is a step change and a long journey, and it requires getting away from short-term, hire-people-to-get-things-built thinking. You have to move to a very different model, and that is what we are starting within the larger scheme of HCL. I think we are walking the right path, continuously acquiring assets and building on them.

Ankit Bose

Let me add what I am seeing at the skill level. The persona NASSCOM is focused on, at least, is the developer, and the way we code is changing. So NASSCOM has made a concentrated effort to help developers learn the new way of coding and redefine the whole SDLC. As a target, my team and I have taken on enabling 150,000 developers across the country in the next six months: making them AI-enabled and AI-ready, helping them unlearn the old way and learn the new one. That is one thing. But I should also make everyone aware that there will be announcements sometime soon.

With the MIT and the education industry, we are rewriting the whole technical curriculum: BTech, MTech, MCA, BCA. We are adding more specialization, as was rightly said, because we need specialists, not generalists. An engineer studies 48 subjects in four years; at the end, what is he specialized in? It comes down to luck: the group he gets, the project he takes, whatever job he lands. That is what we are changing, and announcements will be happening soon; that is what is going on in the background. Coming back to you, Breno: you have a product so simple that anyone can use it, build agents with it, and benefit from it.

Let me ask you this. One big piece of AI really maturing and having impact is adoption, and you started with the statistic that 95 percent of projects fail or never go to production. If we have to do adoption at scale, what are the top issues you see, and how do you suggest the companies and folks here mitigate them?

Brandon Mello

Yeah. So I'll give you three, one of which is very specific to India. They relate to our solution, but I think they are real use cases, because the proof is in the pudding. One: you have to solve a real use case, something that actually changes people's lives. AI is complex and people are still trying to figure it out, so it needs to fit into people's everyday lives. In our case, for example: look at Cursor or Lovable; they changed the lives of software engineers through vibe coding. In our situation at Genspark, we looked at people producing office work.

People producing Excel, PowerPoint: essentially the mechanical parts of everyday office work. Because if you think about it, every time you do an office task, much of that work is very mechanical, and that is why we saw such massive growth in our solution. So to your point, adoption comes from something that can change people's lives in a very simple way. The second thing is consolidation of tools. From the time we wake up in the morning, most of us pick up our phones and are inundated with messages and apps; then we go to our office work, where we have probably a hundred tools to touch. From our research, at work people waste on average two and a half hours a day just flipping between different solutions, and that causes loss of context. So if you can consolidate tools, that also drives adoption. The third one, especially in India: there are a lot of different languages in this country, which you brought up.

So in this country especially, LLMs really struggle to get the language right, particularly with all the dialects this country has. Being able to really naturalize that, and bring sovereignty here, is very important. And last but not least, people are very scared about data: how, once they bring data into AI, that data is going to be treated. So the solution needs to bring that sense of security about how the data will be managed.

Ankit Bose

Great. Thank you, Breno. For the last segment, one last question, 30 seconds each, starting again with Breno since you have the mic. AI is not a short game; it is a game for the next five years, ten years, decades, probably centuries. What is the challenge we as humanity have to mitigate, so that we don't end up aligned with something hazardous to us?

Brandon Mello

Yeah. Actually, I was having breakfast the other day and someone asked me the exact same question. I think it is how human beings interact with AI. We are still trying to figure out how to properly interact with AI, and with the speed at which AI is evolving, we are still uncertain how to manage that. The line in the sand moves so fast that we can't really catch up, and no one really knows yet how the interaction between AI and us should work.

Ankit Bose

So I'll map the earlier part onto this: very specific uses of AI for yourself, to make your life simpler; adopting AI skills; and building the processes to interact with AI for the long run, because AI is changing and things are changing. Thank you, Breno. Coming back to you, Professor Ganesh, same question, 30 seconds: what is the challenge you see if we build something that is not aligned?

Professor Ganesh Ramakrishnan

I think the biggest challenge of not making AI aligned is that we will become products, not even consumers. We want to be at the steering wheel. I remember, very fondly, my first machine translation paper; I called it "machine-assisted human translation". Obviously that would sound too regressive now, but the key is provenance: how can you leave provenance at every step in the stack? Whether it is data aggregation, which again aligns with the ecosystem (you need an ecosystem to leave provenance on the data part), metadata refinement, data curation, provenance at the level of training and tokenization, or provenance for observability, the other keyword: at the level of how the model performs.

Models should be glass boxes, because that gives you enough breathing space: where should you yield to new practices versus existing ones? If you don't have that view, if the recipes are not made available, if the education isn't there, and as a professor I always focus on the education part, then I think we will become products.

Ankit Bose

Thank you, thank you. Sunil, you and then Kalyan.

Sunil Gupta

No, I concur with those views: at the end of the day we should not do AI for the sake of doing AI. It is a means to achieve an end, and the end purpose is benefit for the masses. I remember seeing a YouTube video where the Prime Minister met all the startups, and Professor Ganesh was there, and the Prime Minister said to everybody: don't make AI to make toys; use AI to benefit the masses with the real problems they face in their real lives. That is where the name of this event comes from, the Impact Summit. And yesterday he also made the point that, unlike previous summits where we were too concerned with security and governance, which are things to be done, at the same time: keval bhay nahi rakhna hai AI ka; AI se aap apna bhagya bana sakte ho, apna bhavishya bana sakte ho (don't just hold fear of AI; with AI you can make your own destiny, your own future).

So the question is how, with AI, we create an impact and benefit the masses, while machines do not end up dictating our lives; as I said, we should not end up becoming the product ourselves. However much AI improves, it will possibly never reach a stage where it acquires human emotions, our sense of gut, our sense of culture, what we express through body language and not just words. So human in the loop, and humans remaining the masters of AI, is something we will have to safeguard all the time.

Ankit Bose

Interaction, not becoming the product, human-centric development. Kalyan?

Kalyan Kumar

I would break this into four key areas. The Professor mentioned consumer AI, so I will break it into consumer, enterprise, government, and critical national infrastructure and defence. All four are going to play, so ten seconds on each. Consumer AI: you are the product, unfortunately. You now have to use data control to decide how much you give to get; it is a give-to-get model. The day you click "I agree" on an Android phone or on Apple Intelligence, suddenly you are the product. You get something back, but that give-to-get balance is where the regulator, in my opinion, has a far bigger role to play than in enterprise regulation. Enterprise: God made the world in seven days because He had no installed base. Go and talk to enterprise CIOs on the ground; their reality is that they have a big architectural problem: their data landscape is broken. So they have to pivot from process-and-workflow-first to data-first, a big shift. They need to start with lineage and metadata; most of these companies don't have correct metadata. Use metadata-discovery techniques and a knowledge graph to understand the metadata, and then organize your data so that AI can benefit from it. The big play is govtech: government-to-citizen engagement, G2C, is massive, and that is where the sovereign AI play comes in; the work Sarvam is doing, and the whole BharatGen effort, is important, because that is where you can host citizen-service platforms. And the last is critical national infrastructure: air-gapped networks, private AI, and defence.

So we need a broken-up view of this whole thing rather than one brush to paint it all. And lastly, sovereignty is all about choice, about making choices: I can run on hyperscaler A or B, I can run on Yotta, I can run on Sify, or I can run on my own infrastructure. I need to have that choice. And second: AI exists for human good, so put people back at the center. We have suddenly pushed humans to the side and made everything about AI.

It is about people, with AI surrounding them. So that is my thought.

Ankit Bose

Great, thank you. We have had a lot of good nuggets from everyone, and we will continue this conversation afterwards. At NASSCOM, AI is a big initiative for us; we have been driving it for the last three, three and a half years. Ganesh knows that, Sunil knows that, and we have worked plenty with the services companies. To keep it going: this is not an end point. We have to think about sovereignty and about how India builds AGI capability, quantum-AGI capability. That is the journey we are on at NASSCOM, and we are currently writing a policy document for the government on a sovereign AI and AGI roadmap.

The QR code is there, and it will stay up here; I want all of you to have a look and work on it with us. That's that. Yeah, Ganesh?

Professor Ganesh Ramakrishnan

The potential is so immense; we have not even scratched the surface, not even touched the tip of the iceberg. Sovereignty is critical because the amount of inefficiency in that entire stack needs to be done away with. GPUs were never designed for building these models, right? They are a legacy. How can we use even the language work we are doing, the workloads themselves, to actually do better ASIC design? Can we use it to build better model-serving engines? There is so much to do. Everyone should get inquisitive about the entire stack; that is where sovereignty comes from.

Ankit Bose

Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborator. We will have a QR code and please respond to that. Give your inputs. And with that, thank you to my panelists. I loved it and I think hope you also loved it. Thank you again.

Kalyan Kumar

Just one thing I want to add: watch on the 21st, when the PM is inaugurating a new JV that HCL is announcing with Foxconn, called India Chips Limited. I would call it patient capital. It is a 16- and 32-nanometre fab we are creating, basically like an OSAT unit, that will come online after five years; you have to build the whole thing. But it is also about building that skill, which is very important, and we have to start now. We cannot wait five years down the line.

Speaker 1

Thank you so much to our panelists. I request the panelists to please stay back for a group photo right now. You can also access the report that Ankit has been talking about via the QR code displayed on the digital background, and leave feedback. I am also happy to announce an MOU being signed between Amrita Vishwa Vidyapeetham and NASSCOM right now. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“The session began with the formal launch of the Sovereign AI research report produced by Amrita Vishwa Vidya Peetham.”

The knowledge base records that Professor Suresh from Amrita Vishwa Vidya Peetham participated in the report launch ceremony, confirming the launch of the Sovereign AI report [S2].

Additional Context (high)

“Sunil Gupta identified a shortage of specialised GPU compute as the decisive bottleneck, stating that the shared‑commodity pool currently holds ~38 000 GPUs with an additional 20 000 announced, and that “millions of GPUs” will be required for nationwide inference.”

Other sources note India’s historic GPU scarcity (only ~8 000 GPUs previously) and recent plans to scale to 50-60 000 GPUs, as well as a programme to make 50 000 GPUs available at low cost, providing additional context on the compute gap and scaling efforts [S1] and [S46].

Confirmed (high)

“95 % of the country’s AI use‑cases can be served by a 20‑100 billion‑parameter model, making large frontier models unnecessary for most applications.”

Multiple knowledge-base entries emphasize focusing on smaller models (20-100 B parameters) to address roughly 95 % of national use-cases, confirming the claim [S19], [S89] and [S90].

External Sources (90)
S1
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S4
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — -Sunil Gupta: Co-founder, MD, and CEO of Yotta – operates data center campuses and built Sovereign Cloud in India, manag…
S5
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — -Sunil Gupta- Managing Director and Chief Executive Officer, Yota Data Services Following Cormann’s presentation, the s…
S6
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — – Ganesh Ramakrishnan- Kalyan Kumar- Sunil Gupta – Kalyan Kumar- Ankit Bose – Sunil Gupta- Ganesh Ramakrishnan- Kalyan…
S7
Announcement of New Delhi Frontier AI Commitments — -Ganesh: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S8
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — – Ganesh Ramakrishnan- Kalyan Kumar – Sunil Gupta- Ganesh Ramakrishnan
S10
Announcement of New Delhi Frontier AI Commitments — -Ganesh: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S11
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — And like I say in sales, time kills every deal. Last but not least, I think my third point is the champion problem. I th…
S12
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Hello. Good afternoon. My name is Brandon Mello. I work for Genspark.ai, a follow-up-based company. We have been arou…
S13
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S14
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S16
The challenges of introducing Generative AI into the marketplace — I have been hearing a lot about the shortage of powerful GPUs for AI lately. It seems like the demand is much bigger tha…
S17
Keynotes — Marianne Wilhelmsen: but as Norway prepares for the upcoming IGF 2025, I look forward to welcoming many of you in June a…
S18
Advancing digital identity in Africa while safeguarding sovereignty — A pivotal discussion on digital identity and sovereignty in developing countries unfolded at theInternet Governance Foru…
S19
Panel Discussion Data Sovereignty India AI Impact Summit — The discussion began by challenging conventional notions of sovereignty, with moderator Arghya Sengupta framing the cent…
S20
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Voice technology and multilingual capabilities were highlighted as crucial horizontal solutions for healthcare AI in Ind…
S21
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Drawing from his gaming background, the speaker describes a revolutionary shift toward “live operations” in content crea…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — Rather than viewing India’s complexity as a challenge, Raghavan presented it as the country’s greatest competitive advan…
S23
https://dig.watch/event/india-ai-impact-summit-2026/waves-of-infrastructure-open-systems-open-source-open-cloud — And it’s got thousands of steps. It takes about 120 days to make a chip. So $10 billion for 120 days producing a wafer, …
S24
Driving Indias AI Future Growth Innovation and Impact — “Investment also includes energy infrastructure, because without energy, there is really no compute infrastructure you c…
S25
Global AI Policy Framework: International Cooperation and Historical Perspectives — So the infrastructure is missing, right? Now, if you’re talking about policies related to compute, you’re talking about …
S26
Regulating Open Data_ Principles Challenges and Opportunities — Digital ecosystems simply do not function in silos. However, enabling data to move across borders should not mean that c…
S27
DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023 — In conclusion, AI has been instrumental in sectors like health and education, aiding in vaccine development and benefit …
S28
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — Jon Lloyd: I’m just going to pause for a second here, and we’re going to launch a Mentimeter poll. Those of you online, …
S29
Business Engagement Session: Sustainable Leadership in the Digital Age – Shaping the Future of Business — Reyansh identifies a common organizational challenge where leaders disagree about implementing emerging technologies lik…
S30
Research Publication No. 2014-6 March 17, 2014 — Based on the map of roles provided in the previous section, one can identify a number of potential role conflicts. For …
S31
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Additionally, the analysis notes that the need for skill development aligns with the Sustainable Development Goals (SDGs…
S32
Supply Chain Fortification: Safeguarding the Cyber Resilience of the Global Supply Chain — With the amount of material being brought into the decision-making process by emerging technologies, decision-makers nee…
S33
Open Forum #33 Building an International AI Cooperation Ecosystem — Practical implementation requires comprehensive ecosystems combining government guidance, industry-academia collaboratio…
S34
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — “No single institution, no matter how large or how well resourced, can navigate this epoch alone.”[64]. “It will require…
S35
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Unexpectedly, there was strong consensus across industry, government, and academic perspectives on the need for collabor…
S36
Towards inclusive digital innovation ecosystems – do’s and don’ts and what next? — Ms. Vahini Naidu: discourse on data for development? Thank you, Anita, and good morning, colleagues. So, essentially, wha…
S37
POLICY BRIEF — Many e-diplomacy tools are free (Facebook, Twitter, YouTube and Flickr accounts for example). Even software platf…
S38
POLICY BRIEF — Finally, the rise of open-source software (i.e. free software without copyright constraints) and the increasing co…
S39
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Kalyan calls for moving away from a services‑centric model toward building proprietary IP, hiring smarter engineers, and…
S40
A Digital Future for All (afternoon sessions) — There is a need to build AI capacity in developing countries to ensure they can participate in and benefit from AI advan…
S41
From India to the Global South_ Advancing Social Impact with AI — Thank you very much. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. I’d like to welcome everyone to t…
S42
Artificial intelligence (AI) – UN Security Council — The discussions on structuring capacity-building initiatives in AI to maximize their impact, especially in regions with …
S43
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — If compute, database and foundational models remain concentrated of a few, we risk creating a new form of inequality, an…
S44
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: Thank you for convening this and bringing this very, very important subject at FORC, like how do we bala…
S45
Indias Roadmap to an AGI-Enabled Future — -Compute Infrastructure and GPU Requirements: Analysis of India’s current and projected compute needs, with estimates su…
S46
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Actionable Solutions and Pathways Given the lack of GPUs and data centers in the Global South, new business models need…
S47
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S48
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — Government policy and procurement can drive multilingual adoption Government policy can drive multilingual internet ado…
S49
Announcement of New Delhi Frontier AI Commitments — “First, advancing understanding of real‑world AI usage through anonymized and aggregated insights to support evidence‑ba…
S50
MahaAI Building Safe Secure & Smart Governance — Unexpected focus on quantum computing as an immediate policy concern rather than a distant future issue, highlighting th…
S51
D-Wave quantum backs congressional push to expand US Quantum Program — D-Wave Quantum Inc., a leading player in the field of quantum computing,has voiced its endorsement for recent Congressio…
S52
Building fair markets in the algorithmic age (The Dialogue) — Combining legal and policy research with insights from physics, chemistry, and mathematics can provide better evidence. …
S53
Contents — We believe this vision can be realised and government strategy has a key role to play. We argue that the government’s fo…
S54
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S55
From KW to GW Scaling the Infrastructure of the Global AI Economy — High level of consensus across technical, business, and policy dimensions. The agreement spans both global technology pr…
S56
AI 2.0 The Future of Learning in India — High level of consensus with remarkable alignment across diverse stakeholders (government officials, academics, industry…
S57
Keynote-Jeet Adani — As we all know, under peak load, advanced processors generate extraordinary heat. Systems throttle when power falters an…
S58
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Le Fevre Cervini argues that Europe lacks a unified vision and operates with fragmented country-by-country policies. He …
S59
State of play of major global AI Governance processes — The Hiroshima AI process was introduced in 2022, marking the pursuit of an interoperable governance framework. Interoper…
S60
Keynote-Mukesh Dhirubhai Ambani — The third commitment centres on building India’s sovereign compute infrastructure through three interconnected initiativ…
S61
Skilling and Education in AI — So when I look at the work that we’ve been doing across board and across product areas, and speaking to some of the anno…
S62
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S63
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — And the big mindset shift that’s starting to occur is this notion that, you know, these aren’t just productivity tools. …
S64
Global AI Policy Framework: International Cooperation and Historical Perspectives — So the infrastructure is missing, right? Now, if you’re talking about policies related to compute, you’re talking about …
S65
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-shared-prosperity — And it’s that kind of computing power that is essential. It’s essential for training large AI models. It’s essential for…
S66
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Interoperability as Essential**: Universal agreement that interoperability is crucial for DPGs to function effective…
S67
DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023 — Interoperability is essential in both technical and legal systems. As technologies become increasingly global, it is cru…
S68
WS #271 Data Agency Scaling Next Gen Digital Economy Infrastructure — – **Interoperability as a Core Enabler**: The panelists discussed how interoperability between different protocols and p…
S69
Exploring Emerging PE³Ts for Data Governance with Trust | IGF 2023 Open Forum #161 — Certain barriers, such as low budgets, less technical focus in decision-making teams, and low priority given to smaller …
S70
WSIS Action Line C7: E-Agriculture — – **Youth as agents of change and innovation**: Young people were identified as crucial catalysts for digital agricultur…
S71
Business Engagement Session — Garza highlights the importance of closing the digital divide to ensure sustainable digital transformation. She argues t…
S72
Introduction — Content and services. To increase demand, there needs to be new, locally relevant content and services. Content and Ski…
S73
Open Forum #33 Building an International AI Cooperation Ecosystem — Practical implementation requires comprehensive ecosystems combining government guidance, industry-academia collaboratio…
S74
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — International cooperation and knowledge sharing are essential, requiring interoperable governance frameworks and multi-s…
S75
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — So working in tandem, working in synchronization is the need of the hour. This transformation cannot be driven by indust…
S76
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — “And I think this integration of government support for both the academic piece of this and the industry piece is really…
S77
Panel Discussion Data Sovereignty India AI Impact Summit — Successful sovereignty requires government-industry partnership with governments providing guardrails and policy stabili…
S78
Artificial Intelligence & Emerging Tech — Audience:Thank you, respected moderator, Ms. Jennifer Chang, for giving me such a golden opportunity to place my questio…
S79
Multi-stakeholder Discussion on issues about Generative AI — Amrita Choudhury:Good evening everyone. My name is Amrita Choudhury. I come from India, represent CCUI which is a civil …
S80
Bridging Connectivity Gaps and Harnessing e-Resilience | IGF 2023 Networking Session #104 — The moderator invited participants to continue discussions at their booth The moderator thanked everyone for joining th…
S81
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — Technology Security and Data Privacy Officer at Vodafone India Limited, with over 20 years of experience in cyber securi…
S82
COUR EUROPÉENNE DES DROITS DE L’HOMME EUROPEAN COURT OF HUMAN RIGHTS — “In their essentials”, stated the Vice-Chancellor, “these contentions seem to me to be sound.” He accepted that, by the …
S83
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — DG of IMD is here and IMD can elaborate on how robust AI based systems can be deployed at population scale in the Indian…
S84
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — He’s coming back. Thank you so much for your excellencies setting the discussion regarding the Global Network for Center…
S85
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Dr. Romesh Ranawana:Of course. I mean, essentially, the problem is, like it’s been mentioned so many times, are the foun…
S86
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — <strong>Naveen GV:</strong> out a long, lengthy form of information for that to be processed much later by another human…
S87
Leaders TalkX: Local Voices, Global Echoes: Preserving Human Legacy, Linguistic Identity and Local Content in a Digital World — NK Goyal, President of the CMAI Association of India, presented a series of strategies for digital empowerment, includin…
S88
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — An interesting fact is that most of the AI models in the world work in English. But your AI model works in Indian langua…
S89
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — ROI doesn’t come from creating a very large model. 95% of the work can happen with models which are 20 billion or 50 bil…
S90
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-data-sovereignty-india-ai-impact-summit — So I think that’s the goal. by having models which are 20 billion to let’s say 100 billion parameters. You don’t need to…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sunil Gupta
6 arguments · 200 words per minute · 2225 words · 665 seconds
Argument 1
GPU scarcity hampers AI scaling
EXPLANATION
Sunil explains that while India has strong demand and talent for AI, the lack of sufficient GPU compute resources is the main bottleneck preventing large‑scale model training and deployment. Without abundant specialized GPUs, AI cannot be brought to the masses.
EVIDENCE
He notes that after the rise of ChatGPT, India possessed talent, data and market size but was missing compute, requiring specialized GPU clusters rather than regular CPUs [54-60]. He further quantifies the gap by stating that current deployments use about 10,000 chips while millions would be needed for nationwide use cases such as UPI across sectors [75-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussions stress the acute shortage of powerful GPUs for AI workloads, citing supply-chain delays and limited allocations as a bottleneck for large-scale model training in India [S2][S16].
MAJOR DISCUSSION POINT
Compute scarcity
AGREED WITH
Kalyan Kumar
DISAGREED WITH
Brandon Mello
Argument 2
Shared, government‑empanelled compute pool as a commodity
EXPLANATION
Sunil describes how the Indian government has created a shared compute facility by empaneling multiple private providers, making GPU capacity available to startups and researchers at market‑driven prices. This pooled approach is intended to turn compute into a commodity that can be scaled up over time.
EVIDENCE
He outlines the empanelment process where providers declare the amount of GPUs they will supply, leading to a shared pool of about 38,000 GPUs, with an additional 20,000 announced by the Prime Minister [224-236]. He emphasizes that this model encourages competition, price reduction and broader access for the ecosystem.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report describes a shared compute facility created from multiple private providers and coordinated by the government, turning GPU capacity into a commodity that can be scaled [S2].
MAJOR DISCUSSION POINT
Shared compute pool
AGREED WITH
Kalyan Kumar
DISAGREED WITH
Kalyan Kumar
Argument 3
AI must serve the masses, remain a tool, not become the product
EXPLANATION
Sunil stresses that AI should be used to solve real problems for the population rather than being developed as a novelty or a product in itself. Human oversight must be retained to ensure AI benefits society and does not dictate outcomes.
EVIDENCE
He recalls a Prime Minister’s admonition to avoid building “toys” and instead focus on AI that benefits the masses, and he calls for keeping humans in the loop and preventing AI from becoming the product [380-386].
MAJOR DISCUSSION POINT
Human‑centric AI
AGREED WITH
Ankit Bose, Ganesh Ramakrishnan, Brandon Mello
Argument 4
Data localisation is a cornerstone of digital sovereignty
EXPLANATION
Sunil points out that while India generates and consumes a fifth of the world’s data, only a tiny fraction is hosted domestically, creating a strategic vulnerability. He stresses that building physical data‑center capacity and keeping data within India are vital to prevent foreign control over the nation’s digital destiny.
EVIDENCE
He states that India creates and consumes 20% of global data but only 3% of that data is hosted in India, highlighting the need for domestic infrastructure and compute resources [254-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The data-sovereignty panel highlights that only a few percent of India’s data is hosted domestically, underscoring localisation as essential for digital independence [S19].
MAJOR DISCUSSION POINT
Data localisation for sovereignty
AGREED WITH
Ganesh Ramakrishnan, Kalyan Kumar
Argument 5
Government should fund the first inference cycle to accelerate AI adoption
EXPLANATION
Sunil urges the government to extend its support beyond model training and subsidise the initial phase of inferencing, allowing applications to reach users, generate revenue, and become self‑sustaining before private investors take over. This early funding is presented as a catalyst for widespread deployment.
EVIDENCE
He requests that the government not limit its role to training models but also support the first phase of inferencing, describing this as a necessary step for adoption and revenue generation [224-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sunil’s call for government-funded inferencing aligns with the panel’s mention of public support for the first inference cycle to jump-start AI deployment [S2].
MAJOR DISCUSSION POINT
Public funding for inference phase
AGREED WITH
Ankit Bose
Argument 6
Voice‑based AI will be the dominant interaction mode in India, leveraging linguistic diversity
EXPLANATION
Sunil contends that because Indian users prefer speaking in native languages or mixed Hindi‑English, AI solutions should prioritize speech and voice interfaces, even for feature‑phone users, to achieve mass adoption.
EVIDENCE
He notes that “India’s AI will be voice-based… people are talking in their own native language or a mixture of Hindi, English… even from feature phones you will be able to talk to an AI model at the back end” [244-248].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of Indian AI use cases point to voice-first interfaces as critical for reaching users in native languages and on feature phones [S20][S22].
MAJOR DISCUSSION POINT
Voice‑centric AI
AGREED WITH
Ganesh Ramakrishnan, Brandon Mello
Kalyan Kumar
4 arguments · 175 words per minute · 1697 words · 579 seconds
Argument 1
Building a unified data stack with vector DBs and edge inference
EXPLANATION
Kalyan argues that beyond compute, a robust data infrastructure—including centralized platforms, vector databases and edge‑ready inference engines—is essential for sovereign AI. Such a stack enables scalable, low‑latency AI services across the country.
EVIDENCE
He details HCL’s acquisition of Actian (original Ingres patent) and a vector engine from CWI, the development of a localized vector AI engine for edge devices, and the broader portfolio of data-centric assets being built for the stack [96-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion emphasizes the need for a centralized data platform, vector databases and edge-ready inference engines as core components of a sovereign AI stack [S2].
MAJOR DISCUSSION POINT
Data stack development
AGREED WITH
Sunil Gupta, Ganesh Ramakrishnan
Argument 2
Shift from services to building proprietary IP; need smarter engineers and quantum research
EXPLANATION
Kalyan explains that HCL is transitioning from a service‑oriented model to creating its own intellectual property, requiring a talent shift toward engineers with systems thinking and research expertise. He also highlights the future need for quantum‑level compute and fundamental science research.
EVIDENCE
He recounts the 2015-16 strategic pivot to build products for HCL itself, the emphasis on hiring smarter engineers rather than just coders, and the call for quantum research to change the computational paradigm, noting the scarcity of physics talent [266-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kalyan’s strategic pivot toward proprietary IP, hiring of systems-thinking engineers and calls for quantum-level compute are documented in the panel overview [S2].
MAJOR DISCUSSION POINT
IP creation & talent shift
AGREED WITH
Ankit Bose
DISAGREED WITH
Ankit Bose
Argument 3
Cross‑border partnerships and asset acquisitions expand capabilities
EXPLANATION
Kalyan points out that strategic acquisitions and international collaborations broaden HCL’s technology base, giving it access to advanced database patents and vector engines, while new joint ventures like India Chips Limited aim to develop domestic semiconductor fabs.
EVIDENCE
He mentions acquiring Actian’s Ingres patent, a vector engine from CWI, and later references the upcoming JV with Foxconn for a 16/32 nm fab called India Chips Limited, slated to be operational in five years [98-103][441-447].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Acquisitions such as Actian’s Ingres patent and collaborations with international research groups are cited as ways to broaden HCL’s technology base [S2].
MAJOR DISCUSSION POINT
Strategic acquisitions & JV
AGREED WITH
Ganesh Ramakrishnan
Argument 4
Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute
EXPLANATION
Kalyan highlights the creation of a joint venture with Foxconn, India Chips Limited, to build a 16 nm/32 nm fab, describing it as patient capital and stressing that the effort must begin now rather than wait five years to secure an indigenous GPU supply chain.
EVIDENCE
He announces the JV with Foxconn for a 16 nm and 32 nm fab, calls it patient capital, and emphasizes the need to start immediately instead of waiting five years [441-448][449-452].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Details on the lengthy chip-fabrication process and the need for early, patient capital investment in a domestic fab are provided [S23][S24].
MAJOR DISCUSSION POINT
Domestic semiconductor capability
AGREED WITH
Sunil Gupta
DISAGREED WITH
Sunil Gupta
Ganesh Ramakrishnan
4 arguments · 157 words per minute · 1464 words · 558 seconds
Argument 1
Interoperability across layers enables participation and multilingual data products
EXPLANATION
Ganesh emphasizes that ensuring interoperability at every stack layer encourages broad participation, allows alternative solutions, and supports multilingual data products tailored to India’s linguistic diversity. Interoperability also facilitates data contracts and cataloguing.
EVIDENCE
He describes how interoperability offers alternatives, supports participation from academia and industry, and cites the need for data products, catalogs and contracts to enable multilingual AI, referencing the PM’s statement on data ownership and the concept of data products [151-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel stresses layered interoperability, data contracts and catalogs as enablers for broad ecosystem participation and multilingual AI products [S2][S19].
MAJOR DISCUSSION POINT
Layered interoperability
AGREED WITH
Kalyan Kumar
Argument 2
Co‑design, academic‑industry consortia, and open collaboration accelerate sovereign models
EXPLANATION
Ganesh outlines the value of co‑design and consortium‑based collaboration, where multiple academic institutions and industry partners jointly develop foundation models that reflect Indian contexts. Open collaboration reduces silos and speeds up innovation.
EVIDENCE
He cites the nine-institution consortium at IIM Indore, the co-design of a healthcare book, the development of a 17-million-parameter model with multilingual experts, and the recent MOU with a US heritage foundation, illustrating how collaborative ecosystems produce sovereign models [193-214].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A nine-institution consortium at IIM Indore developing a multilingual foundation model exemplifies co-design and open collaboration [S2].
MAJOR DISCUSSION POINT
Consortium‑driven co‑design
AGREED WITH
Kalyan Kumar
Argument 3
Provenance, transparency, and education are essential for alignment
EXPLANATION
Ganesh argues that AI systems must retain provenance at every stage—from data aggregation to model performance—so that users can trace origins and trust outcomes. Education on these practices is crucial to prevent AI from becoming a black‑box product.
EVIDENCE
He references his early work on machine-assisted translation, stresses the need for provenance in data, metadata, tokenisation and observability, and highlights the role of education in making models “glass boxes” [371-376].
MAJOR DISCUSSION POINT
Provenance & transparency
AGREED WITH
Sunil Gupta, Ankit Bose, Brandon Mello
Argument 4
Existing GPU hardware is not optimized for AI workloads; India must develop specialized designs and better model‑serving engines to achieve true sovereignty
EXPLANATION
Ganesh argues that the current generation of GPUs was never built for the demands of large AI models and that a redesign of the compute stack—including signal‑integrated‑gate (SIG) designs and more efficient serving architectures—is essential for a sovereign AI ecosystem. He calls for inquisitiveness across the entire stack to identify and implement these innovations.
EVIDENCE
He notes that “GPUs were never designed for building these models” and asks whether a SIG design could be used to create better model-serving engines, emphasizing the need for new hardware and stack innovations [426-434].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists note that current GPUs were never built for large AI models and call for specialized hardware and serving architectures, echoing broader concerns about GPU suitability [S2][S16].
MAJOR DISCUSSION POINT
Hardware and stack innovation for sovereign AI
Brandon Mello
4 arguments · 147 words per minute · 1171 words · 475 seconds
Argument 1
Data trust, compliance friction and need for data contracts
EXPLANATION
Brandon notes that organizations face heavy red‑tape and compliance hurdles when handling data, which hampers AI adoption. Establishing clear data contracts and trust frameworks is necessary to move projects forward.
EVIDENCE
He points to “data and trust and compliance friction” as a major barrier, describing how departmental silos and lack of unified data policies create friction for AI initiatives [129-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The data-sovereignty discussion highlights heavy compliance friction and the necessity of clear data contracts to enable AI projects [S19].
MAJOR DISCUSSION POINT
Data compliance friction
DISAGREED WITH
Sunil Gupta
Argument 2
ROI invisibility, lack of executive sponsorship, and departmental silos block AI projects
EXPLANATION
Brandon identifies three systemic obstacles: difficulty in quantifying ROI, absence of senior executive champions, and fragmented departmental responsibilities, all of which cause AI pilots to stall or fail.
EVIDENCE
He provides statistics that a third of CFOs cannot quantify ROI and only one in ten have tools to measure it, explains how lack of executive sponsorship leads to stalled projects, and illustrates how siloed procurement and IT functions delay implementations [119-143].
MAJOR DISCUSSION POINT
Execution barriers
DISAGREED WITH
Sunil Gupta
Argument 3
Uncertainty in human‑AI interaction requires careful governance
EXPLANATION
Brandon reflects on the nascent understanding of how humans should interact with increasingly capable AI systems, arguing that rapid advances outpace governance frameworks and demand cautious policy development.
EVIDENCE
He shares a personal anecdote about a breakfast conversation where the question of human-AI interaction was raised, noting that the speed of AI evolution makes it hard to establish stable governance or interaction norms [358-366].
MAJOR DISCUSSION POINT
Human‑AI interaction uncertainty
AGREED WITH
Sunil Gupta, Ankit Bose, Ganesh Ramakrishnan
Argument 4
Adoption depends on solving real everyday problems, consolidating fragmented tools, and supporting India’s multilingual landscape
EXPLANATION
Brandon stresses that AI solutions must address concrete use‑cases that improve daily life, reduce the overhead of juggling many separate applications, and be able to operate across the country’s many languages and dialects. These factors together drive meaningful uptake of AI technologies.
EVIDENCE
He cites the need for a real use case that changes people’s lives, the importance of consolidating dozens of tools into a single workflow to avoid context loss, and the challenge of handling India’s linguistic diversity for LLMs [337-351].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voice-first solutions and multilingual capabilities are identified as key drivers for practical AI adoption across India’s diverse linguistic environment [S20][S22].
MAJOR DISCUSSION POINT
Practical adoption drivers
AGREED WITH
Sunil Gupta, Ganesh Ramakrishnan
Ankit Bose
4 arguments · 173 words per minute · 1450 words · 501 seconds
Argument 1
Massive developer up‑skilling program and curriculum overhaul
EXPLANATION
Ankit outlines NASSCOM’s initiative to rapidly up‑skill a large cohort of developers and revamp academic curricula to include AI specializations, aiming to create a workforce ready for sovereign AI development.
EVIDENCE
He states that NASSCOM targets 150,000 developers across India within six months, and mentions collaboration with MIT and the education sector to rewrite B.Tech, M.Tech, MCA and BCA curricula with added specializations [312-319].
MAJOR DISCUSSION POINT
Developer up‑skilling
AGREED WITH
Kalyan Kumar
DISAGREED WITH
Kalyan Kumar
Argument 2
Real‑world use cases, tool consolidation, and language support drive adoption
EXPLANATION
Ankit argues that AI adoption will accelerate when solutions address concrete everyday problems, integrate disparate tools into a unified workflow, and support India’s multilingual environment.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of addressing concrete use-cases, unifying tools and supporting multiple languages is reinforced by sector analyses on voice technology and multilingual AI [S20][S22].
MAJOR DISCUSSION POINT
Adoption drivers
Argument 3
A QR‑code feedback mechanism is being used to crowdsource input on sovereign AI policy and foster collaborative participation
EXPLANATION
Ankit announces that a QR code displayed on the digital background gives participants access to the sovereign AI research report and a channel to submit feedback, thereby inviting the broader ecosystem to shape the policy roadmap.
EVIDENCE
He asks the audience to scan the QR code, view the report, and provide inputs, emphasizing collaborative engagement through this digital tool [435-439].
MAJOR DISCUSSION POINT
Collaborative policy feedback via QR code
Argument 4
AI alignment and safety are essential long‑term challenges to prevent hazardous outcomes
EXPLANATION
Ankit warns that without proper alignment, AI could become a dangerous tool, urging the community to mitigate risks and ensure AI remains beneficial to humanity over the coming decades.
EVIDENCE
He states that “the challenge as a humanity we have to mitigate… we don’t align… AI could be hazardous” and asks about the challenge of non-aligned AI in a 30-second closing question [356-357][365-366].
MAJOR DISCUSSION POINT
AI safety & alignment
AGREED WITH
Sunil Gupta, Ganesh Ramakrishnan, Brandon Mello
Speaker 1
1 argument · 55 words per minute · 330 words · 359 seconds
Argument 1
Launch of the Sovereign AI research report and MOU with Amrita Vishwa Vidyapeetham signal policy commitment
EXPLANATION
Speaker 1 announces the publication of a sovereign AI research report and the signing of an MOU with Amrita University, indicating institutional and governmental support for India’s sovereign AI agenda.
EVIDENCE
The opening remarks invite Amrita representatives for the report launch [1], and later the speaker thanks the panel and announces the MOU with Amrita Vishwa Vidyapeetham [454-455].
MAJOR DISCUSSION POINT
Policy & institutional backing
Professor Ganesh Ramakrishnan
2 arguments · 166 words per minute · 292 words · 105 seconds
Argument 1
Data ownership rights and a data‑product framework are foundational for sovereign AI
EXPLANATION
Ganesh stresses that data should remain under the control of its creator and that establishing data products, catalogs and contracts is essential to enable trustworthy, interoperable AI systems that respect sovereignty.
EVIDENCE
He references the Prime Minister’s statement “jiska data uska adhikar” (“whoever the data belongs to holds the rights to it”) and explains that a data catalog, data product and data contract are required for interoperability and to protect data owners’ rights [171-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The data-sovereignty panel emphasizes that data creators retain ownership and that data products, catalogs and contracts are essential for trustworthy AI systems [S19].
MAJOR DISCUSSION POINT
Data ownership & productization
AGREED WITH
Sunil Gupta, Kalyan Kumar
Argument 2
Multilingual AI models constitute a strategic moat for India
EXPLANATION
Ganesh argues that supporting India’s linguistic diversity through AI models that cover many languages is a key competitive advantage and essential for broad adoption across the country.
EVIDENCE
He describes the foundation model that covers 22 Indian languages, using mixture-of-experts where experts for Hindi, Marathi, Telugu and other languages are shared, highlighting the focus on multilingual capability [190-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses point to India’s linguistic diversity as a unique competitive advantage, with multilingual models seen as a strategic moat for the nation’s AI ecosystem [S20][S22].
MAJOR DISCUSSION POINT
Multilingual capability as a moat
AGREED WITH
Sunil Gupta, Brandon Mello
Agreements
Agreement Points
Compute scarcity and the need for a shared, domestically sourced GPU capacity
Speakers: Sunil Gupta, Kalyan Kumar
GPU scarcity hampers AI scaling · Shared, government‑empanelled compute pool as a commodity · Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute
Both speakers highlight that insufficient GPU compute is the main bottleneck for scaling AI in India. Sunil describes the lack of specialised GPUs and the gap between the 10,000 chips currently deployed and the millions needed for nationwide use cases, and outlines the government-empanelled shared compute pool as a way to turn compute into a commodity [54-60][75-78][224-236]. Kalyan points to the joint venture with Foxconn to build a 16/32 nm fab, calling it patient capital and urging immediate action rather than waiting five years [441-447][449-452].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s AI roadmap estimates a need for at least 128,000 GPUs for domestic workloads, highlighting the urgency of expanding domestic GPU capacity and shared infrastructure models [S45]. The lack of GPUs and data centres in the Global South drives proposals for shared-infrastructure business models [S46]. Policy discussions warn that concentration of compute resources creates an AI divide, reinforcing the need for sovereign compute capacity [S43]. Recent government commitments include building gigawatt-scale, AI-ready data centres to address this scarcity [S60].
AI must remain a human‑centric tool that serves the masses and stays aligned
Speakers: Sunil Gupta, Ankit Bose, Ganesh Ramakrishnan, Brandon Mello
AI must serve the masses, remain a tool, not become the product · AI alignment and safety are essential long‑term challenges to prevent hazardous outcomes · Provenance, transparency, and education are essential for alignment · Uncertainty in human‑AI interaction requires careful governance
All four speakers stress that AI should be deployed to solve real problems for people, with strong human oversight and alignment. Sunil warns against building “toys” and insists AI stay a tool for the masses [380-386]. Ankit flags the long-term safety challenge of misaligned AI [356-357][365-366]. Ganesh calls for provenance, transparency and education to keep models as “glass boxes” [371-376]. Brandon notes the rapid pace of AI outstripping governance and the need for careful human-AI interaction frameworks [358-366].
POLICY CONTEXT (KNOWLEDGE BASE)
The UN AI Security Council emphasizes inclusive, human-centric AI capacity-building to ensure AI serves societal needs and aligns with ethical standards [S42].
Data sovereignty through localisation, ownership rights and a robust data stack
Speakers: Sunil Gupta, Ganesh Ramakrishnan, Kalyan Kumar
Data localisation is a cornerstone of digital sovereignty · Data ownership rights and a data‑product framework are foundational for sovereign AI · Building a unified data stack with vector DBs and edge inference
The speakers converge on the need for strong data governance to underpin sovereign AI. Sunil points out that India creates and consumes 20 % of global data but only 3 % is hosted domestically, highlighting a strategic vulnerability [254-257]. Ganesh stresses that data creators must retain rights and that data catalogs, products and contracts are essential for interoperability [171-176]. Kalyan describes HCL’s work on a unified data platform, vector databases and edge-ready inference engines as critical infrastructure [96-108].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on data for development stress the importance of localisation, ownership rights and robust data stacks to protect national interests in the digital economy [S36].
Interoperability and collaborative ecosystems accelerate sovereign AI development
Speakers: Ganesh Ramakrishnan, Kalyan Kumar
Interoperability across layers enables participation and multilingual data products · Co‑design, academic‑industry consortia, and open collaboration accelerate sovereign models · Cross‑border partnerships and asset acquisitions expand capabilities
Both speakers argue that open, interoperable frameworks and collaborative consortia are key to building sovereign AI. Ganesh highlights how interoperability encourages participation, alternatives and multilingual data products, and describes a nine-institution consortium that co-designs models for Indian contexts [151-156][193-214]. Kalyan adds that strategic acquisitions (e.g., Actian, vector engine) and partnerships broaden HCL’s technology base, supporting interoperable solutions [98-103].
POLICY CONTEXT (KNOWLEDGE BASE)
European AI governance proposals call for interoperable frameworks that enable collaborative ecosystems while respecting national sovereignty [S58]. The Hiroshima AI process similarly highlights interoperability as a cornerstone for coordinated AI policy [S59].
Multilingual and voice‑first AI as a strategic moat and adoption driver
Speakers: Sunil Gupta, Ganesh Ramakrishnan, Brandon Mello
Voice‑based AI will be the dominant interaction mode in India, leveraging linguistic diversity · Multilingual AI models constitute a strategic moat for India · Adoption depends on solving real everyday problems, consolidating fragmented tools, and supporting India’s multilingual landscape
All three emphasize that catering to India’s linguistic diversity is essential for AI adoption and competitive advantage. Sunil notes that India’s AI will be voice-based, serving users on feature phones in native languages [244-248]. Ganesh describes a foundation model covering 22 Indian languages using mixture-of-experts, positioning multilingual capability as a moat [190-212]. Brandon stresses that addressing multilingual needs and consolidating tools are critical for real-world adoption [347-351].
POLICY CONTEXT (KNOWLEDGE BASE)
Government policy frameworks are urging multilingual internet and AI adoption, treating language accessibility as a core right and encouraging voice-first solutions [S48]. India’s Frontier AI commitments specifically target strengthening multilingual and contextual AI evaluations [S49].
Massive up‑skilling of developers and shift toward building proprietary IP
Speakers: Ankit Bose, Kalyan Kumar
Massive developer up‑skilling program and curriculum overhaul · Shift from services to building proprietary IP; need smarter engineers and quantum research
Both speakers underline the importance of developing a skilled AI workforce and moving from service-based models to proprietary product development. Ankit outlines NASCOM’s target to train 150,000 developers and revamp curricula with new specialisations within six months [312-319]. Kalyan discusses HCL’s strategic pivot to build its own IP, hiring smarter engineers with systems thinking, and the future need for quantum research to change the compute paradigm [266-304].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s sovereign AI strategy advocates moving from services-centric models to building proprietary IP and up-skilling engineers, coupled with investment in foundational research such as quantum computing [S39]. National AI capacity-building initiatives call for large-scale developer training, learning support and infrastructure to create a skilled AI workforce [S40][S41][S61]. The shift also reflects a broader debate on open-source versus proprietary software models in AI development [S38].
Government should fund the first inference cycle to accelerate AI adoption
Speakers: Sunil Gupta, Ankit Bose
Government should fund the first inference cycle to accelerate AI adoption
Sunil calls for public funding not only for model training but also for the initial inference phase, arguing this will jump-start adoption and allow revenue-generating use cases to become self-sustaining. He describes the existing shared compute facility and the need for government support for the first inference cycle [224-236]. Ankit’s questioning of how compute can become a shared commodity reflects alignment with this view [215-222].
POLICY CONTEXT (KNOWLEDGE BASE)
The recent keynote outlined a three-pronged plan where the government will underwrite the initial inference layer of sovereign AI models, leveraging newly built AI-ready data centres to accelerate adoption [S60]. Complementary policy notes stress the need for secure, resilient infrastructure to support early-stage AI services [S61].
Similar Viewpoints
Both identify compute scarcity as a critical barrier and propose domestic solutions—shared compute pools and indigenous chip manufacturing—to achieve sovereign AI capacity [54-60][75-78][224-236][441-447][449-452].
Speakers: Sunil Gupta, Kalyan Kumar
GPU scarcity hampers AI scaling · Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute
Both stress that a robust, interoperable data infrastructure—including vector databases and edge inference—is essential for scalable sovereign AI ecosystems [151-156][96-108].
Speakers: Ganesh Ramakrishnan, Kalyan Kumar
Interoperability across layers enables participation and multilingual data products · Building a unified data stack with vector DBs and edge inference
Both argue that AI adoption hinges on delivering concrete, everyday solutions that benefit the broader population, rather than building AI for its own sake [380-386][337-351].
Speakers: Brandon Mello, Sunil Gupta
Adoption depends on solving real everyday problems, consolidating fragmented tools, and supporting India’s multilingual landscape · AI must serve the masses, remain a tool, not become the product
Both highlight that addressing real use‑cases, reducing tool fragmentation, and supporting multilingual contexts are key to scaling AI uptake [312-319][337-351].
Speakers: Ankit Bose, Brandon Mello
Real‑world use cases, tool consolidation, and language support drive adoption · Adoption depends on solving real everyday problems, consolidating fragmented tools, and supporting India’s multilingual landscape
Unexpected Consensus
Agreement between an academic researcher and an infrastructure provider on voice‑first, multilingual AI as India’s strategic moat
Speakers: Sunil Gupta, Ganesh Ramakrishnan
Voice‑based AI will be the dominant interaction mode in India, leveraging linguistic diversity · Multilingual AI models constitute a strategic moat for India
Despite coming from different sectors (Sunil from a sovereign cloud provider, Ganesh from an academic‑industry consortium), they both view voice‑centric, multilingual AI as a unique competitive advantage for India, a convergence not obvious given their distinct focus areas [244-248][190-212].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy statements highlight the strategic importance of multilingual, voice-first AI for national inclusion, with regulatory guidance encouraging such collaborations [S48][S49].
Overall Assessment

The panel shows strong consensus on four pillars: (1) addressing compute scarcity through shared facilities and domestic chip manufacturing; (2) ensuring AI remains human‑centric, aligned and serves the masses; (3) establishing robust data sovereignty via localisation, ownership rights and interoperable data stacks; (4) fostering collaborative, interoperable ecosystems and multilingual/voice‑first AI to drive adoption. There is also broad agreement on the need for massive up‑skilling and government support for the inference phase.

High consensus across industry, academia and startups, indicating a unified direction for India’s sovereign AI strategy. This alignment suggests that policy measures, public‑private partnerships and capacity‑building initiatives are likely to receive coordinated support, accelerating progress toward a sovereign, inclusive AI ecosystem.

Differences
Different Viewpoints
How to resolve India’s compute scarcity for sovereign AI
Speakers: Sunil Gupta, Kalyan Kumar
GPU scarcity hampers AI scaling · Shared, government‑empanelled compute pool as a commodity · Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute · Shift from services to building proprietary IP; need smarter engineers and quantum research
Sunil argues that the main bottleneck is a lack of GPUs and proposes expanding a shared, government-empanelled pool of GPU capacity as a commodity to be subsidised and scaled ([54-60][75-78][224-236]). Kalyan counters that relying on external GPU allocations is insufficient; instead, India must develop its own semiconductor fab and invest in quantum-level research and smarter engineering talent to create indigenous compute resources ([441-448][449-452][286-290][298-304]).
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses quantify the GPU shortfall and propose shared-infrastructure models and government-led data centre construction as solutions to sovereign compute scarcity [S45][S46][S60].
What constitutes the primary barrier to AI adoption in India
Speakers: Sunil Gupta, Brandon Mello
GPU scarcity hampers AI scaling · ROI invisibility, lack of executive sponsorship, and departmental silos block AI projects · Data trust, compliance friction and need for data contracts
Sunil maintains that insufficient compute resources are the chief obstacle to scaling AI for the masses ([54-60][75-78]). Brandon, however, emphasizes organisational and financial hurdles: CFOs cannot quantify ROI, executives do not champion projects, and data-compliance red tape stalls implementation ([119-124][129-136]). Thus they disagree on whether the bottleneck is technical (compute) or organisational/financial.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders identify the concentration of compute resources and energy constraints as key barriers, warning that limited access deepens the AI divide and hampers adoption [S43][S44][S57].
Approach to building AI talent and skills in the country
Speakers: Ankit Bose, Kalyan Kumar
Massive developer up‑skilling program and curriculum overhaul · Shift from services to building proprietary IP; need smarter engineers and quantum research
Ankit outlines a rapid up-skilling drive targeting 150,000 developers and a curriculum rewrite to create AI-ready graduates ([312-319]). Kalyan argues that merely training large numbers of coders is insufficient; the focus should be on hiring fewer but smarter engineers with systems-thinking and research capabilities, plus long-term quantum research ([286-290][298-304]). Both seek a skilled workforce but differ on scale and depth of training.
POLICY CONTEXT (KNOWLEDGE BASE)
National AI capacity-building programs emphasize up-skilling developers through training infrastructure, industry partnerships and education initiatives to create a robust talent pipeline [S40][S41][S61].
Unexpected Differences
Hardware solution: more GPUs vs new AI‑specific hardware designs
Speakers: Sunil Gupta, Ganesh Ramakrishnan
GPU scarcity hampers AI scaling · Existing GPU hardware is not optimized for AI workloads; need specialized designs and better model‑serving engines
While both acknowledge a hardware bottleneck, Sunil proposes scaling the existing GPU pool through government-empanelled procurement ([54-60][75-78][224-236]), whereas Ganesh argues that the current generation of GPUs was never built for AI and that India must develop new hardware architectures such as SIG designs and improved serving engines ([426-434]). The divergence between scaling existing GPUs and redesigning hardware was not anticipated given their shared focus on sovereignty.
POLICY CONTEXT (KNOWLEDGE BASE)
India’s compute roadmap quantifies the GPU deficit, suggesting immediate scaling of GPU inventories as the primary hardware response, while longer-term research into AI-specific chips remains under discussion [S45].
Emphasis on quantum and physics research versus immediate compute expansion
Speakers: Kalyan Kumar, Sunil Gupta
Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute · Shift from services to building proprietary IP; need smarter engineers and quantum research · GPU scarcity hampers AI scaling
Kalyan highlights long-term quantum research and fundamental physics as essential for future compute paradigms ([298-304]), a focus that is surprisingly absent from Sunil’s more immediate, short-term strategy of expanding GPU capacity and government-funded inference ([54-60][75-78][224-236]). The contrast between a futuristic research agenda and a near-term deployment plan was not evident earlier in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs note a growing focus on quantum computing as an immediate strategic priority, potentially diverting attention from near-term compute expansion needed for AI deployment [S39][S50].
Overall Assessment

The panel shows strong consensus on the need for a sovereign AI ecosystem that benefits the Indian population and the Global South. However, significant disagreements arise around the primary bottleneck (compute vs organisational/financial barriers), the optimal path to resolve compute scarcity (shared GPU pools vs indigenous chip fabrication and quantum research), and the preferred model for talent development (mass up‑skilling vs elite, research‑oriented engineering). These divergences reflect differing strategic horizons—short‑term deployment versus long‑term technological independence.

Moderate to high. While the overarching goal is shared, the contrasting views on technical, policy and capacity‑building approaches could lead to fragmented initiatives unless a coordinated roadmap reconciles these perspectives. The implications are that without alignment, efforts may duplicate, compete for resources, or stall, potentially slowing India’s progress toward AI sovereignty.

Partial Agreements
All four speakers agree that India must build a sovereign AI ecosystem that serves the nation and the Global South. However, Sunil pushes a market‑driven shared GPU pool funded by the government ([224-236]), Kalyan stresses building indigenous chip fabs and quantum research ([441-452]), Ganesh calls for layered interoperability and consortium‑driven co‑design ([151-176]), while Ankit focuses on massive developer up‑skilling and curriculum changes ([312-319]). The shared goal is clear, but the pathways diverge.
Speakers: Sunil Gupta, Kalyan Kumar, Ganesh Ramakrishnan, Ankit Bose
Shared, government‑empanelled compute pool as a commodity · Urgent investment in domestic semiconductor manufacturing is critical for sovereign compute · Interoperability across layers enables participation and multilingual data products · Massive developer up‑skilling program and curriculum overhaul
Both agree that AI must reach the masses, but Sunil sees compute as the decisive factor, whereas Brandon sees organisational and financial structures as the key to unlocking adoption. Their end‑goal aligns, but the means differ.
Speakers: Sunil Gupta, Brandon Mello
GPU scarcity hampers AI scaling · ROI invisibility, lack of executive sponsorship, and departmental silos block AI projects
Takeaways
Key takeaways
Compute scarcity, especially GPUs, is the primary bottleneck for scaling sovereign AI in India; a shared, government‑empanelled compute pool is being built to address this.
A robust data infrastructure—including unified data stacks, vector databases, edge inference, and interoperable layers—is essential for AI adoption and for enabling multilingual, trustworthy data products.
Talent development must shift from service‑oriented coding to building proprietary IP, with large‑scale up‑skilling programs, curriculum redesign, and focus on smarter engineers and quantum research.
Adoption barriers are dominated by ROI invisibility, lack of executive sponsorship, departmental silos, and language and data‑trust challenges; real‑world, high‑impact use cases and tool consolidation are needed to overcome them.
Collaboration across academia, industry, and government (consortia, co‑design, cross‑border partnerships) accelerates model development and reduces duplication.
AI must remain a human‑centric tool that serves the masses; provenance, transparency, and alignment education are critical to avoid AI becoming a product itself.
Policy momentum is evident through the launch of the Sovereign AI research report, an MOU with Amrita Vishwa Vidyapeetham, and government commitments to expand shared GPU capacity.
Resolutions and action items
Release of the Sovereign AI research report by Amrita Vishwa Vidyapeetham.
Signing of an MOU between NASSCOM and Amrita Vishwa Vidyapeetham.
Government to continue empanelling GPU providers and add 20,000 GPUs to the shared compute pool; target to scale to 50,000–60,000 GPUs in the near term.
NASSCOM to launch a developer up‑skilling initiative targeting 150,000 developers over the next six months and to revise technical curricula (B.Tech, M.Tech, MCA, etc.) with specialization tracks.
Participants invited to scan the QR code on the digital background to provide feedback on the report and contribute to the forthcoming sovereign AI policy document.
HCL announced a joint venture with Foxconn (India Chips Limited) to develop 16/32 nm fabs, signaling a long‑term hardware roadmap.
Call for government funding to support the first inferencing cycle of sovereign models to enable early adoption and revenue generation.
Unresolved issues
How to scale GPU availability from the current tens of thousands to the millions required for nationwide inferencing across sectors.
Establishing standardized data contracts, monetization models, and governance frameworks for multilingual data products.
Developing reliable ROI measurement tools and processes for AI projects within enterprises.
Defining concrete timelines and responsibilities for the transition from shared compute for training to shared compute for inferencing.
Addressing quantum‑compute research needs and integrating such capabilities into the sovereign AI stack.
Ensuring consistent AI alignment and safety across diverse applications without a unified regulatory mechanism.
Suggested compromises
Adopt a shared‑commodity model for compute where multiple providers contribute GPUs and compete on price, while the government sets baseline access terms.
Balance scaling‑up (larger centralized clusters) with scaling‑out (edge and distributed inference) to meet both latency and coverage requirements.
Provide a choice of infrastructure providers (hyperscalers, IOTA, CIFI, etc.) to give enterprises flexibility while maintaining sovereignty.
Pursue a co‑design approach that combines academic research, industry implementation, and government policy to align incentives and share risk.
Thought Provoking Comments
The biggest problem for taking AI to the masses in India is how to make compute available in an abundant way – we need millions of GPUs, not just a few thousand, and the government is creating a shared compute facility to address this.
He identified compute scarcity as the fundamental bottleneck for sovereign AI, shifting focus from software and data to the physical infrastructure needed at scale.
Set the agenda for the rest of the discussion on hardware infrastructure; prompted other panelists (e.g., Kalyan and Ganesh) to talk about data platforms and interoperability as complementary layers, and led to a deeper dive into government‑led shared GPU pools.
Speaker: Sunil Gupta
Beyond infrastructure and models, the data layer is critical – we need centralized yet edge‑distributed vector databases, data contracts, and catalogs to ensure high‑quality, usable data for AI at scale.
He introduced the often‑overlooked importance of a robust data stack, linking it to edge inference and the need for data‑centric products.
Expanded the conversation from raw compute to data architecture, prompting Ganesh to discuss interoperability and prompting the group to consider how data sovereignty ties into the overall AI ecosystem.
Speaker: Kalyan Kumar
95 % of AI pilots never reach production because of ROI invisibility, data‑trust/compliance friction, and the champion problem – lack of executive sponsorship and clear metrics stall adoption.
He highlighted systemic organizational barriers rather than technical ones, reframing the challenge as a business‑process issue.
Shifted the tone from technology‑centric to adoption‑centric, leading Ankit and others to ask about concrete steps for scaling AI in enterprises and prompting discussion on executive buy‑in and measurement.
Speaker: Brandon Mello
Interoperability at every layer encourages participation, offers alternatives, and enables scaling out; it also requires data products, catalogs, and contracts to make data a tradable asset.
He introduced a unifying principle—interoperability—that connects compute, data, and model layers, and linked it to economic models of data ownership.
Created a turning point where the panel moved from isolated challenges to a holistic framework; other speakers referenced interoperability when discussing shared compute and data platforms.
Speaker: Ganesh Ramakrishnan
Collaboration and co‑design are essential – we built a multilingual speech‑to‑text model using mixture‑of‑experts where experts for Hindi and Marathi were shared, and Telugu leveraged collaboration between Hindi and Tamil experts.
He provided a concrete example of how interdisciplinary collaboration yields technical breakthroughs, emphasizing empathy between linguists and engineers.
Deepened the technical discussion, illustrating how collaborative design can overcome language diversity challenges; reinforced the earlier call for interoperability and data sharing.
Speaker: Ganesh Ramakrishnan
The government’s empaneling of GPU providers has created a shared compute facility of 38,000 GPUs, with plans to add 20,000 more; we need to extend this model to inferencing and subsidise the first cycle of usage to drive adoption.
He detailed a concrete policy mechanism that turns the abstract compute problem into an actionable program, and highlighted the need for ongoing support beyond training.
Validated Sunil’s earlier compute argument with policy evidence, steering the conversation toward implementation timelines and the role of public funding in scaling AI services.
Speaker: Sunil Gupta
India must pivot from a service‑oriented model to building its own IP; we need smarter engineers, focus on quantum/computational research, and invest in long‑term talent rather than short‑term hiring.
He challenged the status quo of Indian tech firms, urging a strategic shift toward indigenous product development and fundamental science.
Prompted a broader reflection on skill development and long‑term competitiveness, influencing later remarks about developer upskilling and curriculum redesign.
Speaker: Kalyan Kumar
If AI is not aligned, we become products rather than consumers; provenance at every stack level—data aggregation, metadata, tokenisation, observability—is essential to keep humans in control.
He raised the ethical dimension of AI sovereignty, linking technical provenance to societal control and preventing AI from dictating human behavior.
Shifted the final segment toward ethical considerations, reinforcing Sunil’s call for AI to serve the masses and prompting the panel to conclude on responsible AI governance.
Speaker: Ganesh Ramakrishnan
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved the conversation from a single bottleneck (compute) toward a comprehensive sovereign AI ecosystem. Sunil Gupta’s emphasis on GPU scarcity anchored the case for hardware investment, Kalyan Kumar broadened the view with a data-centric stack, and Brandon Mello reframed the challenge as one of organizational adoption. Ganesh Ramakrishnan’s calls for interoperability, collaborative model design, and provenance tied these technical and business strands together, and his ethical warning capped the dialogue. Together, these comments redirected the panel from isolated problems to an integrated strategy spanning infrastructure, data, talent, policy, and responsible use, ultimately outlining a roadmap for India’s sovereign AI ambitions.

Follow-up Questions
How can India scale GPU compute to the millions needed for nationwide AI deployment?
Sunil highlighted the current shortfall of GPUs and the need for a massive increase to support training and inferencing for billions of users, indicating a critical infrastructure gap.
Speaker: Sunil Gupta
What strategies are needed to build a robust, distributed data stack (including vector databases and edge inference) for sovereign AI?
Kalyan emphasized the importance of data platforms, vector DBs, and edge capabilities as a foundational layer beyond compute, suggesting further development and research.
Speaker: Kalyan Kumar
How can organizations reliably quantify AI ROI to overcome ‘ROI invisibility’ and secure budget approval?
Brandon noted that a third of CFOs cannot measure AI ROI, leading to stalled pilots; developing metrics and tools is a research and practice need.
Speaker: Brandon Mello
What mechanisms can address data-trust concerns, compliance friction, and departmental silos that impede AI project progress?
He identified bureaucratic hurdles across IT, procurement, and other departments as a barrier, calling for streamlined processes and governance models.
Speaker: Brandon Mello
How can companies ensure strong executive sponsorship (the ‘champion problem’) for AI initiatives?
Lack of senior leadership support leads to project abandonment; identifying effective sponsorship models is an open issue.
Speaker: Brandon Mello
What standards and frameworks are required to achieve interoperability across AI layers (data, models, applications) in India?
Ganesh advocated for interoperability to enable participation, alternatives, and scaling, implying a need for common protocols and contracts.
Speaker: Ganesh Ramakrishnan
How should data products, catalogs, and contracts be designed to enforce data ownership and sovereignty?
He discussed the concept of data catalogs and contracts as essential for interoperable, sovereign data ecosystems.
Speaker: Ganesh Ramakrishnan
What research is needed in quantum computing and fundamental physics to address future compute demands for AI?
Kalyan suggested that breakthroughs in quantum and physics could reshape compute paradigms, indicating a long‑term research direction.
Speaker: Kalyan Kumar
How can AI systems be kept aligned with human values and maintain provenance throughout the stack?
He warned that without alignment and provenance, AI becomes a product rather than a tool, highlighting a need for governance and transparency research.
Speaker: Ganesh Ramakrishnan
What approaches can improve AI adoption by consolidating tools, supporting multilingual contexts, and ensuring data security?
Brandon identified tool fragmentation, language diversity, and data privacy as adoption barriers that require targeted solutions.
Speaker: Brandon Mello
How should government policies support compute as a shared commodity and fund the first cycle of AI inferencing?
Sunil described the shared compute facility model and called for government backing of early inferencing phases to drive adoption.
Speaker: Sunil Gupta
What curriculum changes and developer upskilling programs are needed to prepare 150,000 AI‑ready developers in six months?
Ankit mentioned upcoming initiatives to rewrite technical curricula and massive developer training, indicating a need for educational design research.
Speaker: Ankit Bose
How can Indian software firms transition from service‑oriented models to building proprietary AI IP?
Kalyan highlighted the strategic pivot required for sovereignty, suggesting research into product development, IP creation, and market entry.
Speaker: Kalyan Kumar
What frameworks are needed to manage AI deployment across consumer, enterprise, government, and critical national infrastructure sectors?
He broke AI impact into four domains, each with distinct requirements, calling for sector‑specific governance and technical standards.
Speaker: Kalyan Kumar
How can AI alignment challenges be addressed to prevent AI systems from becoming mere products without human control?
Sunil stressed the importance of human‑in‑the‑loop and avoiding AI as a product, pointing to ethical and control research needs.
Speaker: Sunil Gupta

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.