India’s AI Future: Sovereign Infrastructure and Innovation at Scale
20 Feb 2026 16:00h - 17:00h
Summary
The panel opened with the launch of the “Sovereign AI” research report by Amrita Vishwa Vidyapeetham and introduced a diverse group of industry and academic leaders to discuss how India can build sovereign AI capabilities [1-4]. Moderator Ankit Bose then asked each panelist to name the single most important factor for India to achieve AI leadership, both for the country and for the Global South [44-45][92-95].
Sunil Gupta argued that India’s principal bottleneck is the lack of abundant GPU compute, noting that only a few thousand GPUs are currently available while millions will be needed for large-scale inference and training [54-58][70-78]. He described how the government’s “shared compute facility” has pooled roughly 38,000 GPUs from providers such as Yotta and is adding another 20,000, creating a low-cost resource for startups and research [224-236][237]. Gupta urged that this shared infrastructure be extended beyond model training to support the first wave of inference for sectoral use cases, with government subsidies for the initial cycle [240-247][250-254].
Kalyan Kumar highlighted that sovereign AI also requires a robust data layer, including localized vector databases, data catalogs and contracts, to enable distributed edge inference and high-quality data products [96-108]. He explained that HCL’s recent acquisition of Actian and CWI assets gives them control over core database patents and a vector AI engine slated for release, which will underpin the data-centric approach [98-103]. Kumar stressed that without such data infrastructure, even abundant compute cannot deliver scalable AI solutions [105-108].
Brandon Mello identified three systemic barriers to AI adoption in Indian enterprises: difficulty quantifying ROI, fragmented departmental ownership of AI projects, and the lack of executive sponsorship [119-124][129-143]. Ganesh Ramakrishnan added that ensuring interoperability across the AI stack, from models to data contracts, will foster participation, enable alternative solutions, and support a collaborative ecosystem of academia and industry [151-162]. He also emphasized co-design and a nine-institution academic consortium that is building multilingual foundation models tailored to Indian contexts [163-170][188-194].
The panelists converged on the view that building sovereign AI requires coordinated investment in compute, data infrastructure, skilled talent, and open collaboration between government, startups and research institutions [215-223][454-455]. They announced ongoing actions such as NASSCOM’s policy draft, a new MOU with Amrita, and a QR-code-driven feedback mechanism to shape India’s AI roadmap [414-420][435-438].
Keypoints
Major discussion points
– Compute infrastructure is the bottleneck for sovereign AI.
Sunil Gupta emphasized that the lack of abundant GPU compute has been the core obstacle and described how Yotta’s “Sovereign Cloud” has built a pool of ≈10,000 GPUs, with the government’s shared facility now aggregating ≈38,000 GPUs and planning to add 20,000 more [46-78][224-236].
– A robust data stack and interoperability are essential layers.
Kalyan Kumar highlighted the need for centralized data platforms, vector-DBs, edge inference and data contracts to ensure high-quality, shareable data [96-108]. Ganesh Ramakrishnan added that interoperability across models, datasets and institutions enables participation, scaling and the creation of data products [151-168].
– Adoption hurdles stem from ROI uncertainty, organisational friction and lack of executive sponsorship.
Brandon Mello identified “ROI invisibility,” siloed departmental processes and the “champion problem” as reasons why 95% of AI pilots never reach production [115-142]. He later stressed the importance of solving real-world use cases, consolidating tools and handling India’s multilingual data to drive adoption [335-351].
– Skill development and a shift from services to product/IP creation are required.
Kalyan Kumar argued that India must pivot from a service-only model to building its own IP, investing in smarter engineers, research talent and new semiconductor capabilities [266-310]. Ankit Bose noted NASSCOM’s initiative to up-skill 150,000 developers and revamp curricula to produce specialised AI talent [312-319].
– Collaboration between government, academia and industry is the backbone of the sovereign AI ecosystem.
Ganesh stressed the need for interoperable standards, consortium-based research (nine academic institutions) and co-design of models and data contracts [151-170][193-205]. Sunil described the government’s “shared compute facility” that empanels multiple providers, creating a public-private partnership for scaling AI resources [224-236].
Overall purpose / goal
The session was convened to launch the Sovereign AI research report and to surface concrete actions that India, and the broader Global South, must take to build a self-reliant AI ecosystem. Panelists were asked to pinpoint the single most critical step for achieving sovereign capability, covering infrastructure, data, talent, adoption and collaborative governance.
Overall tone
The discussion began with a formal, celebratory tone (report launch, introductions) and quickly shifted to a technical, problem-focused dialogue about compute shortages and data challenges. Mid-session the tone became solution-oriented and collaborative, with panelists proposing concrete initiatives, partnerships and skill-building programs. It concluded on an optimistic, call-to-action note, urging participants to join the consortium, contribute to the QR-coded roadmap and continue the partnership.
Speakers
Speakers (from the provided list)
– Ankit Bose – Head of AI, NASSCOM (National Association of Software and Service Companies) – Moderator of the panel and expert on AI ecosystem development and developer enablement. [S4]
– Sunil Gupta – Co-founder, Managing Director & CEO, Yotta (Yotta Data Services) – Builder of Sovereign Cloud infrastructure and large-scale GPU compute facilities in India. [S4]
– Kalyan Kumar – Executive Vice President, Head of Software Product Business, HCL Software – Leader in enterprise software products, data platforms, and sovereign-by-design solutions. [S6]
– Ganesh Ramakrishnan – Professor, Indian Institute of Technology Bombay – Researcher in AI foundations, interoperability, and large-scale language models. [S9]
– Brandon Mello – Founding GTM Executive, GenSpark.ai – Entrepreneur driving agentic AI solutions for knowledge workers and enterprise adoption. [S12]
– Speaker 1 – Event moderator/host – Introduced the session, announced report launch and MOU, and facilitated the panel discussion.
Additional speakers (not in the provided names list)
– Dr. Manisha V. Ramesh – Pro Vice-Chancellor, Amrita Vishwa Vidyapeetham – Representative for the launch of the Sovereign AI research report.
– Dr. Shiva Ramakrishnan – Head, AI Safety Research Lab, Amrita Vishwa Vidyapeetham – Co-speaker for the report launch.
– Professor Suresh – Academic representative (specific affiliation not stated) – Invited to the stage for the report launch.
– Bharat Jain – Listed as a panelist (affiliation not specified in transcript); likely a mis-hearing of “BharatGen,” the nine-institution consortium effort referenced by Prof Ganesh Ramakrishnan, rather than a separate speaker.
– Bhaskar Gorti – Executive Vice President, Tata Communications – Panelist discussing telecom and communications aspects of sovereign AI.
– Brenno – (likely a mis-pronunciation of Brandon Mello) – Referenced in the transcript but covered under Brandon Mello above.
– Other unnamed panelists – The transcript mentions “Mr. …” and “Ms. …” without full names; these are not listed due to insufficient information.
Opening & report launch – The session began with the formal launch of the Sovereign AI research report produced by Amrita Vishwa Vidyapeetham. The moderator thanked the audience, invited Pro-Vice-Chancellor Dr Manisha V. Ramesh and AI-Safety Lab head Dr Shiva Ramakrishnan to the stage, and then introduced the panel (Prof Ganesh Ramakrishnan, IIT Bombay, of the nine-institution BharatGen consortium that includes IIM Indore; Sunil Gupta, co-founder, MD & CEO of Yotta; Bhaskar Gorti, Tata Communications; Kalyan Kumar, CPO, HCL Software; Brandon Mello, GenSpark) [1-5].
Key “single-most-critical-factor” answers
* Sunil Gupta – Compute scarcity – Gupta identified the shortage of specialised GPU compute as the decisive bottleneck. He noted that when large-scale generative models emerged, India had strong software, services and talent, but “what India was not having at that time was compute” [54-58]. He added that the Bhashini language platform was recently migrated from a hyperscale cloud to Yotta’s Sovereign Cloud [54-58]. Gupta quantified the gap: the shared compute pool currently holds ~38,000 GPUs, with an additional 20,000 announced, yet “millions of GPUs” will be required for nationwide inference across sectors [70-78][224-236][237]. He argued that 95% of the country’s use-cases can be served by a 20-100 billion-parameter model, underscoring the urgency of scaling [70-78]. Only 3% of India-generated data is hosted in-country while India creates/consumes 20% of global data [54-58]; therefore he called for government-funded subsidies for the first inference cycle to jump-start sectoral adoption [240-247][250-254][260-267].
* Kalyan Kumar – Interoperable data stack – Kumar stressed that compute alone is insufficient without a robust, interoperable data layer. HCL’s unified platform combines centrally managed vector-DBs, edge-ready AI engines, and patents acquired from Actian and CWI Netherlands [96-108][98-103]. The platform emphasizes “data products, contracts and catalogs” to ensure quality, accessibility and provenance as inference moves to the edge [105-108][171-176].
* Ganesh Ramakrishnan – Interoperability & data ownership – Ganesh highlighted the need for layer-wise interoperability to encourage participation, offer alternatives and balance fidelity-latency trade-offs [151-156]. He cited the nine-institution consortium, which includes IIM Indore, that co-designs multilingual foundation models for 22 Indian languages using mixture-of-experts architectures [163-170][188-212]. To protect creators, he invoked the principle “jiska data uska adhikar” (whose data, their rights) and referenced the consortium’s book Samanway (meaning “bringing all languages together”) [166-170]. He also mentioned his earlier work Informatics and AI for Healthcare [112-115] and advocated “glass-box” models that expose provenance and enable trustworthy AI [151-156].
* Brandon Mello – Adoption barriers – Mello shifted the focus to organisational frictions that keep AI pilots in sandbox mode. He identified “ROI invisibility” – the inability of CFOs to quantify returns – as a key blocker, noting that only one in ten executives has tools to measure AI ROI [119-124]. He added “data-trust and compliance friction” from siloed departmental ownership and the “champion problem” where lack of executive sponsorship stalls projects [129-143]. Successful adoption, he argued, requires solving real-world use cases, consolidating fragmented tooling, and supporting India’s multilingual landscape [335-351].
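For readers unfamiliar with the catalog, data product and data contract chain that Kumar and Ramakrishnan return to repeatedly, a minimal sketch may help. This is purely illustrative: the class and field names below are hypothetical and do not correspond to any HCL or BharatGen API; they only show how a contract can expose just the fields an owner has consented to share.

```python
from dataclasses import dataclass

# Illustrative sketch only: hypothetical names, not an actual HCL/consortium API.

@dataclass
class CatalogEntry:
    """A dataset registered in a catalog, with ownership and provenance."""
    dataset_id: str
    owner: str          # "jiska data uska adhikar": the creator retains rights
    schema: dict        # field name -> type
    provenance: str

@dataclass
class DataContract:
    """Consent-scoped access to a data product built from a catalog entry."""
    entry: CatalogEntry
    consumer: str
    allowed_fields: list
    purpose: str

    def grant(self) -> dict:
        # Expose only the fields the owner consented to share.
        return {f: self.entry.schema[f]
                for f in self.allowed_fields if f in self.entry.schema}

entry = CatalogEntry(
    dataset_id="health-records-001",
    owner="clinic-42",
    schema={"age": "int", "diagnosis": "str", "aadhaar": "str"},
    provenance="collected 2025, consent on file",
)
contract = DataContract(entry, consumer="research-lab",
                        allowed_fields=["age", "diagnosis"],
                        purpose="model training")
print(contract.grant())  # the sensitive "aadhaar" field is never exposed
```

The point of the sketch is the ordering the panel insists on: a catalog entry must exist before a data product can be defined, and a contract then mediates every consumer’s access, which is what makes the layers interoperable.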
Deep dive on compute infrastructure – Building on Gupta’s points, the panel described the government empanelment process that lets multiple providers contribute GPUs at market-determined price points, creating a low-cost commodity for startups and research [224-236]. The current pool of ~38,000 GPUs (plus the announced 20,000) is a first step; the panel urged public funding not only for model training but also for the inference phase, arguing that subsidised early usage will generate revenue-producing use cases and later attract private investment [260-267][239-254].
Talent & IP strategy (Kalyan Kumar) – Kumar argued that India must pivot from a service-oriented model to building proprietary IP. He recalled HCL’s 2015-16 decision to “build products for ourselves” and the subsequent acquisition of talent and assets [266-283]. In his own words, “you need fewer people, smarter people” [286-290]. He called for investment in fundamental physics and quantum research to reshape future compute paradigms [286-304][298-304], and highlighted the India Chips Limited-Foxconn joint venture as a path to domestic semiconductor fab capacity [441-452].
NASSCOM up-skilling & curriculum reform (Ankit Bose) – Bose outlined a complementary programme targeting 150,000 developers over the next six months, together with a curriculum overhaul (B.Tech, M.Tech, MCA, BCA) in partnership with MIT and industry bodies to create specialised AI tracks [312-319][326-329].
Sector-specific perspectives (Kalyan Kumar) – Kumar outlined four stakeholder lenses:
1. Consumer AI – data-control mechanisms, regulator-led data-rights frameworks.
2. Enterprise AI – metadata-first approaches, data-product marketplaces.
3. Government services – sovereign platforms for citizen services and public-sector AI.
4. Critical national infrastructure – air-gap, defence-grade security, and the need for choice of infrastructure and human-centric AI [96-108][266-283].
Closing – The panel invited participants to scan the QR code displayed on the digital backdrop to provide feedback on the report and contribute to the forthcoming Sovereign AI policy document [435-438]. The session concluded with the signing of an MOU between NASSCOM and Amrita Vishwa Vidyapeetham, a group photo, and a reaffirmation of the commitment to advance India’s AI capabilities for both national and Global South impact [414-420][454-455].
Transcript
Thank you. Thank you. Hello and good afternoon, everyone. Thank you for joining us for this session on sovereign AI for India. Before we begin the panel discussion, we are happy to announce that there will be a launch of the Sovereign AI research report by Amrita Vishwa Vidyapeetham. May I invite the following representatives to kindly join us on stage for the release of the report. From Amrita, we would like to invite Pro Vice-Chancellor Dr. Manisha V. Ramesh and, if available, head of the AI Safety Research Lab Dr. Shiva Ramakrishnan, and any other representatives from Amrita Vishwa Vidyapeetham that you would like to invite on stage, sir. Alright, Professor Suresh, if we could please have you on stage. I would like to invite Mr.
Ankit Bose, Head of NASSCOM AI, on stage as well. We will... Thank you so much. Yeah, yeah, absolutely. You can take a seat, sir, if you want. Thank you. Thank you. Thank you, everyone. We now move into the panel discussion. To guide this conversation, we are joined by Mr. Ankit Bose, head of NASSCOM AI. Joining him today are our distinguished panelists: Professor Ganesh Ramakrishnan from IIT Bombay and BharatGen, Mr. Sunil Gupta, co-founder, MD, and CEO of Yotta, Mr. Bhaskar Gorti, EVP, Tata Communications, Mr. Kalyan Kumar, CPO, HCL Software, and Mr. Brandon Mello, founding GTM executive, GenSpark. Ankit, over to you. Professor Ganesh will be shortly joining us in two minutes. Thank you.
So hi everyone, I think we had a good launch and we have a very strong panel. So Ganesh was on the way and he is still stuck in traffic; he is walking in. So meanwhile we start the discussion. I think, you know, happy to have a very strong panel. So why don’t we do this, we start with the introduction, right? I think Kalyan, we can start with your quick introduction. So Sunil and then Brandon.
Yeah, hi, Kalyan Kumar, call me KK. I run the software product business for HCL, HCL Software. We are the largest India-headquartered enterprise B2B software company with about 10,000 customers and about 1.5 billion dollars of revenue. And very intricately involved in building software products which are sovereign by design.
Hello, good afternoon. Good afternoon. Good afternoon. My name is Sunil Gupta. I am co-founder and CEO of Yotta. So we run data center campuses. We have built Sovereign Cloud in India, which is running a whole lot of mission-critical Government of India applications. Recently, we migrated Bhashini from a hyperscale cloud to our Sovereign Cloud. Our claim to fame in the last two years is that we have got thousands of NVIDIA GPU chips in India. And all the models which you are hearing getting launched in this summit, the Sarvam model, IIT Bombay’s BharatGen model or the Soket model, they all have been trained on our GPU clusters, and now they are being made available for public use.
Thank you.
Hello. Good afternoon. My name is Brandon Mello. I work for Genspark.ai, a Palo Alto-based company. We have been around for about 10 months. We are the fastest-growing AI company right now in the world. We just broke $200 million in ARR. Our solution has been incredibly well-received and adopted in the market. India is our third largest market, and our solution is to drive adoption from the bottom up by bringing agentic AI to the knowledge worker. Thanks for letting me be here.
Great, great, great. And hi, folks. I’m Ankit Bose. I head AI for NASSCOM. So, whatever NASSCOM does in AI, I support that, I lead that, right? And we will be joined by Ganesh, who is from BharatGen. He’s leading the, you know, sovereign AI model building effort in the country, right? So, I think meanwhile, till he joins, let’s start. I think, Sunil, let me start with you, right? The first question I think I would want to ask after five days of immense brainstorming around, you know, AI for the country, AI for the world, right? You know, what is the top thing you say which, you know, India has to do, right, to build its sovereign capability, not only for the country, plus for the Global South?
Yeah. Ankit, if I take everybody just two years or maybe two and a half years down the line, when ChatGPT got on the world scene, basically AI capability came into consumer hands. A big debate happened in India, obviously in government circles, industry circles, telecom circles, technology circles, everywhere. While India has got everything which is needed to succeed in AI: we have been software and services leaders for the last three decades, we have a startup ecosystem, on skill-set indices of mathematics, science, engineering we are always the best, and as a market we are literally close to 1 billion people carrying smartphones, creating and consuming content. AI ultimately resulted, in most of the cases, you know, in some apps which will be giving some productivity to us.
So both on the demand side and the supply side, including data sets (India will have the best data sets available), everything India has. But what India was not having at that time was compute, because AI does not run on regular data centers or regular CPU compute; it requires specialized GPU compute. So I would say that the biggest problem, and of course you have to take care of the entire stack (models, data sets, applications, everything), but the core problem to solve for taking AI to the masses was: how do you make compute available in an abundant way, so that we don’t think of that? That should become just hygiene which is always available.
And that’s the problem we tried to solve. You know, way back at that time Jensen was in India. I happened to get to meet him and he says, we as NVIDIA are too committed to India. We can extend your priority allocation. We can give you engineering support, everything. But somebody has to take a step forward of not only putting in your data centers and power and everything, but you also need to put in chips, and we will give you everything. And from there to now, today we are running almost 10,000 chips. You know, as I said, the majority of the models which you are hearing, sovereign models getting launched in India, you know, they have been trained on our GPUs.
But the real thing I would say is, this is just the start. Many of these models are great; you must have heard of the Sarvam model beating Gemini and ChatGPT on many of the benchmarks. And they are making them absolutely for India use cases like OCR, you know, the handwritten notes and all that thing, how do you convert them and all that stuff. So these are real India purpose-built use cases and models. When they start scaling, when they start getting adopted by the masses; we have seen how one UPI changed our lives. Imagine we have UPI in 50 different sectors in the country: a 50-UPI movement will come into India. At that time, the number of GPUs required will be millions. Today we are happy as a country, we have X thousand GPUs.
But if you, as a single company like SpaceX or like Meta, can have 1 million GPUs, India as a country requires multiple million GPUs. So while we are working on all the upper layers of the stack, and Indians are very good at that (models, data sets, applications), we need to solve this issue. We are taking care of infrastructure problems; we are taking care of railways and roadways and airports. We also need to create this digital infrastructure, take care of that, make it available abundantly to every startup, every, you know, I would say, academic community. We make it available at a very low price. The Government’s IndiaAI Mission is doing a yeoman’s role. On one side, they have asked people like us, incentivized us, to invest into the GPUs.
But they are taking GPUs from us, putting in their own money, putting in their own subsidy, and then giving it to the Sarvams and IITs and Sokets of the world. And they say, now you don’t have to bother about money; just go and make India’s flagship model. And the result is there to see: in two years, India has come a long way, and we have a long way to go. The compute problem has to be solved.
Great. Thank you. Thank you, Sunil. Same question to you, KK. You know, what is the one thing you feel can add the edge, right, to the whole effort?
When you look at sovereign AI, I think the Minister of Electronics and IT, Vaishnaw ji, was mentioning the five-layer stack, right? And that’s where, for what Sunil mentioned, in an easier way I use the word infrastructure, which can combine energy, power, cooling, the whole stack. So that’s providing that layer, and then there is the whole model piece. I think as you train and when you start to deploy at scale, a couple of things become very interesting. You need to start to also build a data stack: data platforms, vector DBs, edge vectors. I personally think you can do as much centralization as you want; the way the data consumption model is going, it is going to get highly distributed, going to go down into the edge, correct? So you need a very different kind of inferencing and those capabilities. So you need a data layer. Something which we are doing is very interesting: outside of Oracle and IBM, the only other company which has all the patents for databases is HCL, because we acquired Actian.
So Actian owns the original patents of Ingres. And every derivative today, whether it is Postgres or any one of them, is basically an Ingres query processor derivative, including SQL Server and others. Like that, we also acquired an asset from CWI in the Netherlands. So we have a VectorDB, the original vector engine. So we’ve been building a lot of that asset portfolio, HDB, and now, in April, we’re going to release a localized vector AI engine, which again can run on the edge, because as AI PCs become more and more common, the edge becomes more and more important. So building that, and building the data disciplines, I think that’s a very important layer. A lot of times what happens is we worry about infrastructure, and then we think about the model, and then the app.
The data platform is going to become very important, because as we’re building the data platform, the enterprise will only scale if you get your data-centric approach: data products, data contracts, data catalogs and those kinds of things. Because finally the AI use case is going to be built on how good the quality of your data is. Yeah.
Great point. I think compute, data, a data stack for the country, very important. Let me come to Brandon. Again, the same question, right? If India has to build sovereign AI for the country and the Global South, what’s the top one thing you will say which will help the whole cause?
Yeah, so it’s interesting. MIT last year ran a big report and they said 95% of AI pilots actually never made it to real production, right? So in my point of view, this is never really a tech problem. It’s really a production problem, right? So when I look at our solution, right, we are able to deploy to thousands of companies in only eight weeks, right? So when I look at that, it really comes down to three reasons why this is happening in the industry, right? And the first one is what I call ROI invisibility, right? So when you look at companies right now, it’s really easy to get a budget for a pilot, right?
But when it comes to reality, can they get a budget to get the project done, right? So the data that I have to share with you guys, which is astonishing, is that a third of CFOs nowadays really cannot quantify ROI inside their organizations, right? And only one out of ten actually have tools that can measure ROI, right? So what ended up happening is, whenever you talk to those organizations, those companies, and you ask how they are actually going to measure productivity gains, they don’t have the answer, right? What’s the baseline? They don’t have the answer, right? So whenever you bring the project to the CFO for approval, it ends up never getting approved, and it ends up in that cycle where it gets stuck as a pilot, right?
So number two, I think, is data and trust and compliance friction, right? I think there’s huge red tape in terms of what happens inside organizations, right? I think it’s very departmentalized, where each part of the organization is trying to solve for its own department, right? So IT is trying to solve for IT, and procurement is trying to solve for procurement. Because no one’s really trying to solve it as an organization, the project ends up stalling. So something that could essentially take a few months to resolve ends up taking six months to a year.
And like I say in sales, time kills every deal. Last but not least, I think my third point is the champion problem. I think there’s a severe issue within organizations nowadays: there’s really no executive sponsorship. And whenever you don’t have executive sponsorship, especially for AI opportunities, deals never get approved. And people, especially at the bottom tier, don’t understand what’s going on. And when there’s no clear alignment within middle-tier management, deals never get approved.
Great. Let me summarize, probably, the three points: that, you know, you need closely collaborating teams, right, with a single point of view, with executive sponsorship. I think that will solve the adoption piece at last, right? Let me come to you, Professor Ganesh. Ganesh, I think what we are discussing is, we have discussed a lot on AI for the last five days, for India, for the globe, you know, and then we had three points of view. I asked them, give me one top thing. You heard probably from Brandon and KK and then from, you know, Sunil. What is your top one take which India should do so that we can lead the sovereign race for the country and the globe?
I would suggest interoperability at every layer. I think it was also alluded to by earlier panelists. Interoperability encourages participation, and in the words of the PSA, the vision in our Bharat is meaningful participation, right? Interoperability also helps you present alternatives, because there is no one-size-fits-all, and you need to also ensure that in the trade-off between fidelity and latency, or between sensitivity and specificity, you are able to find the right sweet spot which is suitable for you; you can pick something that is appropriate. Just on a lighter note: I was driving from the PSA office and there was such a traffic jam, which most of you experienced, so I exercised my sovereignty and I started walking. You find alternatives when you think sovereign. Three kilometers; that’s why I was late. So there are alternatives, and also provisions for human participation are much better. There could be places where AI could be substitutional, but many other places where you may want it to be just supplementary or complementary.
So alternatives is another thing that interoperability provides for. And I think the very key is scale-out. I mean, if just by scaling up we could cater to everyone, great; I would say that at least matches one checkbox, which is people being catered to. But we are not even there. Scaling up is not going to cater; the capabilities are not there. But even if it were, hypothetically, I think participation would also ensure that people are part of the process, that it’s informed. I mean, with BharatGen, I take pride in one of our consortium members, IIM Indore. We are a consortium of nine academic institutions. And at the Institute of Management, what are they doing? They do a fabulous job in going to many of the second-tier cities, going to people who have data, and engaging in conversations, education.
That data is an asset, and you could actually transform that asset into IP generation, and not just source data. So the dialogue, right, and informed decision making is where participation is encouraged when you have interoperability. I just want to add to what he said. He made a very interesting point: how do you monetize data, correct? And this is something which needs a very different approach, because today what happens is you are sourcing data. And I think the PM yesterday made a very amazing statement, correct? He’s saying, jiska data uska adhikar (whose data, their rights), correct? Very interesting. But if you look at what he’s saying, the creator of the data, the producer of the data, the consent provider for the use, all have a role to play, and that’s why I’ve been using this word called data product or a data catalog.
So you need a catalog first. You need to build a data product and then set up a data contract, which is fundamental, fundamental for interoperability. I just want to add: because if that gets solved, I can choose my own personal data and say, in my data catalog, you can have five things to access. I think India has proven that in an amazing way with identity and payments. So I think we can actually set up an environment where you can really build this. And the data benefactor is also the same person. So great point, Professor. I think it probably means definitely removing or optimizing the various layers and taking it to the last person in the rank. And it will help scale to the 1.4 billion that we need.
I think thank you for that. Let me ask you a second question. I think this is a very, very direct question. As a country, we are building our foundation models. You are one of the people building foundation models for the country. And at large, we have built sub-500-billion-parameter models, and globally they are going to 5 trillion or more. The comparison is so huge, right? What do you think India’s moat can be when we are really, you know, in such a situation where we are at a disadvantage, though we have to aggressively, you know, handle that? Yeah, so the other important takeaway, which probably, you know, addresses some part of what you’re asking, is cooperation, right?
Collaboration. Collaboration, honestly, is not just a transactional process. It begins here, right? The will to understand the other side. I just published a book, you know, Informatics and AI for Healthcare, with my colleague, Shetha Jadhav. And what we did in the entire book was, I mean, I empathized with the entire life cycle of a healthcare practitioner, and we tried to map every ML example, every informatics example, parsing, to healthcare, right? And vice versa; there was reciprocation from the other side as well. It was a very interesting exercise. I think that’s how co-design also happens. So collaboration is actually how to do innovation. And again, China has shown in many ways, right, in contrast to the US ecosystem, that co-design can lead to very innovative ideas. And co-design is often lacking even at the level of algorithms and infrastructure, right? Right there, new algorithms can come up, all the way to the application layers. So collaboration also comes by creating an ecosystem where people can participate. Since you alluded again to BharatGen: we have a consortium of nine academic institutions, and the whole collaboration is through a Section 8 company, a not-for-profit company, which engages with for-profit entities but also the academic institutions. Sixty full-time employees work with 100-plus researchers and master’s students. It’s been a very profound exercise in a very short span of time. I mean, we may say we are late. Since you brought up also the landscape outside, which is 1 trillion plus parameters: that’s also our North Star. At least from the India AI vision, that is our goal, to get to at least 1 trillion parameters. But even in the 17 million parameter model that we have released, there is a lot of research due diligence that has gone into the architecture choice, and actually we are very proud of whatever model we released, because of ensuring that, you know, if you have two shared experts, one of them is actually catering to languages and mixed code, and the other is catering to domain; due diligence that was actually done based on the Indian context, right? The fact that we covered 22 languages in our speech model, the text-to-speech model: again, all of that is because we explicitly captured the common phonetic vocabulary of Indian languages. And that’s only possible through this process of empathy.
I mean, a linguist has to empathize with the computer scientist and vice versa. If we do that, we can actually create magic — believe me, you can create magic. We just have to break our silos, and the biggest silos are sitting right here. In fact, an endorsement of this came when we built our LLM-enabled speech-to-text model. We had a projector layer that projected from speech to text, and we used a mixture of experts for the projection. It was very interesting: the experts for Hindi and Marathi performed very similarly — in fact, they were the same expert; the expert got shared. Whereas for Telugu, there was collaboration between the Hindi and Tamil experts. So data and domain knowledge are actually reinforcing each other.
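The expert-sharing behaviour described above can be sketched with a minimal mixture-of-experts routing example. Everything here — dimensions, the top-k gating scheme, the random weights — is an illustrative assumption, not the consortium's actual projector:

```python
# Minimal mixture-of-experts "projector" sketch: a gate scores each input
# against a small set of expert projections and routes it through the top-k,
# weighted by a softmax over the selected gate scores.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_EXPERTS = 8, 4

# Each expert is a simple linear map from "speech" space to "text" space.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(N_EXPERTS)]
gate_w = rng.standard_normal((DIM, N_EXPERTS))  # gating network weights

def moe_project(x: np.ndarray, top_k: int = 2) -> np.ndarray:
    """Route x through the top-k experts, weighted by softmax gate scores."""
    logits = x @ gate_w
    top = np.argsort(logits)[-top_k:]            # indices of the top-k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                     # softmax over chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.standard_normal(DIM)   # stand-in for one speech-frame embedding
y = moe_project(x)
```

With learned rather than random weights, inputs from related languages can end up routed to the same expert, which is the sharing effect described above.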
So this is actually a time when we can break the language barrier. In my interaction on 8th January, I gifted him a book from our consortium called Samanway — Samanway stands for bringing all languages together — and he said, we need to use AI also to show the strength of India: it is not just AI for India, but AI by India. Great, great. I think the point about collaboration — the story we have all heard, that a single stick breaks easily but a bundle of sticks does not — is very true, and that is the moat for India: collaboration. Building that collaborative effort between different universities, bringing nine different universities together to work, is gigantic, and what you have created is amazing. We are also very happy that three days back we announced an MOU with a heritage foundation in the US; we got a lot of support from people in the Bay Area. Once you open up for collaboration, you will find there is support from around the world, and that is the most important thing. Great, great, great.
Thank you, thank you, Professor Ganesh.
So let me come to you, Sunil. I think we all agree that compute is one of the biggest pillars, and the government is doing its bit. But in terms of compute for the country, for the community — can it be a shared commodity? Can the different actors of the country, the ecosystem, come together and build it? How do we solve that problem? Because, as you rightly said, a few thousand versus a few lakhs — that gap is very high.
Number one, the government said: you all come and empanel with us at the right price point and right quality, and you declare how many GPUs you can give. They were not forcing us; they said, you decide how much you want to give. We all got empaneled and contributed GPUs, which were made available to startups. Then the government said: every quarter we will come back and encourage new providers to join the facility, and existing players can also top up their capacities. And each time — because of market forces, as quantities increase and supply increases — pricing starts reducing. The government said: if a new player comes in, they can reduce the price.
Existing players have to match it. And they keep empaneling more and more capacity. That is what has resulted in the 38,000 GPUs the government is talking about — the shared compute facility, which is nothing but the combined compute capacity created by multiple providers like us. And yesterday the Prime Minister announced that 20,000 more are being added to this facility. So I would say the last 18 months have proven this is doable, both as a concept and as a technology. Technically — and Ganeshji can speak very authoritatively on this subject — the same model can be trained across multiple different clusters, and inferencing of course can be done in multiple different places. But even without that, the government has acted very democratically: okay, IIT, we will put you with this service provider; okay, Sarvam, we will put you with this one; okay, GAN, we will put you with this one. The government is democratically making sure industry is encouraged to invest in creating this capability, and because we are getting business, we are scaling up and investing more and more. And they are making it available to people, because India needs its own models. We may use frontier models for certain purposes, but as the minister was saying, 95 percent of the country's use cases can very well be served by a 20-billion to 100-billion-parameter model. Of course, Ganeshji also carries a mandate to create a trillion-parameter model, which the country requires. BharatGen's success and Sarvam's success have proven that India can do it. So I would say the shared compute framework is proven; we just need to scale it up. And my request to the government — which I think they are acting on —
is: don't limit it only to the training of models. Training is one step, now done. These models will now go to the masses for adoption, and that requires millions of GPUs. I am repeating myself, but this is where the government needs to fund the first cycle of inferencing on these models. When users start adopting an agriculture use case, a healthcare use case, an education use case — and multiple UPI-like use cases will come up — it will take time for users to start adopting them, accepting them, making them part of their lives. Only then will users be happy to pay 10 paisa per transaction, or maybe a 50-rupee-per-month subscription; at that point these models and use cases will become self-sufficient and generate revenue. Till then they will need government support. So at least for the first cycle of inferencing — maybe one or two years — the government should not only fund the training of the models but also support the first phase of inferencing on them, so that adoption happens and revenue models emerge. After that, the government can let the private sector invest and go back to its original role of regulator.
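The empanelment-and-price-matching loop described above can be sketched as a toy simulation; the provider names, GPU counts, and prices below are invented purely for illustration:

```python
# Toy model of the shared-GPU empanelment mechanism: providers declare
# capacity at a quoted price; whenever a new quote is lower, every
# empaneled provider matches the lowest price to stay in the pool.

pool = {}  # provider name -> {"gpus": int, "price": float per GPU-hour}

def empanel(name, gpus, price):
    """(Re-)declare a provider's capacity; all providers match the floor."""
    pool[name] = {"gpus": gpus, "price": price}
    floor = min(entry["price"] for entry in pool.values())
    for entry in pool.values():
        entry["price"] = floor   # incumbents must match the lowest quote

empanel("provider_a", 8_000, 2.0)    # first quarter: one provider at 2.0
empanel("provider_b", 10_000, 1.5)   # new entrant undercuts; A matches 1.5
empanel("provider_a", 12_000, 1.5)   # A re-declares a larger capacity

total_gpus = sum(entry["gpus"] for entry in pool.values())
prices = {entry["price"] for entry in pool.values()}
```

After the three declarations, the pool holds 22,000 GPUs and every provider offers the single lowest quoted price — the pooling-plus-price-matching dynamic the panel describes.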
Great. Let me augment that with a few thoughts. The IndiaAI Mission has really lit a single fire, and this fire is spreading to every state in the country — all 28 states and all eight union territories are building AI CoEs, and the mandate for each CoE is to provide compute. Like a wildfire it will spread across the country; it will be phenomenal. But at the same time we have to keep up the pace — the one thing is pace.
Absolutely, Ankit. Two years back, when I said I was putting in 8,000 GPUs, everybody started laughing, because we were starting from a base where India had no GPUs. Today we comfortably say India will get to 50-60,000 GPUs, but even today I can tell you India requires millions of GPUs. In the US, just three or four deep-tech companies collectively own millions of GPUs. India has 1.4 billion people, of whom 1 billion carry smartphones, creating and consuming content every single minute. And, as Ganeshji will tell you, they are all creating voice-based AI, because India's AI will be voice-based: people talking in their own native language, or a mixture of Hindi, English, everything.
And they will be comfortable doing that rather than writing in their native language on a screen, which is not so easy. Innovations are being done so that even from a feature phone or a regular telephone line — not a smartphone — you will be able to talk to an AI model at the back end. When you are talking about 1.4 billion people coming into the AI fold for multiple use cases, just imagine how many GPUs will be needed for inferencing, and how many for training multiple sectoral models. So you are right, Ankit: what we have done in the last two years is kudos to the whole ecosystem, to the government, and to all of us.
But we need to keep building for the next 7, 8, 10 years. Just to give one or two more data points: India creates and consumes 20 percent of the world's data — one-fifth of the world's data — yet only 3 percent of that data is hosted in India. That shows the scope of the infrastructure India needs to build, both at the physical data-center level and at the compute or GPU level. Because we do not want any single country or any single company dictating our digital destiny. We need to be as sovereign as possible.
Thank you, Sunil. Kalyan, let me come to you. One big base for sovereignty is the skill set — to research, develop, deploy, and do all of that responsibly. HCL is one of the companies that has done that over the last two or three years. What would be your nuggets? How can other companies, other players in the country, and other countries do the same?
So look at what India is known for: capability, historically — NASSCOM. But that capability was, for the most part, capability for hire: you build capability to build things for others, and that has been the core business. If any other country thinks about sovereignty today, 50 percent of the world's global tech, engineering services, and development-operations talent is sitting in India — you see the GCC growth. But where is the pivot? The pivot, as the Professor was saying, is towards build. We have always leaned towards service. So: building, research, development, owning your own IP — how do you make India for the world?
I think that is very important, and that is what our journey has been. What we did in 2015-16 — and we have one advantage, we are a company run by a single majority shareholder — Mr. Nadar had a very ambitious vision. He said: we are building products for others; we should start building for ourselves. That was 2015, a very conscious strategy, and he realized that if you want to play in the global market, you need market permission and market access, because people will only buy if you are a software product company. Hence the whole idea of acquiring intellectual property for India. Because if you look underneath these pieces, you could build on open source and other stacks, but suddenly some of these open-source companies are getting acquired and becoming closed source.
This is becoming a very interesting pattern, and suddenly some of it gets classified as dual use: they will say, oh, this is dual-use tech, so I can only release this much. So from a skills standpoint — and I am making a very controversial statement — you need fewer, smarter people. You need engineers more than coders. What is happening is that we are producing coders; you need engineers, people with systems thinking, people with a research bent. I meet MBA students and ask, what did you do? "I did engineering." Then why did you waste four years of your life if you wanted to go and do an MBA? Why are you not going deeper?
Why don't you specialize in a domain? Those are fundamental things, I would say. The big leap is something India can solve very interestingly — and, as he was referring to the PSA: quantum. Given the kind of compute needs we have, and the energy appetite of GPUs, you could completely change the computational paradigm. But that needs fundamental science, research, physics — and no one wants to study physics. If you go back 20 years in this country, everyone wanted to go and do coding. Those are the fundamental skills. So what we are doing, in a very small way, is acquiring and building talent and research pools.
Fifty percent of HCL's software product engineering is in India, but my second-largest engineering center is in Rome, the third is in Israel, and then I am in Perth, Austin, and Chelmsford outside Boston. Why? Because if global companies can come to India, acquire talent to build and research, and then take the IP to the US, I am doing the reverse. AppScan, which is a code-security product, has its security heuristics built in Israel and its SaaS UX built in Boston, but the core engineering is in Bangalore and the IP is registered in India. That is the very different way we are moving: we are now tapping global talent to build for us. We are still only a billion and a half — we are not big — but we are in 130 countries. It is a step change and a long journey; it needs getting away from the short-term thinking of hiring people just to get things built. You have to go to a very different model, and that is what we are starting within the larger scheme of HCL. I think we are walking the right path, continuously acquiring assets and building.
Let me add what I am seeing at the skill level. The persona NASSCOM is focused on is the developer, and the way we code is changing, so NASSCOM has made a concentrated effort to help developers learn the new way of coding and redefine the whole SDLC. The target my team has taken is enabling 150k developers across the country in the next six months: make them AI-enabled, AI-ready, and help them unlearn and learn the new way. That is one thing. But finally — and I should make everyone aware, there will be announcements sometime soon —
with the MIT and the education industry, we are rewriting the whole technical curriculum — B.Tech, M.Tech, MCA, BCA. We are adding more specialization, as rightly said, because we need specialists, not generalists. An engineer studies 48 subjects in four years; at the end, what is he specialized in? It is his luck — the group he gets, the project he takes, whatever job he lands. That is what we are changing. Announcements will happen soon, but that work is happening in the background. Coming back to you, Breno: you have a product so simple that anyone can use it and build agents with it,
and benefit from it. Let me ask you this: for AI to really mature and have impact, the one big piece is adoption. You started with the figure that 95 percent of projects fail, or never go to production. If we have to do adoption at scale, what are the top issues you see, and how do you suggest the companies and folks here mitigate them?
Yeah. I'll give you three — one very specific to India, actually. They relate to our solution, but they are real use cases, because the proof, as I said, is in the pudding. First, you have to solve a real use case, something that actually changes people's lives. AI is complex and people are still trying to figure it out, so it needs to be embedded in people's everyday life. In our case, for example: look at Cursor or Lovable — they changed the lives of software engineers through vibe coding. At GenSpark, we looked at people producing office work —
people producing Excel, PowerPoint — essentially the mechanical part of everyday office work, because if you think about it, so much office work is very mechanical. That is why we saw such massive growth in our solution. So, to your point, adoption comes from something that changes people's lives in a very simple way. The second thing is consolidation of tools. From the time we wake up in the morning, most of us pick up our phones and are inundated with messages and apps, and then we go to our office work, where we probably have a hundred tools to touch. Per our research, people waste on average two and a half hours a day just flipping between different solutions, and that causes loss of context. So if there is a way to consolidate tools, that also drives adoption. The third one, especially in India — and you brought this up — is that there are a lot of different languages in this country.
In this country especially, LLMs really struggle to get the language right, given all the dialects the country has, so being able to naturalize that and bring sovereignty here is very important. And last but not least, people are very scared about data: once they bring data into AI, how is that data going to be treated? So the solution needs to bring a sense of security about how that data will be managed.
Great, thank you, Breno. For the last segment — last question, 30 seconds each — probably starting with Breno, since you have the mic. AI is not a short game; it is a game for the next five years, ten years, decades, probably centuries. What is the challenge we as humanity have to mitigate, so that we do not end up aligned with something hazardous to us?
Yeah. Actually, I was having breakfast the other day and the person serving me asked the exact same question. I think it is how human beings interact with AI. We are still trying to figure out how to interact with AI properly, and given the speed at which AI is evolving, we are still uncertain how to manage that. The line in the sand moves so fast that we cannot really catch up, and no one really knows yet how the interaction between AI and us should work.
So let me map the earlier part onto this: very specific uses of AI for yourself, to make your life simpler, will drive adoption of AI skills, and we have to build the processes for interacting with AI in the long run, because AI is changing and things are changing. Thank you, Breno. Coming back to you, Professor Ganesh — same question, 30 seconds. What is the challenge you see if we build something that is not aligned?
I think the biggest challenge of not making AI aligned is that we will become products, not even consumers. We want to be at the steering wheel. I remember, very fondly, my first machine translation paper — I called it machine-assisted human translation. Obviously that would sound too regressive now. But the key is provenance: how can you leave provenance at every step in the stack? Whether it is data aggregation — which again is aligned with the ecosystem; you need an ecosystem to leave provenance on the data part — or metadata refinement and data curation; provenance at the level of training and tokenization; provenance in observability, the other keyword, at the level of how the model performs.
Models should be glass boxes, because that gives you enough breathing space to decide where you should yield to existing practices versus your own. If you don't have that view, if the recipes are not made available, if the education isn't there — and as a professor I always focus on the education part — I think we will become products.
Thank you, thank you. Sunil, you and then Kalyan.
No, I concur with those views: at the end of the day we should not do AI for the sake of doing AI. It is a means to an end, and the end purpose is benefit for the masses. I remember seeing a YouTube video where the Prime Minister met all the startups — Professor Ganesh was there — and the Prime Minister said: don't use AI to make toys; make AI that benefits the masses with the real problems they face in their real lives. That is where the name of this event comes from — the Impact Summit. And yesterday he also made the point that, unlike previous summits where we were overly concerned with security and governance — things that do need to be done — at the same time, don't only fear AI: with AI you can make your own fortune, build your own future.
So the question is how we create an impact with AI and benefit the masses, while machines do not end up dictating our lives — again, we should not end up becoming the product ourselves. However much AI improves, it will possibly never reach a stage where it acquires human emotions, our sense of gut, our sense of culture, what we convey with our body language and not just our words. So human-in-the-loop, and the human remaining the master of AI, is something we will have to safeguard at all times.
Interaction, don't become the product, keep development human-centric. Kalyan?
I would break this into four key areas. The Professor mentioned consumer AI, so I will break it into consumer, enterprise, government, and critical national infrastructure and defense — all four are going to play, so ten seconds each. In consumer AI, you are, unfortunately, the product. You now have to use data control to decide how much you give to get; it is a give-to-get model. The day you click "I agree" on an Android phone or on Apple Intelligence, you are suddenly the product — you get something back, but there is that give-to-get balance, and that is where the regulator, in my opinion, has a far bigger role to play than in the enterprise. In the enterprise — God made the world in seven days because He had no installed base — go talk to CIOs on the ground: their reality is a big architectural problem, their data landscape is broken. They have to pivot from process-and-workflow-first to data-first, a big shift. They need to start with lineage and metadata — most of these companies don't have metadata. Correct metadata discovery, using techniques like a knowledge graph to understand the metadata, and then organizing your data so that AI can benefit. The third big play is govtech: government-to-citizen (G2C) engagement is massive, and that is where the sovereign AI play comes in — the work Sarvam is doing, or the whole BharatGen effort, is important, because that is where you can host citizen-service platforms. And the last is critical national infrastructure: air-gapped networks, private AI, and defense.
So we need a broken-up view of this whole thing rather than one brush to paint them all. And, in the end, sovereignty is all about choice — making choices. I can run on hyperscaler A or B, I can run on Yotta, I can run on Sify, or I can run on my own infrastructure; I need to have that choice. And second: AI exists for human good, so put people back at the center. We have suddenly pushed the human to the side and made everything about AI.
It is about people using AI around them. That is my thought.
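The metadata-first point above — discover lineage, hold it in a knowledge graph, then let AI consume it — can be sketched minimally. The datasets and the "feeds" relation below are hypothetical examples, not any company's actual catalog:

```python
# Toy metadata knowledge graph: datasets are nodes, "feeds" edges record
# lineage, and a graph walk answers an upstream-lineage question.
from collections import defaultdict

edges = defaultdict(list)   # source dataset -> list of (relation, target)

def add(src, rel, dst):
    edges[src].append((rel, dst))

# Hypothetical catalog entries.
add("sales_raw", "feeds", "sales_clean")
add("sales_clean", "feeds", "revenue_report")
add("fx_rates", "feeds", "revenue_report")

def upstream(node):
    """Return every dataset that (transitively) feeds `node`."""
    found, stack = set(), [node]
    while stack:
        cur = stack.pop()
        for src, rels in edges.items():
            for rel, dst in rels:
                if rel == "feeds" and dst == cur and src not in found:
                    found.add(src)
                    stack.append(src)
    return found

lineage = upstream("revenue_report")
```

Once lineage is queryable like this, an AI system (or a human) can see exactly which raw sources a derived artifact depends on — the organizing step that has to precede "data-first" AI.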
Great, thank you. We have had a lot of good nuggets from everyone, and we will continue this conversation afterwards. As part of NASSCOM, sovereign AI is a big initiative for us; we have been driving it for the last three, three and a half years — Ganesh knows that, Sunil knows that, and we have worked extensively with the services companies. To be clear, it is not an end point: we have to think about sovereignty and about how India builds AGI capability, quantum-AGI capability. That is the journey we are on as NASSCOM, and we are writing a policy document for the government on a sovereign AI and AGI roadmap.
And the QR code is there — it will be up here, and I want all of you to have a look and engage with it. With that — yeah, Ganesh?
The potential is so immense — we have not even scratched the surface, not even touched the tip of the iceberg. Sovereignty is critical because the sheer inefficiency in the entire stack needs to be done away with. GPUs were never designed for building these models; they are legacy. Can we use the large workloads we are already running to do better — ASIC design? Can we use it to build better model-serving engines? There is so much to do. Everyone should get inquisitive about the entire stack; that is where sovereignty comes from.
Absolutely. We are trying to do that in a collaborative way with all of our contributors — please be a collaborator. We will have a QR code; please respond to it and give your inputs. And with that, thank you to my panelists. I loved it, and I hope you did too. Thank you again.
Just one thing I want to add: watch on the 21st — the PM is inaugurating a new JV that HCL is announcing with Foxconn, called India Chips Limited. I would call it patient capital: a 16- and 32-nanometre fab they are creating, basically like an OSAT unit. It will come out only after five years — you have to build the whole thing — but it is also about building the skills, correct? That is a big, important thing, and we have to start now. We cannot wait five years down the line.
Thank you so much to our panelists. I request the panelists to please stay back for a group photo. You can also access the report Ankit has been talking about via the QR code displayed on the digital background, and leave feedback. I am also happy to announce an MOU being signed between Amrita Vishwa Vidyapeetham and NASSCOM right now. Thank you.
Event“The session began with the formal launch of the Sovereign AI research report produced by Amrita Vishwa Vidya Peetham.”
The knowledge base records that Professor Suresh from Amrita Vishwa Vidya Peetham participated in the report launch ceremony, confirming the launch of the Sovereign AI report [S2].
“Sunil Gupta identified a shortage of specialised GPU compute as the decisive bottleneck, stating that the shared‑commodity pool currently holds ~38 000 GPUs with an additional 20 000 announced, and that “millions of GPUs” will be required for nationwide inference.”
Other sources note India’s historic GPU scarcity (only ~8 000 GPUs previously) and recent plans to scale to 50-60 000 GPUs, as well as a programme to make 50 000 GPUs available at low cost, providing additional context on the compute gap and scaling efforts [S1] and [S46].
“95 % of the country’s AI use‑cases can be served by a 20‑100 billion‑parameter model, making large frontier models unnecessary for most applications.”
Multiple knowledge-base entries emphasize focusing on smaller models (20-100 B parameters) to address roughly 95 % of national use-cases, confirming the claim [S19], [S89] and [S90].
The panel shows strong consensus on four pillars: (1) addressing compute scarcity through shared facilities and domestic chip manufacturing; (2) ensuring AI remains human‑centric, aligned and serves the masses; (3) establishing robust data sovereignty via localisation, ownership rights and interoperable data stacks; (4) fostering collaborative, interoperable ecosystems and multilingual/voice‑first AI to drive adoption. There is also broad agreement on the need for massive up‑skilling and government support for the inference phase.
There is high consensus across industry, academia and startups, indicating a unified direction for India’s sovereign AI strategy. This alignment suggests that policy measures, public-private partnerships and capacity-building initiatives are likely to receive coordinated support, accelerating progress toward a sovereign, inclusive AI ecosystem.
The panel shows strong consensus on the need for a sovereign AI ecosystem that benefits the Indian population and the Global South. However, significant disagreements arise around the primary bottleneck (compute vs organisational and financial barriers), the optimal path to resolve compute scarcity (shared GPU pools vs indigenous chip fabrication and quantum research), and the preferred model for talent development (mass up-skilling vs elite, research-oriented engineering). These divergences reflect differing strategic horizons: short-term deployment versus long-term technological independence.
The level of disagreement is moderate to high. While the overarching goal is shared, the contrasting views on technical, policy and capacity-building approaches could lead to fragmented initiatives unless a coordinated roadmap reconciles these perspectives. Without alignment, efforts may duplicate, compete for resources, or stall, potentially slowing India’s progress toward AI sovereignty.
The discussion was shaped by a series of pivotal insights that moved the conversation from identifying a single bottleneck (compute) to constructing a comprehensive sovereign AI ecosystem. Sunil Gupta’s emphasis on GPU scarcity anchored the need for hardware, Kalyan Kumar broadened the view with a data‑centric stack, and Brandon Mello reframed the challenge as organizational adoption. Ganesh Ramakrishnan’s calls for interoperability, collaborative model design, and provenance tied these technical and business strands together, while his ethical warning capped the dialogue. Together, these comments redirected the panel from isolated problems to an integrated strategy encompassing infrastructure, data, talent, policy, and responsible use, ultimately defining a roadmap for India’s sovereign AI ambition.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.