India’s AI Future: Sovereign Infrastructure and Innovation at Scale

20 Feb 2026 16:00h - 17:00h


Session at a glance
Summary, keypoints, and speakers overview

Summary

The session opened with the launch of Amrita Vishwa Vidyapeetham’s sovereign AI research report and a panel of industry and academic leaders, including representatives from NASCOM AI, IIT Bombay, Yotta, Tata Communications, HCL Software and GenSpark [1][3-5]. Ankit Bose, head of AI at NASCOM, introduced the discussion and asked each panelist to identify the single most important action for India to build sovereign AI capability [12][44-45].


Sunil Gupta highlighted that India’s main bottleneck is the lack of abundant compute, noting that the country had insufficient GPU resources until recent partnerships secured nearly 10,000 GPUs, and that a government-backed shared compute facility now aggregates roughly 38,000 GPUs, with an additional 20,000 announced [54-57][66-68][235-237]. He argued that scaling this infrastructure and subsidising the first wave of model inference are essential for widespread adoption, because millions of GPUs will be needed for both training and serving AI services across sectors [239-242][247-250][254-257].


Kalyan Kumar stressed that sovereign AI also requires a robust data layer, describing HCL’s acquisition of vector-DB technology, the upcoming localized vector AI engine, and the need for data platforms, catalogs and contracts to ensure high-quality, distributed data for edge inference [96-104][106-108]. He warned that focusing only on hardware and models ignores the necessity of a data-centric approach, which he sees as the foundation for scaling AI applications [105-107].


Ganesh Ramakrishnan advocated interoperability at every stack layer, arguing that it enables participation, alternative solutions and scalable collaboration among academia, industry and government [151-156][162-168]. He linked interoperability to data ownership, proposing a data-product and data-catalog framework that respects creators’ rights and facilitates secure sharing [170-176][178-181]. He also emphasized co-design and collaboration across institutions, citing his consortium of nine academic bodies and recent international MOUs as examples of how joint effort can accelerate model development [191-199][203-209].


Brandon Mello identified three adoption barriers: “ROI invisibility” where CFOs cannot quantify returns, data-trust and compliance friction, and the lack of executive sponsorship, all of which stall AI pilots from reaching production [119-124][129-134][139-142]. He suggested that clear ROI metrics, streamlined governance and dedicated champions are needed to move projects beyond the pilot stage [125-128][135-138][140-142].


The panel collectively agreed that building sovereign AI requires coordinated compute provisioning, interoperable data infrastructure, skill development for engineers and researchers, and sustained government-industry collaboration [216-222][266-274][426-432]. The discussion concluded with a call for continued collaboration, the upcoming NASCOM “7 AI” initiative, and the intention to draft a national sovereign AI and AGI roadmap, underscoring the strategic importance of these efforts for India and the Global South [414-419][426-433].


Keypoints


Major discussion points


Compute scarcity is the bottleneck for sovereign AI in India.


Sunil Gupta stresses that while India has talent, data, and market size, it has lacked sufficient GPU-based compute, which he and his company have been trying to supply (e.g., “the core problem … how do you make compute available in an abundant way” [46-60]; “we are running almost 10,000 chips” [66-68]; “India will need multiple million GPUs” [75-78]). He later describes the government-backed shared-compute facility that now aggregates ~38,000 GPUs and is being expanded (e.g., “the shared compute facility … combination of the compute capacity created by multiple providers” [224-236]; “India will be going to 50-60,000 GPUs … we need millions of GPUs” [239-258]).


A robust data stack and edge infrastructure are essential alongside hardware.


Kalyan Kumar outlines the need for a centralized-yet-distributed data platform, vector databases, and edge-ready inference, arguing that “the data platform is going to become very important” for scaling AI deployments [96-108].


Interoperability and ecosystem collaboration are critical for scaling and inclusivity.


Ganesh Ramakrishnan calls for “interoperability at every layer” to enable participation, alternative solutions, and data-product ecosystems [151-168]. He later adds that co-design, academic-industry consortia, and open data contracts are the “biggest takeaway” for building India’s AI moat [193-205].


Adoption failures stem from organizational and ROI challenges, not just technology.


Brandon Mello points out that 95% of AI pilots never reach production because of “ROI invisibility,” “data and trust and compliance friction,” and the “champion problem” (lack of executive sponsorship) [119-143]. He later reinforces that solving real-world use cases, consolidating tools, and addressing language and data-security concerns are needed for mass adoption [335-351].


Skill development and a shift from services to building IP are required for long-term sovereignty.


Kalyan emphasizes moving from a service-oriented model to building proprietary products, hiring “engineers, not just coders,” and investing in deep research (including quantum compute) to create home-grown IP [266-306]. NASCOM’s parallel effort to up-skill 150,000 developers and revamp curricula is cited as a concrete step [312-327].


Overall purpose / goal of the discussion


The panel was convened to launch the Sovereign AI Research Report (Amrita Vishwa Vidyapeetham) and to chart a coordinated roadmap for India, and the broader Global South, to achieve AI sovereignty. Participants were asked to identify the single most impactful action their domain could take, with the aim of aligning industry, academia, and government around concrete priorities (e.g., compute availability, data infrastructure, interoperability, talent, and adoption pathways) that will enable India to develop, deploy, and control its own AI models and services.


Overall tone and its evolution


Opening (0-5 min): Formal and celebratory, with introductions and acknowledgments of the report launch.


Mid-session (5-30 min): Shifts to a problem-solving tone; speakers present urgent challenges (compute shortage, data stack gaps) and propose strategic solutions, often with a sense of urgency (“we need millions of GPUs,” [239-258]).


Later segment (30-45 min): Becomes collaborative and optimistic, emphasizing interoperability, consortium building, and skill development as enablers.


Closing (45-55 min): Returns to a call-to-action tone, urging participants to contribute to shared resources (QR code, MOU) and stressing the long-term, nation-building mission of sovereign AI.


Overall, the discussion moves from introductory formality to a focused, solution-oriented dialogue, ending with a unifying, forward-looking call for collective action.


Speakers

Speaker 1


Role / Title: Moderator / Event host (introduced the panel and announced the report launch)


Sunil Gupta


Role / Title: Co-founder, MD and CEO of Yotta (transcribed in places as “IOTA”)


Areas of Expertise: Data centre operations, sovereign cloud infrastructure, large-scale GPU compute for AI models


Affiliation: Yotta – runs data-centre campuses and built the Sovereign Cloud in India [S1][S2]


Ganesh Ramakrishnan (also listed as Professor Ganesh Ramakrishnan)


Role / Title: Professor, IIT Bombay (distinguished panelist)


Areas of Expertise: Sovereign AI, foundation model development, interoperability, multilingual AI for India


Affiliation: IIT Bombay [S6][S7]


Ankit Bose


Role / Title: Head of AI, NASCOM


Areas of Expertise: AI strategy and implementation for NASCOM, developer enablement, AI education initiatives


Kalyan Kumar


Role / Title: Chief Product Officer (CPO), HCL Software


Areas of Expertise: Enterprise software products, sovereign-by-design software, data platforms, vector databases, AI infrastructure


Brandon Mello (referred to as Brenno Mello)


Role / Title: Founding GTM Executive, GenSpark (Genspark.ai)


Areas of Expertise: AI product commercialization, go-to-market strategy, enterprise AI adoption, agentic AI for knowledge workers [S14][S15]




Additional speakers:


Professor Suresh – Mentioned in the opening remarks as a professor invited to the stage; no further details provided.


Bharat Jain – Panelist; no title or affiliation specified in the transcript.


Bhaskar Gorti – EVP, Tata Communications (listed in the introductory panel lineup).


Full session report
Comprehensive analysis and detailed insights

The session opened with a formal inauguration of the Sovereign AI Research Report produced by Amrita Vishwa Vidyapeetham. Speaker 1 thanked the audience, introduced the report’s release and invited senior representatives from Amrita – Pro-Vice-Chancellor Dr Manisha V Ramesh and the head of the AI-Safety Research Lab Dr Shiva Ramakrishnan – to the stage, followed by the panelists: Professor Ganesh Ramakrishnan (IIT Bombay), Bharat Jain, Sunil Gupta (Yotta), Bhaskar Gorti (Tata Communications), Kalyan Kumar (HCL Software) and Brenno Mello (GenSpark.ai) [1][3-5][12-18].


Ankit Bose, head of AI at NASCOM, opened the discussion by noting the successful launch and asking each participant to identify the single most important action India should take to build sovereign AI capability for the nation and the Global South [8-13][44-45].


Compute scarcity was identified as the primary bottleneck. Sunil Gupta explained that, although India possesses talent, data and a massive market, it lacks the specialised GPU-based compute required for modern AI. He framed the core problem as “how do you make compute available in an abundant way so that it becomes a hygiene factor” [46-60]. By the time of the panel his company was operating “almost 10,000 chips” and had trained the majority of the sovereign models now being released [66-68]. He warned that India will need multiple million GPUs to support both training and inferencing at scale [75-78]. To address this, the government has created a shared-compute facility that aggregates capacity from multiple providers, currently totalling about 38,000 GPUs, with an additional 20,000 announced [224-236][237]. Gupta stressed that this facility must be expanded to “50,000-60,000 GPUs” and ultimately to “millions of GPUs” to meet the demands of a billion-plus user base, especially as AI in India will be largely voice-first and accessed on low-end devices [239-258][244-247].


Turning to the data stack, Kalyan Kumar highlighted HCL’s acquisition of vector-DB technology (Actian’s Ingres engine and a Dutch CWI asset) and announced a forthcoming “localized vector AI engine” designed for edge deployment [96-104]. He argued that “the data platform is going to become very important” because AI applications will only scale if they are built on high-quality, well-catalogued data, with data products, contracts and metadata forming the foundation for trustworthy AI [105-108]. He also emphasized the need for a skill shift from coders to engineers and outlined the joint venture with Foxconn – India Chips Limited – to build a 16/32 nm fab, describing it as “patient capital” that will secure future compute capacity even though the fab will take five years to become operational [266-292][441-447].
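The data-product framework Kalyan describes (a catalog of data products governed by contracts) can be made concrete with a minimal sketch. Everything below, including the class names, field names and sample figures, is invented for illustration and is not an HCL API:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DataProduct:
    """A catalogued dataset with an identified owner (illustrative only)."""
    name: str
    owner: str            # "jiska data uska adhikar": the creator keeps the rights
    schema: tuple         # column names a consumer can rely on
    quality_score: float  # 0.0-1.0, e.g. from upstream validation


@dataclass(frozen=True)
class DataContract:
    """Terms under which one named consumer may use a data product."""
    product: DataProduct
    consumer: str
    allowed_fields: tuple
    min_quality: float = 0.8

    def grants(self, consumer: str, fields: tuple) -> bool:
        """True only if consumer, requested fields, and quality all satisfy the contract."""
        return (
            consumer == self.consumer
            and set(fields) <= set(self.allowed_fields)
            and self.product.quality_score >= self.min_quality
        )


# A tiny in-memory "catalog": product name -> product
catalog = {}
crops = DataProduct("crop-yields", owner="farmer-coop-17",
                    schema=("district", "season", "yield_kg"),
                    quality_score=0.92)
catalog[crops.name] = crops

contract = DataContract(product=crops, consumer="agri-ai-startup",
                        allowed_fields=("district", "yield_kg"))
print(contract.grants("agri-ai-startup", ("district",)))  # request within terms
print(contract.grants("someone-else", ("district",)))     # wrong consumer, denied
```

The point of the sketch is the ordering Kalyan insists on: the catalog and the product exist first, and the contract, not the consumer, decides what access is granted.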


Professor Ganesh Ramakrishnan expanded the discussion to interoperability across the entire AI stack. He asserted that “interoperability at every layer encourages participation” and enables alternative solutions, scale-out architectures and the ability to balance fidelity, latency, sensitivity and specificity [151-160]. Ganesh linked interoperability to a data-product ecosystem, proposing “data catalogs and data contracts” that respect the creator’s rights (the principle “jiska data uska adhikar”, roughly “whoever creates the data holds the rights”) and facilitate secure sharing [165-176][178-181]. He illustrated the concept with his own consortium of nine academic institutions, which co-designs models such as a 22-language speech-to-text system using a mixture-of-experts architecture, thereby creating a “voice-first, multilingual AI” that can run on feature phones [191-209][212-213][244-247].


Brenno Mello shifted the focus to adoption barriers. Citing an MIT report, he noted that “95% of AI pilots never make it to real production” and identified three systemic obstacles: “ROI invisibility”, where CFOs cannot quantify returns, leading to stalled pilots; “data-trust and compliance friction” caused by departmental silos; and the “champion problem”, a lack of executive sponsorship [119-124][129-134][139-142]. He suggested that clear ROI metrics, streamlined governance and dedicated champions are essential to move projects beyond proof-of-concept [125-128][135-138][140-142].
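Mello’s “ROI invisibility” is, at bottom, a missing-baseline problem: a return can only be computed against a measured pre-pilot baseline, which most organisations never capture. A back-of-envelope sketch in Python, with every figure invented purely for illustration:

```python
def pilot_roi(baseline_hours: float, post_pilot_hours: float,
              hourly_cost: float, tasks_per_year: int,
              annual_tool_cost: float) -> float:
    """Annual ROI of an AI pilot as (savings - cost) / cost.

    Requires a measured baseline (baseline_hours), which is exactly
    what Mello says most organisations lack. All inputs here are
    illustrative assumptions, not data from the session.
    """
    hours_saved = (baseline_hours - post_pilot_hours) * tasks_per_year
    savings = hours_saved * hourly_cost
    return (savings - annual_tool_cost) / annual_tool_cost


# Example: a report that took 4h now takes 1h, 500 reports/year,
# a $60/h fully loaded labour cost, and a $50,000/year tool:
roi = pilot_roi(4.0, 1.0, 60.0, 500, 50_000)
print(f"{roi:.2f}")  # 0.80, i.e. an 80% annual return
```

With such a calculation in hand, the conversation with the CFO that Mello describes changes from “we cannot quantify it” to a dispute about the inputs, which is a far easier path to project approval.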


Summarising these insights, Ankit emphasised that closely collaborating teams with a single point of view and executive sponsorship are required to overcome adoption challenges [144-146].


The conversation returned to the theme of compute as a shared national commodity. Ankit asked whether the country could treat compute like a utility, with the government coordinating providers to ensure low-price, abundant access [215-222]. Gupta affirmed that the current empanelment model already creates a “shared-compute facility” and argued that the government should also subsidise the first inferencing cycle of sovereign models to catalyse early revenue-generating use cases [236-242]. He reiterated that India creates and consumes “20% of the world’s data” yet only “3% is hosted in India”, underscoring the urgency of domestic infrastructure [254-257].


Skill development and the shift from services to indigenous IP were addressed by Kalyan. He traced HCL’s evolution from a service-oriented firm to a product-builder, noting the need for “engineers, not just coders”, and for research in fundamental science such as quantum computing [266-292][298-306]. The India Chips Limited joint-venture was presented as a long-term investment in domestic compute capacity [441-447][450-453].


Complementing this, Ankit described NASCOM’s ambition to up-skill 150,000 developers within six months, rewrite B.Tech/M.Tech curricula and introduce specialisations that produce “smarter engineers” capable of building sovereign AI [312-320][326-327].


Across the panel there was strong agreement on several pillars: (1) the necessity of massive, affordable GPU compute delivered through a shared-compute facility; (2) the importance of a modern, interoperable data stack with provenance, catalogs and contracts; (3) the need for collaborative, co-design ecosystems that span academia, industry and government; (4) the vision of a voice-first, multilingual AI serving billions; and (5) the imperative of human-in-the-loop, ethically aligned AI [46-60][96-108][151-168][191-209][244-247][371-376][385-386].


Rather than conflict, the panel offered complementary perspectives: Gupta emphasized expanding the shared-compute facility, while Ganesh stressed that interoperability and scale-out architectures are essential to reach India’s diverse population [224-236][151-160]. On funding, Gupta called for government support for the first inferencing phase, whereas Kalyan focused on talent development and the India Chips Limited venture rather than direct subsidies [236-242][266-292][441-447]. On talent strategy, Ankit’s mass-upskilling plan contrasted with Kalyan’s call for a smaller, highly skilled engineering cohort [312-320][287-292]. Regarding data, Ganesh emphasized ownership, data catalogs and contracts rather than monetisation [165-176].


The panel concluded with a call to action. NASCOM announced the forthcoming “7 AI” initiative, a draft national sovereign-AI and AGI roadmap (accessible via a QR code), and the signing of an MOU between Amrita Vishwa Vidyapeetham and NASCOM to deepen collaboration [414-424][426-433][454-455]. Participants were urged to provide feedback, stay for a group photo and continue the dialogue.


In summary, the consensus was clear: India must combine massive, affordable compute, interoperable data infrastructure, skilled talent, and coordinated public-private partnership to achieve sovereign AI for the nation and the Global South.


Session transcript
Complete transcript of the session
Speaker 1

Thank you. Thank you. Hello and good afternoon, everyone. Thank you for joining us for this session on sovereign AI for India. Before we begin the panel discussion, we are happy to announce that there will be a launch of the sovereign AI research report by Amrita Vishwa Vidyapeetham. May I invite the following representatives to kindly join us on stage for the release of the report. From Amrita, we would like to invite Pro Vice Chancellor Dr. Manisha V. Ramesh and, if available, the head of the AI safety research lab, Dr. Shiva Ramakrishnan, and any other representatives from Amrita Vishwa Vidyapeetham that you would like to invite on stage, sir. Alright. Alright, Professor Suresh, if we could please have you on stage. I would like to invite Mr.

Ankit Bose, Head, NASCOM AI, on stage as well. Thank you so much. Yeah, yeah, absolutely. You can take a seat, sir, if you want. Thank you. Thank you. Thank you, everyone. We now move into the panel discussion. To guide this conversation, we are joined by Mr. Ankit Bose, head, NASCOM AI. Joining him today are our distinguished panelists: Professor Ganesh Ramakrishnan from IIT Bombay and Bharat Jain, Mr. Sunil Gupta, co-founder, MD, and CEO of Yotta, Mr. Bhaskar Gorti, EVP, Tata Communications, Mr. Kalyan Kumar, CPO, HCL Software, and Mr. Brenno Mello, founding GTM executive, GenSpark. Ankit, over to you. Professor Ganesh will be shortly joining us in two minutes. Thank you.

Ankit Bose

So hi everyone. I think we had a good launch and we have a very strong panel. So Ganesh was on the way and he is still stuck in traffic; he is walking in. So meanwhile we start the discussion. I think, you know, happy to have a very strong panel. So why don’t we do this: we start with the introductions, right? I think Kalyan, we can start with your quick introduction, then Sunil and then Breno.

Kalyan Kumar

Yeah, hi, Kalyan Kumar, call me KK. I run the software product business for HCL, HCL Software. We are the largest India-headquartered enterprise B2B software company, with about 10,000 customers and about 1.5 billion dollars of revenue. And we are very intricately involved in building software products which are sovereign by design.

Sunil Gupta

Hello, good afternoon. Good afternoon. Good afternoon. My name is Sunil Gupta. I am co-founder and CEO of Yotta. So we run data center campuses. We have built the Sovereign Cloud in India, which is running a whole lot of mission-critical Government of India applications. Recently, we migrated Bhashini from a hyperscale cloud to our Sovereign Cloud. Our claim to fame in the last two years is that we have got thousands of NVIDIA GPU chips into India. And all the models which you are hearing getting launched in this summit, the Sarvam model, IIT Bombay’s BharatGen model, or the Soket model, they all have been trained on our GPU clusters, and now they are being made available for public use.

Thank you.

Brandon Mello

Hello. Good afternoon. My name is Brandon Mello. I work for Genspark.ai, a Palo Alto-based company. We have been around for about 10 months. We are the fastest-growing AI company right now in the world. We just broke $200 million in ARR. Our solution has been incredibly well received and adopted in the market; India is our third largest market. And our solution is to drive adoption from the bottom up by bringing agentic AI to the knowledge worker. Thanks for letting me be here.

Ankit Bose

Great, great, great. And hi, folks. I’m Ankit Bose. I head AI for NASCOM. So whatever NASCOM does in AI, I support that, I lead that, right? And we will be joined by Ganesh, who is from BharatGen. He’s leading the, you know, sovereign AI model-building effort in the country, right? So meanwhile, until he joins, let’s start. I think, Sunil, let me start with you, right? The first question I would want to ask, after five days of immense brainstorming around, you know, AI for the country, AI for the world, right: what is the top thing you say which, you know, India has to do, right, to build its sovereign capability, not only for the country, but also for the Global South?

Sunil Gupta

Yeah. Ankit, if I take everybody just two years, or maybe two and a half years, down the line, when ChatGPT got on the world scene, basically AI capability came into consumer hands. A big debate happened in India, obviously in government circles, industry circles, telecom circles, technology circles, everywhere: that while India has got everything which is needed to succeed in AI. Like, we have been software and services leaders for the last three decades. We have a startup ecosystem. On the skill-set index of mathematics, science, engineering, we are always the best. As a market, we are literally close to 1 billion people carrying smartphones, creating and consuming content. AI ultimately resulted, in most of the cases, you know, in some apps which will be giving some productivity to us.

So both on the demand side and the supply side, including data sets, like India will have the best data sets available. So everything India has, but what India was not having at that time was compute. Because AI does not run on regular data centers or regular CPU compute; it required specialized GPU compute. So I would say that the biggest problem, and of course you have to take care of the entire stack, models, data sets, applications, everything, but the core problem to solve for taking AI to the masses was: how do you make compute available in an abundant way so that we don’t think of that. That should become just a hygiene which is always available.

And that’s the problem we tried to solve. You know, way back at that time Jensen was in India. I happened to meet him and he said, we as NVIDIA are very committed to India. We can extend your priority allocation. We can give you engineering support, everything. But somebody has to take a step forward of not only putting in your data centers and power and everything; you also need to put in chips, and we will give you everything. And from there to now, today we are running almost 10,000 chips. You know, as I said, the majority of the models which you are hearing, the sovereign models getting launched in India, they have been trained on our GPUs.

But the real thing, I would say, is starting now. Many of these models are great; you must have heard of the Sarvam model beating Gemini and ChatGPT on many of the benchmarks. And they are making them absolutely for India use cases, like OCR, you know, handwritten notes and all that thing, how do you convert them and all that stuff. So these are real India purpose-built use cases and models. When they start scaling, when they start getting adopted by the masses. We have seen how one UPI changed our lives. Imagine we have a UPI in 50 different sectors in the country; a 50-UPI movement will come into India. At that time, the number of GPUs required will be in the millions. Today we are happy that as a country we have X thousand GPUs.

But if a single company like SpaceX or like Meta can have 1 million GPUs, India as a country requires multiple million GPUs. So while we are working on all the upper layers of the stack, and Indians are very good at that, models, data sets, applications, we need to solve this issue. We are taking care of infrastructure problems. We are taking care of railways and roadways and airports. We also need to create this digital infrastructure, take care of that, make it available abundantly to every startup, every, you know, I would say, academic community. We make it available at a very low price. The Government of India’s IndiaAI Mission is playing a humongous role. On one side, they have asked people like us, incentivized us, to invest into the GPUs.

But they are taking GPUs from us, putting in their own money, putting in their own subsidy, and then giving it to the Sarvams and IITs and Sokets of the world. And they say: now you don’t have to bother about money; just go and make India’s flagship model. And the result is to be seen: in two years, India has come a long way, and we have a long way to go. The compute problem has to be solved.

Ankit Bose

Great. Thank you. Thank you, Sunil. Same question to you, KK. You know, what is the one thing you feel can add the edge, right? The whole.

Kalyan Kumar

When you look at sovereign, I think the Minister of Electronics and IT, Mr. Vaishnaw ji, was mentioning the five-layer stack, right? And that’s where, for what Sunil mentioned, in an easier way I use the word infrastructure, which can combine energy, computing power, cooling, the whole stack. So that’s providing that layer, and then there is the whole model piece. I think as you train, and when you start to deploy at scale, a couple of things become very interesting. So you need to start to also build a data stack: data platforms, vector DBs, edge vectors. I personally think you can do as much centralization as you want; the way the data consumption model is going, it is going to get highly distributed, going down into the edge, correct? So you need a very different kind of inferencing and those capabilities. So you need a data layer. Something which we are doing is very interesting: outside of Oracle and IBM, the only other company which has all the patents for databases is us, because we acquired Actian.

So Actian owns the original patents of Ingres. And every derivative today, whether it is Postgres or any of them, is basically an Ingres query-processor derivative, including SQL Server and others. Like that, we also acquired an asset from CWI in the Netherlands. So we have a VectorDB, the original vector engine. So we’ve been building a lot of that asset portfolio, HDB, and in April we’re going to release a localized vector AI engine, which again can run there, because as AI PCs become more and more common and the edge becomes more and more important, we are building that. And building the data disciplines. I think that’s a very important layer. A lot of times what happens is we worry about infrastructure, and then we think about the model, and then the app.

The data platform is going to become very important, because as we’re building the data platform, the enterprise will only scale if you get your data-centric approach: data products, data contracts, data catalogs and those kinds of things. Because finally the AI use case is going to be built on how good the quality of your data is. Yeah.

Ankit Bose

Great point. I think compute, data, the data stack for the country, I think very important. Let me come to Breno. Again, the same question, right? If India has to build sovereign AI for the country and the Global South, what’s the top one thing you will say which will help the whole cause?

Brandon Mello

Yeah, so it’s interesting. MIT last year ran a big report and they said 95% of AI pilots actually never made it to real production, right? So in my point of view, this is never really a tech problem; it’s really a production problem, right? So in my point of view, actually, when I look at our solution, right, we are able to deploy over thousands of companies in only eight weeks, right? So when I look at that, it really comes down to three reasons why this is happening in the industry, right? And the first one is what I call ROI invisibility, right? So when you look at companies right now, it’s really easy to get a budget for a pilot, right?

But when it comes to reality, can they get a budget to get the project done, right? So the data that I have to share with you guys, which is astonishing, is that a third of CFOs nowadays really cannot quantify ROI inside of their organizations, right? And only one out of ten actually have tools that can measure ROI, right? So what ended up happening is, whenever you talk to those organizations, those companies, and you ask, like, how are you actually going to measure productivity gains, they don’t have the answer, right? Like, what’s the baseline? They don’t have the answer, right? So whenever you bring it to the CFO to get that project approval, it ends up with the project never getting approved, and it ends up in that cycle where it gets stuck in a pilot, right?

So number two is, I think, data and trust and compliance friction, right? I think there’s huge red tape in terms of what happens inside of organizations, right? I think it’s very departmentalized, where each part of the organization is trying to solve for each part of the department, right? So when I look at IT, it’s trying to solve for IT. Procurement is trying to solve for procurement. Because no one’s really trying to solve that as an organization, the project ends up stalling. So something that can essentially take a few months to resolve ends up taking six months to a year.

And like I say in sales, time kills every deal. Last but not least, I think my third point is the champion problem. I think there’s a severe issue within organizations nowadays is there’s really no executive sponsorship. And whenever you don’t have executive sponsorship, especially for AI opportunities, deals never get approved. And people, especially at the bottom tier, they don’t understand what’s going on. And when there’s no clear alignment within the middle tier management, deals never get approved.

Ankit Bose

Great. I think let me summarize probably the three points: that you need closely collaborated teams, right, with a single point of view, with executive sponsorship. I think that will solve the adoption piece at last, right? Let me come to you, Professor Ganesh. Ganesh, I think we have discussed a lot on AI for the last five days, for India, for the globe, you know, and then we had three points of view. I asked them, give me one top thing. You heard probably from Breno and KK and then from, you know, Sunil, of course. What is your top one take which India should do so that we can lead the sovereign AI race for the country and the globe?

Ganesh Ramakrishnan

I would suggest interoperability at every layer. I think it was also alluded to by earlier panelists. Interoperability encourages participation, and in the words of the PSA, the vision is meaningful participation, right? Interoperability also helps you present alternatives, because there is no one-size-fits-all, and you need to also ensure that in the trade-off between fidelity and latency, or between sensitivity and specificity, you are able to find the right sweet spot which is suitable for you; you can pick something that is appropriate. Just on a lighter note: I was driving from the PSA office and there was such a traffic jam, which most of you experienced, so I exercised my sovereignty and I started walking. So you find alternatives when you think sovereign. Three kilometers; that’s why I was late. So there are alternatives, and also provisions for human participation, much better. There could be places where AI could be substitutional, but many other places where you may want it to be just supplementary or complementary.

So alternatives are another thing that interoperability provides for. And the very key is scale-out. If just by scaling up we could cater to everyone, great; that would at least tick one checkbox, which is people being catered to. But we are not even there: scaling up alone will not cater to everyone, because the capabilities are not there. And even if, hypothetically, they were, participation would still ensure that people are part of the process and that it is informed. With BharatGen, I take pride in one of our consortium members, IIM Indore; we are a consortium of nine academic institutions. And what is an Institute of Management doing? They do a fabulous job of going to many of the tier-two cities, to the people who have data, and engaging in conversation and education.

That data is an asset, and you can actually transform that asset into IP generation, not just source data. So dialogue and informed decision-making are where participation is encouraged when you have interoperability. I just want to add to what he said; he made a very interesting point. How do you monetize data? This needs a very different approach, because today you are simply sourcing data. And I think the PM yesterday made an amazing statement: jiska data, uska adhikar (whoever owns the data holds the rights). Very interesting. What he is saying is that the creator of the data, the producer of the data, and the consent provider for its use all have a role to play, and that is why I have been using the terms data product and data catalog.

So you need a catalog first; then you build a data product; and then you set up a data contract, which is absolutely fundamental for interoperability. Because if that gets solved, I can take my own personal data and say: here is my data catalog, and you may access these five things. India has proven itself in an amazing way with identity and payments, so we can set up an environment where you can really build this, and the data benefactor is the same person. So, great point, Professor. It probably means removing or optimizing the various layers and taking this to the last person in line, and it will help us scale to the 1.4 billion people we need to reach.
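The catalog, data product, and data contract chain described above can be sketched in code. This is a minimal illustrative sketch of the idea, not any real platform's API; every class, field, and name below is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataProduct:
    """A cataloged data asset with explicit ownership and consent scope."""
    name: str
    owner: str              # creator/producer of the data
    allowed_fields: tuple   # fields the owner has consented to share

@dataclass
class DataContract:
    """An agreement granting a consumer access to consented fields only."""
    product: DataProduct
    consumer: str
    requested_fields: tuple

    def approved_fields(self):
        # Access is the intersection of what was requested and what the
        # owner consented to ("jiska data, uska adhikar").
        return tuple(f for f in self.requested_fields
                     if f in self.product.allowed_fields)

# The catalog is the discovery layer: consumers find products here.
catalog = {}

def register(product: DataProduct):
    catalog[product.name] = product

register(DataProduct("health-records", owner="citizen-42",
                     allowed_fields=("age", "region")))
contract = DataContract(catalog["health-records"], consumer="research-lab",
                        requested_fields=("age", "region", "name"))
print(contract.approved_fields())  # only the consented fields survive
```

The point of the sketch is the ordering the speaker insists on: nothing is shared until a product exists in a catalog and a contract filters access down to what the owner consented to.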

Thank you for that. Let me ask you a second question, a very direct one. As a country, we are building our own foundation models, and you are one of the people building them. At large, we have built sub-500-billion-parameter models, while globally models are heading toward 5 trillion parameters or more. The gap is huge. What do you think India's moat can be when we are at such a disadvantage, even though we have to handle it aggressively?

Ganesh Ramakrishnan

Yeah, so the other important takeaway, which probably addresses part of what you are asking, is cooperation.

Collaboration, honestly, is not just a transactional process. It begins here, with the will to understand the other side. I just published a book, Informatics and AI for Healthcare, with my colleague Shetha Jadhav. Throughout the book I tried to empathize with the entire life cycle of a healthcare practitioner, and we tried to map every ML and informatics example to healthcare, and vice versa; there was reciprocation from the other side as well. It was a very interesting exercise, and I think that is how co-design happens. So collaboration is really how you do innovation, and China has shown in many ways, in contrast to the US ecosystem, that co-design can lead to very innovative ideas. Co-design is often lacking even at the level of algorithms and infrastructure, where new algorithms can emerge, all the way up to the application layers. Collaboration also comes from creating an ecosystem where people can participate. Since you alluded again to BharatGen: we have a consortium of nine academic institutions, and the whole collaboration runs through a Section 8 (not-for-profit) company, which engages with for-profit entities as well as the academic institutions; 60 full-time employees work with 100-plus researchers and master's students. It has been a very profound exercise in a very short span of time. We may say we are late. Since you brought up the landscape outside, which is at 1-trillion-plus parameters: that is also our North Star, at least in the IndiaAI vision; our goal is to get to at least 1 trillion parameters. But even for the 17-million-parameter model we have released, a lot of research due diligence went into the architecture choice, and we are actually very proud of it: ensuring, for example, that of two shared experts, one caters to languages and mixed code while the other caters to domain. That due diligence was done based on the Indian context. The fact that we covered 22 languages in our speech model and our text-to-speech model likewise came from explicitly capturing the common phonetic vocabulary of Indian languages. And that is only possible through this process of empathy.

I mean, a linguist has to empathize with the computer scientist, and vice versa. If we do that, we can actually create magic, believe me. We just have to break our silos, and the biggest silos are sitting right here. In fact, an endorsement of this came when we built our LLM-enabled speech-to-text model. We had a projector layer that projected from speech to text, and we used a mixture of experts for the projection. It was very interesting: the experts for Hindi and Marathi performed very similarly; in fact they were the same expert, the expert got shared. Whereas for Telugu, there was collaboration between the Hindi and Tamil experts. So data and domain knowledge are actually reinforcing each other.
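The shared-expert behaviour described above, where related languages land on the same expert while a distant language routes elsewhere, can be illustrated with a toy mixture-of-experts gate. This is a pedagogical sketch with made-up two-dimensional "embeddings" and weights, not the released model's architecture.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

# Toy gating weights: each row scores tokens for one expert.
EXPERT_W = [
    [1.0, 0.0],   # expert 0: responds to feature 0 (e.g. an Indo-Aryan signal)
    [0.0, 1.0],   # expert 1: responds to feature 1 (e.g. a Dravidian signal)
]

def route(embedding):
    """Return the index of the top-scoring expert for a token embedding."""
    scores = [sum(w * x for w, x in zip(row, embedding)) for row in EXPERT_W]
    probs = softmax(scores)
    return max(range(len(probs)), key=probs.__getitem__)

# Hypothetical embeddings: Hindi and Marathi share the dominant feature,
# so the gate sends both to the same expert; Tamil routes differently.
hindi, marathi, tamil = [0.9, 0.1], [0.8, 0.2], [0.1, 0.9]
assert route(hindi) == route(marathi)   # shared expert
assert route(tamil) != route(hindi)     # different expert
```

The design point is that expert sharing is not hand-assigned; it emerges from the gate scoring similar inputs similarly, which is the pattern the speaker observed between Hindi and Marathi.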

So this is actually a time when we can break the language barrier. In my interaction with the PM on 8th January, I gifted him a book from our consortium called Samanway; Samanway stands for bringing all languages together. And he said: we need to use AI also to show the strength of India. It is not just AI for India, but AI by India.

Ankit Bose

Great, great. The point of collaboration, and the story we have all heard, that a single stick breaks easily but a bundle of sticks does not, is very true, and that is the moat for India: collaboration. Building that collaborative effort between universities, bringing nine different universities together to work, is gigantic, and what you have created is amazing. We were also very happy that three days back we announced an MOU with a heritage foundation in the US; we got a lot of support from people in the Bay Area. Once you open up for collaboration, you find there is support from around the world, and that is the most important thing.

Thank you, thank you, Professor Ganesh.

Ankit Bose

So let me come to you, Sunil. We all agree that compute is one of the biggest pillars, and the government is doing its bit. But in terms of compute for the country, for sovereignty, can it be a shared commodity, something that the different actors of the country's ecosystem come together and build? How do we solve that problem? Because, as you rightly said, a few thousand versus a few lakh GPUs is a very big gap.

Sunil Gupta

Number one, they said: you all come and empanel with us at the right price point and right quality, and you declare how many GPUs you can give. They were not forcing us; they said, you decide how much you want to give. We all got empaneled and contributed GPUs, which were made available to startups. Then the government said: every quarter we will come back and encourage new providers to join the facility, and existing players can also top up their capacities. And each time, because of market forces, as quantities increase, supply increases and pricing starts to fall. The government said: if a new player comes in, they can reduce the price.

Existing players will have to match, and they keep on empaneling more and more capacity. That is what has resulted in the 38,000 GPUs the government talks about: the shared compute facility, which is nothing but a combination of the compute capacity created by multiple providers like us. And yesterday the Prime Minister announced that 20,000 more are being added to this facility. So I would say that, as a concept, the last 18 months have proven this is doable, and the technology is right. Technically, the same model can be trained across multiple different clusters (Ganesh-ji can speak very authoritatively on this subject), and inferencing, of course, you can do in multiple different places. But even without that, the government very democratically assigns workloads: okay, IIT, we will put you with this service provider; okay, Sarvam, with this one; okay, GAN, with this one. So the government is democratically encouraging industry to invest in creating this capability, and because we are getting business, we are scaling up and investing more and more, and they are making it available to people. Because India needs its own models. We may use frontier models for certain purposes, but as the minister was saying, 95 percent of the country's use cases can very well be served by a 20-billion to 100-billion-parameter model. Of course, Ganesh-ji also carries a mandate to create a trillion-parameter model for whatever the country requires; why should anybody else have to do it for us? The success of BharatGen and of Sarvam has proven that India can do it.

So I would say the shared compute framework is proven; we just need to scale it up. And my request to the government, which I think they are acting on, is: don't limit it only to the training of models. Training is one step, now done; these models will now go to the masses for adoption, and you will require millions of GPUs. I am repeating myself, but this is where the government needs to fund the first cycle of inferencing on these models. When users start adopting, say, an agriculture use case, a healthcare use case, or an education use case (multiple UPI-equivalent use cases will come up), it will take time for them to accept it and make it part of their lives. Only then will users be happy to pay 10 paisa per transaction, or maybe a 50-rupee-per-month subscription, and only then will these models and use cases become self-sufficient in generating revenue. Until then, they will need government support. So at least for the first cycle of inferencing, maybe one or two years, the government should fund not only the training of the models but also the first phase of inferencing on them, so that adoption happens and revenue models emerge. After that, the government can let the private sector invest and return to its original role of regulator.
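Sunil's point that inference at population scale dwarfs training can be made concrete with a back-of-envelope calculation. Every number below is an illustrative assumption chosen by the editor, not a figure from the panel; only the one-billion-smartphone-users figure echoes the session.

```python
# Illustrative assumptions (not panel figures):
users = 1_000_000_000          # ~1B smartphone users mentioned in the session
queries_per_user_day = 10      # assumed daily AI interactions per user
tokens_per_query = 1_000       # assumed prompt + response length
gpu_tokens_per_sec = 500       # assumed serving throughput of one GPU
peak_factor = 4                # headroom for peak load versus the daily average
seconds_per_day = 86_400

daily_tokens = users * queries_per_user_day * tokens_per_query
tokens_per_gpu_day = gpu_tokens_per_sec * seconds_per_day
gpus_needed = daily_tokens / tokens_per_gpu_day * peak_factor

print(f"~{gpus_needed:,.0f} GPUs for nationwide inference")
```

Under these assumptions the answer lands on the order of a million GPUs for serving alone, which is the shape of the argument for subsidising the first inference cycle: the serving fleet, not the training cluster, dominates once adoption happens.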

Ankit Bose

Great. So let me augment that with a few thoughts. The IndiaAI Mission has really lit a single fire, and this fire is spreading to every state in the country: all 28 states and all eight union territories are building AI CoEs, and the mandate for each CoE is to provide compute. Like a small wildfire it will spread across the country, and it will be phenomenal. But at the same time we have to keep up the pace; the one thing that matters is pace.

Sunil Gupta

Absolutely, Ankit. Two years back, when we said we were putting in 8,000 GPUs, everybody started laughing, because we were starting from a base where India had hardly any GPUs. Today we comfortably say India will go to 50,000-60,000 GPUs, but even today I can tell you India requires millions of GPUs. In the US, just three or four deep-tech companies collectively own millions of GPUs. India has 1.4 billion people, of whom 1 billion carry smartphones, creating and consuming content every single minute. And, as Ganesh-ji will tell you, they are all heading toward voice-based AI, because India's AI will be voice-based: people talking in their own native language, or a mixture of Hindi, English, everything.

And they will be comfortable doing that rather than writing in their native language on a screen, which is not so easy. Innovations are already underway so that even from a feature phone or a regular telephone line, without a smartphone, you will be able to talk to an AI model at the back end. When you are talking about 1.4 billion people coming into the AI fold for multiple use cases, just imagine the number of GPUs that will be needed for inferencing, and how many for training multiple sectoral models. So you are right, Ankit: what we have done in the last two years is kudos to the whole ecosystem, to the government, and to all of us.

But we need to keep building for the next 7, 8, 10 years. Just to give one or two more data points: India is creating and consuming 20% of the world's data; one-fifth of the world's data is created and consumed by India. Yet only 3% of that data is hosted in India. That shows the huge scope of the infrastructure India needs to build, both at the physical data-center level and at the compute or GPU level. Because we don't want any single country or any single company dictating our digital destiny; we need to be as sovereign as possible.

Ankit Bose

Thank you, Sunil. Kalyan, let me come to you. One big base for sovereignty is the skill set: to research, develop, deploy, and do all of that responsibly. HCL is one of the companies that has done this over the last two or three years. What would be your nuggets on how other companies, other players in the country, and other countries can do the same?

Kalyan Kumar

So look at what India is known for: capability, historically; NASSCOM, right? But that capability was, for the most part, capability for hire: you build things for others, and that has been the core business. If some other country thinks about sovereignty today, something like 50% of the world's tech, engineering, services, and development-operations talent is sitting in India; you see the GCCs being created. But where is the pivot? The pivot, as the Professor was saying, is that you have to move toward build. We have always leaned toward service. So build, research, develop, own your IP, and make India for the world.

I think that is very important, and that is what our journey has been. We had one advantage: we are a company run by a single majority shareholder, and Mr. Nadar had a very ambitious vision. In 2015-16 he said: we are building products for others; we should start building for ourselves. It was a very conscious strategy, and he realized that if you want to play in the global market, you need market permission and market access, because people will only buy from you if you are a software product company. Hence the whole idea of acquiring intellectual property for India. Because if you look under the hood of these stacks, you could build on open source and other components, but suddenly some of these open-source companies are getting acquired and becoming closed source.

This is becoming a very interesting game, and some of these technologies are suddenly getting classified as dual use: they will say, oh, this is dual-use tech, so I can only release this much. So from a skill standpoint, and I am making a controversial statement here, you need fewer, smarter people. You need engineers more than coders. What is happening is that we are producing coders; you need engineers, people with systems thinking, people with a research bent. I meet MBA students and ask: what did you do before? I did engineering. Then why did you waste four years of your life if you wanted to go and do an MBA? Why are you not going deeper?

Why don't you specialize in a domain? Those are fundamental questions. The big leap, I think, is something India can solve very interestingly, and here I am referring to the PSA: quantum. Given the compute needs we have, and the energy demands of GPUs, quantum could completely change the computational paradigm. But that requires fundamental science, research, physics, and nobody wants to study physics; go back 20 years in this country and everyone wanted to do coding. So those are the fundamental skills. What we are doing, in a small way, is acquiring and building talent and research pools.

So 50% of HCL's software product engineering is in India, but my second-largest engineering center is in Rome, the third is in Israel, and then I am in Perth, Austin, and Chelmsford outside Boston. Why? Because if global companies can come to India, acquire talent to build and research, and then take the IP to the US, I am doing the reverse. AppScan, which is a code-security product, has its security heuristics built in Israel and its SaaS UX built in Boston, but the core engineering is in Bangalore and the IP is registered in India. That is the very different direction we are moving in: tapping global talent to build for us. We are still only a billion and a half in revenue, we are not big, but we are in 130 countries. It is a step change and a long journey, and it requires getting away from short-term, hire-people-to-build thinking toward a very different model. That is what we are starting within the larger scheme of HCL, and I think we are walking the right path, continuously acquiring assets and building.

Ankit Bose

So let me add what I am seeing at the skill level. The persona NASSCOM is focused on is the developer, and the way we code is changing. NASSCOM has made a concentrated effort to help developers learn the new way of coding and to redefine the whole SDLC. The target my team and I have taken is to enable 150,000 developers across the country in the next six months: make them AI-enabled and AI-ready, and help them unlearn the old way and learn the new one. That is one thing. Finally, and I should make everyone aware, there will be announcements sometime soon.

Together with the ministry and the education industry, we are rewriting the whole technical curriculum: BTech, MTech, MCA, BCA. We are adding more specialization, as was rightly said, because we need specialists, not generalists. An engineer studies 48 subjects in four years; what is he specialized in at the end? It comes down to luck: the group he gets, the project he takes, whatever job he lands. That is what we are changing, and announcements will be happening soon; that is what is going on in the background. Coming back to you, Breno: you have a product so simple that anyone can use it to build agents.

And benefit from it. So let me ask you this. One big piece of AI really maturing and having impact is adoption, and you started by saying 95% of projects fail or never go to production. If we have to do adoption at scale, what are the top issues you see, and how do you suggest the companies and folks here mitigate them?

Brandon Mello

Yeah. So I'll give you three; one is very specific to India, actually. They relate to our solution, but they are real use cases, because, like I said, the proof is in the pudding. One: you have to solve a real use case, something that actually changes people's lives. AI is complex and people are still trying to figure it out, so it needs to fit into people's everyday life. In our case, for example: look at Cursor or Lovable; they changed the life of software engineers through vibe coding. At GenSpark, we looked at people producing office work.

People producing Excel, PowerPoint, essentially all the mechanical parts of everyday office work, because if you think about it, every time you do an office task, much of that work is very mechanical. That is why we saw such massive growth in our solution. So, to your point, adoption comes from something that can change people's lives in a very simple way. The second thing is consolidation of tools. From the moment we wake up, most of us pick up our phones and are inundated with messages and apps, and then we go to our office work, where we have probably a hundred tools to touch. In our research on work habits, we found people waste on average two and a half hours a day just flipping between different solutions, and that causes loss of context. So if there is a way to consolidate tools, that also drives adoption. And the third one, especially for India: there are a lot of different languages in this country, which you brought up.

In this country especially, LLMs really struggle to handle the right language, given all the different dialects. Being able to really naturalize the models and bring that sovereignty here is very important. And last but not least, people are very scared about data: once they bring data into AI, how is that data going to be treated? So the solution needs to bring a sense of security about how that data will be managed.

Ankit Bose

Great. Thank you, Breno. For the last segment, one final question, 30 seconds each, starting again with Breno since you have the mic. AI is not a short game; it is a game for the next five years, ten years, decades, probably centuries. What is the challenge that we as humanity have to mitigate, so that we do not end up aligned with something hazardous to us?

Brandon Mello

Yeah. Actually, I was having breakfast the other day and a person I was sitting with asked me the exact same question. I think it is how human beings interact with AI. We are still trying to figure out how to properly interact with AI, and given the speed at which AI is evolving, we are still uncertain how to manage that. The line in the sand moves so fast that we can't really catch up, and no one really knows yet how the interaction between AI and us should work.

Ankit Bose

So let me map the earlier part onto this: very specific uses of AI for oneself, to make life simpler, will drive AI skill adoption, and we have to build processes for interacting with AI over the long run, because AI is changing and things are changing. Thank you, Breno. Coming back to you, Professor Ganesh: same question, 30 seconds. What is the challenge you see if we build something that is not aligned?

Professor Ganesh Ramakrishnan

I think the biggest challenge of not making AI aligned is that we will become products, not even consumers. We want to be at the steering wheel. I remember, very fondly, my first machine translation paper; I called it machine-assisted human translation. Obviously that would sound too regressive now, but the key is provenance: how can you leave provenance at every step in the stack? Whether it is data aggregation (which again requires an ecosystem to leave provenance on the data), metadata refinement, data curation, provenance at the level of training and tokenization, or provenance in observability, the other keyword, at the level of how the model performs.

Models should be glass boxes, because that gives you breathing space: where should you yield to existing practices, and where should you assert your own? If you don't have that view, if the recipes are not made available, if the education isn't there (as a professor I always focus on the education part), I think we will become products.

Ankit Bose

Thank you, thank you. Sunil, you and then Kalyan.

Sunil Gupta

No, I concur with those views: at the end of the day, we should not do AI for the sake of doing AI. It is a means to an end, and the end purpose is benefit for the masses. I remember seeing a YouTube video of the Prime Minister meeting all the startups, and Professor Ganesh was there, and the Prime Minister said: don't make AI just to make toys; use AI to benefit the masses with the real problems they face in their real lives. That is also where the name of this event, the Impact Summit, comes from. And yesterday he made the point that, unlike previous summits that were overly concerned with security and governance (which are things that do need to be done), the message was: keval bhay nahi rakhna hai AI ka; AI se aap apna bhagya bana sakte ho, apna bhavishya bana sakte ho (don't merely fear AI; with AI you can shape your own destiny, your own future).

So the question is how we create impact with AI and benefit the masses, while machines do not end up dictating our lives; as I said, we should not end up becoming the product ourselves. However much AI improves, it will possibly never reach a stage where it acquires human emotions, our sense of gut, our sense of culture, what we convey through our body language and not just our words. So human-in-the-loop, with humans remaining the masters of AI, is something we will have to safeguard at all times.

Ankit Bose

Interaction, don't become the product, keep development human-centric. Kalyan?

Kalyan Kumar

I would break this into four key areas. The Professor mentioned consumer AI, so I will break it into consumer, enterprise, government, and critical national infrastructure and defense. All four are going to play out, so just ten seconds on each. In consumer AI, you are the product, unfortunately. You now have to use data control to decide how much you give in order to get; it is a give-to-get model. The day you click "I agree" on an Android phone or on Apple Intelligence, you become the product. You get something back, but that give-to-get balance is where the regulator, in my opinion, has a far bigger role to play than in enterprise regulation. In the enterprise: God made the world in seven days because He had no installed base. Go and talk to enterprise CIOs on the ground, and their reality is a big architectural problem: their data landscape is broken. They have to pivot from process and workflow to data-first, a big shift. They need to start with lineage and metadata; most of these companies don't have correct metadata. Use discovery techniques, use a knowledge graph to understand the metadata, and then organize your data so that AI can benefit from it. The big play in govtech is government-to-citizen (G2C) engagement, which is massive, and that is where sovereign AI comes in, the work Sarvam is doing and the whole BharatGen effort, because that is where you can host citizen-service platforms. And the last is critical national infrastructure: air-gapped networks, private AI, and defense.
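The lineage-and-metadata point above can be sketched as a tiny metadata graph: given table-to-table lineage edges, trace everything downstream of a source so you know what a change (or an AI training job) touches. The dataset names are purely illustrative, and this is a minimal sketch of the idea rather than any real catalog tool.

```python
# Hypothetical lineage edges: each source maps to the datasets derived from it.
LINEAGE = {
    "crm.customers": ["warehouse.dim_customer"],
    "warehouse.dim_customer": ["reports.churn_features"],
    "reports.churn_features": ["ml.churn_model_training_set"],
}

def downstream(node, graph=LINEAGE):
    """Everything derived, directly or transitively, from `node`."""
    seen, stack = set(), [node]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# A change in the CRM table affects every dataset the model trains on:
print(sorted(downstream("crm.customers")))
```

In practice this graph is what metadata discovery builds automatically; the point of "data-first" is that without it, an enterprise cannot say which AI outputs depend on which broken inputs.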

So we need a broken-up view of this whole thing rather than one brush to paint it all. But the last point: sovereignty is all about choice. Making choices, like the Professor walking here; a great choice. I can run on hyperscaler A or B, I can run on Yotta, I can run on Sify, I can run on anything, or on my own infrastructure. I need to have that choice; it is all about choice. And second: AI exists for human good, so put people back at the center. We have suddenly pushed humans to the side and made everything about AI, when it should be about people using AI around them. That is my thought.

Ankit Bose

Great, thank you. We have had a lot of good nuggets from everyone, and we will continue this conversation afterwards. For NASSCOM, sovereign AI is a big initiative; we have been driving it for the last three, three and a half years. Ganesh knows that, Sunil knows that, and we have worked extensively with the services companies. To keep it going: this is not an end point. We have to think about sovereignty and about how India builds AGI capability, even quantum-AGI capability. That is the journey we are on as NASSCOM, and we are currently writing a policy document for the government on a sovereign AI and AGI roadmap.

And the QR code is there; it will be here, and I want all of you to have a look. It is a draft one; please work on it. I think that's it. Yeah, Ganesh?

Professor Ganesh Ramakrishnan

The potential is so immense; we have not even scratched the surface, not even touched the tip of the iceberg. Sovereignty is critical because the amount of inefficiency in the entire stack needs to be done away with. GPUs were never designed for building these models; they are legacy. Can we use even the large workloads we are running to do better ASIC design? Can we use them to build better model-serving engines? There is so much to do. Everyone should get inquisitive about the entire stack; that is where sovereignty comes from.

Ankit Bose

Absolutely. We are trying to do that in a collaborative way with all of our contributors; please be a collaborator. There will be a QR code; please respond and give your inputs. And with that, thank you to my panelists. I loved it, and I hope you did too. Thank you again.

Kalyan Kumar

Just one thing I want to add: watch on the 21st, when the PM inaugurates a new JV that HCL is announcing with Foxconn, called India Chips Limited. I would call it patient capital. It is a 16- and 32-nanometre fab that they are creating; basically, it is like an OSAT unit. It will come online after five years; you have to build the whole thing. But building that skill is also important, and we have to start now. We cannot wait five years down the line.

Speaker 1

Thank you so much to our panelists. I request the panelists to please stay back for a group photo right now. You can also access the report that Ankit has been talking about through the QR code displayed on the digital background and leave feedback. I'm also happy to announce an MOU being signed between Amrita Vishwa Vidyapeetham and NASCOM right now. Thank you. Thank you. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (34)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“The session opened with a formal inauguration of the Sovereign AI Research Report produced by Amrita Vishwa Vidya‑peetham, with senior representatives from Amrita – Pro‑Vice‑Chancellor Dr Manisha V Ramesh and Dr Shiva Ramakrishnan – attending.”

The knowledge base records that Amrita Vishwa Vidyapetam participated in the report launch ceremony, confirming the involvement of Amrita in the inauguration [S2].

Confirmed (high)

“Compute scarcity was identified as the primary bottleneck for building sovereign AI capability in India.”

Multiple sources describe infrastructure and compute limitations as the critical bottleneck for AI development in India, confirming the report’s emphasis on compute scarcity [S105] and the national goal to deploy tens of thousands of GPUs [S58].

Confirmed (medium)

“The government has created a shared‑compute facility that aggregates capacity from multiple providers, currently totalling about 38,000 GPUs, with an additional 20,000 announced.”

The knowledge base notes India’s mission to deploy over 38,000 GPUs as public infrastructure, confirming the 38,000-GPU figure; the additional 20,000-GPU announcement is not covered in the sources [S58].

Additional Context (medium)

“The shared‑compute facility is part of a collaborative framework called “Maitri” that provides shared access to compute, data, and AI models as digital public goods.”

S106 describes the Maitri platform as a collaborative framework offering shared compute, data, and model access, adding detail to the report’s description of the shared‑compute facility.

External Sources (111)
S1
India’s AI Future Sovereign Infrastructure and Innovation at Scale — 2225 words | 200 words per minute | Duration: 665 seconds Hello, good afternoon. Good afternoon. Good afternoon. My na…
S2
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — -Sunil Gupta: Co-founder, MD, and CEO of Yotta – operates data center campuses and built Sovereign Cloud in India, manag…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
Announcement of New Delhi Frontier AI Commitments — -Ganesh: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S7
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Raised by:Kalyan Kumar and Professor Ganesh Ramakrishnan Raised by:Professor Ganesh Ramakrishnan
S8
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — – Ganesh Ramakrishnan- Kalyan Kumar – Sunil Gupta- Ganesh Ramakrishnan
S9
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S10
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S12
India’s AI Future Sovereign Infrastructure and Innovation at Scale — I would say, break this into four key areas. Professor mentioned, I think the consumer AI, so I’m going to break it into…
S13
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — – Ganesh Ramakrishnan- Kalyan Kumar- Sunil Gupta – Kalyan Kumar- Ankit Bose – Sunil Gupta- Ganesh Ramakrishnan- Kalyan…
S14
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — And like I say in sales, time kills every deal. Last but not least, I think my third point is the champion problem. I th…
S15
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Hello. Good afternoon. My name is Brandon Mello. I work for Genspark .ai, a follow -up -based company. We have been arou…
S16
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Hello. Good afternoon. My name is Brandon Mello. I work for Genspark .ai, a follow -up -based company. We have been arou…
S17
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Raised by:Kalyan Kumar and Professor Ganesh Ramakrishnan Raised by:Professor Ganesh Ramakrishnan
S19
Enhancing rather than replacing humanity with AI — The technology that worries us might also help us, but only if we stay engaged rather than retreat into pure resistance….
S20
UNSC meeting: Artificial intelligence, peace and security — Yi Zeng:My name is Yi Zeng and I would like to take this opportunity to share with distinguished representatives my pers…
S21
Ethical AI_ Keeping Humanity in the Loop While Innovating — Thank you. ensure that they don’t suffer from climate changes and shocks. I mean, the problems are so inspiring. So I th…
S22
Science AI & Innovation_ India–Japan Collaboration Showcase — yeah i think uh two perspectives uh One is in our solutioning, when we, and I’m going to take a live example, when we ac…
S23
Indias Roadmap to an AGI-Enabled Future — Evidence:Examples include agricultural loan assessment in Tamil and legal aid reasoning in Hindi – problems affecting hu…
S24
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Voice technology and multilingual capabilities were highlighted as crucial horizontal solutions for healthcare AI in Ind…
S25
Driving Indias AI Future Growth Innovation and Impact — Minister Jayant Chaudhary outlined the government’s approach to AI democratization, highlighting the India AI mission’s …
S26
How to build trust in user-centric digital public services | IGF 2023 Day 0 Event #193 — Audience:So I have, in a way, a related question to cybersecurity. You asked previously how to deal with trust in the ag…
S27
Panel Discussion Data Sovereignty India AI Impact Summit — The discussion began by challenging conventional notions of sovereignty, with moderator Arghya Sengupta framing the cent…
S28
The future of Digital Public Infrastructure for environmental sustainability — These can promote compliance and foster enhanced stakeholder engagement. Furthermore, data analysis is underscored as an…
S29
Empowering Women Entrepreneurs through Digital Trade and Training ( Global Innovation Forum) — Policymakers are beginning to realise the significant influence of the digital world and its potential impact on various…
S30
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Data Foundations: Proper data infrastructure is essential, with most companies still needing to complete foundational wo…
S31
Designing Indias Digital Future AI at the Core 6G at the Edge — Summary:Roy emphasizes that infrastructure challenges, particularly power consumption and site requirements, are the mai…
S32
Regulating Open Data_ Principles Challenges and Opportunities — Digital ecosystems simply do not function in silos. However, enabling data to move across borders should not mean that c…
S33
Main Session on Sustainability & Environment | IGF 2023 — In conclusion, the analysis presents various arguments and stances on the significance of standards and sustainability. …
S34
The Right to Data for Development (Bluenumber) — The importance of interoperability in agriculture data systems is also highlighted. Interoperability refers to the abili…
S35
Importance of Professional standards for AI development and testing — – Moira De Roche- Liz Eastwood Havey believes that failures like the Post Office scandal result from poor implementatio…
S36
AI as critical infrastructure for continuity in public services — He observes that despite rapid technological advancement and availability of platforms and GPUs, organizations struggle …
S37
Keynote-Rishad Premji — Explanation:Rather than focusing on technological capabilities, there is recognition that the main challenges lie in org…
S38
Supply Chain Fortification: Safeguarding the Cyber Resilience of the Global Supply Chain — Injecting sovereignty in policy making and industry is needed. With the amount of material being brought into the decis…
S39
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Shikoh Gitau: and I’m really glad to be here. Thank you so much for having me. And apologies for joining in late. So, th…
S40
Driving Indias AI Future Growth Innovation and Impact — Evidence:By combining infrastructure and open source, costs can be made palatable for Indian citizens. The goal is servi…
S41
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — High level of consensus with strong alignment between industry experts, academics, and policymakers. This suggests a mat…
S42
Driving Enterprise Impact Through Scalable AI Adoption — Summary:The main disagreements centered on educational priorities (fundamental vs. applied skills), assessment methods (…
S43
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S44
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — 2.Infrastructure capacity- having sovereign compute for advanced models If AI is to become electable in our democracies…
S45
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — 1.Infrastructure Scaling: Continue accelerating from thousands to millions of GPUs required for population-scale deploym…
S46
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Infrastructure and Compute Requirements for Sovereign AI: The panel extensively discussed India’s need for massive GPU i…
S47
Driving Indias AI Future Growth Innovation and Impact — Industry representatives highlighted significant challenges and opportunities in India’s AI landscape. A.S. Rajgopal fro…
S48
From India to the Global South_ Advancing Social Impact with AI — Consensus level:High level of consensus with significant implications for coordinated AI development strategy. The align…
S49
The Global Power Shift India’s Rise in AI & Semiconductors — Consensus level:High level of consensus with complementary perspectives rather than conflicting views. The speakers come…
S50
Building Trustworthy AI Foundations and Practical Pathways — Consensus level:High level of consensus with complementary expertise – Thakkar provides the broad technological and econ…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Unexpectedly, there was strong consensus across industry, government, and academic perspectives on the need for collabor…
S52
Upskilling for the AI era: Education’s next revolution — This comment is insightful because it addresses a common criticism of large-scale initiatives – that they focus on quant…
S53
How AI Is Transforming Indias Workforce for Global Competitivene — Disagreement level:Moderate disagreement with significant implications – while speakers share common goals of inclusive …
S54
Critical battle for high-quality data in AI industry — According tothe Economist, Adobe has defied predictions of its demise in the face of AI by leveraging its vast database …
S55
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “An interesting fact is that most of the AI models in the world work in English”[41]. “But your AI model works in Indian…
S56
Building the Workforce_ AI for Viksit Bharat 2047 — From the community health worker delivering nutrition to an expecting mother to the balancing worker strategizing access…
S57
The Future of Public Safety AI-Powered Citizen-Centric Policing in India — The expansion of language support remains an ongoing challenge and opportunity. Currently, Bhashini is being enhanced to…
S58
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Thank you and colleagues panelists great to be here and great to see a large kind of attendance that we have seen over t…
S59
Open Forum #30 High Level Review of AI Governance Including the Discussion — – Access to high-end compute resources Abhishek Singh, Under-Secretary from the Indian Ministry of Electronics and Info…
S60
Keynote-Rishad Premji — This comment transforms the discussion by repositioning India’s challenges as strengths. It provides the logical foundat…
S61
Open Internet Inclusive AI Unlocking Innovation for All — Anandan provided an optimistic assessment of India’s position in consumer AI applications, revealing that “India today h…
S62
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S63
India’s AI Future Sovereign Infrastructure and Innovation at Scale — This comment reframed the entire sovereignty discussion by identifying compute infrastructure as the critical bottleneck…
S64
Keynote-Mukesh Dhirubhai Ambani — The third commitment centres on building India’s sovereign compute infrastructure through three interconnected initiativ…
S65
Keynote-Mukesh Dhirubhai Ambani — Distinguished guests, my fellow Indians, namaste. The Global AI Impact Summit is a defining moment in India’s tech histo…
S66
High Level Youth IGF : Building a Resilient, Inclusive and Safe Digital Future for West Africa — Building resilience through robust infrastructure and cybersecurity is essential
S67
Successes & challenges: cyber capacity building coordination | IGF 2023 — Enhancing the skills and capabilities of public administrations as they transition into the digital realm requires a str…
S68
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Data Foundations: Proper data infrastructure is essential, with most companies still needing to complete foundational wo…
S69
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-shared-prosperity — And it’s that kind of computing power that is essential. It’s essential for training large AI models. It’s essential for…
S70
Opening of the session/OEWG 2025 — Malawi: Thank you so much, Chair. Allow me to first thank the GFCE and UK government, through the Women in Internation…
S71
WSIS Action Line C6: Digital Ecosystem Builders in action: Redefining the role of ICT regulators — Petros Galides: Thank you. Thank you very much, Moderator, dear Ahmed. Just a few words about eMERGE, as my colleague sa…
S72
S73
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Existing initiatives like the Global Digital Compact and Open Government Partnership provide an opportunity to create co…
S74
The Right to Data for Development (Bluenumber) — The importance of interoperability in agriculture data systems is also highlighted. Interoperability refers to the abili…
S75
Importance of Professional standards for AI development and testing — – Moira De Roche- Liz Eastwood Havey believes that failures like the Post Office scandal result from poor implementatio…
S76
Keynote-Rishad Premji — Rather than focusing on technological capabilities, there is recognition that the main challenges lie in organizational …
S77
Keynote-Rishad Premji — Explanation:Rather than focusing on technological capabilities, there is recognition that the main challenges lie in org…
S78
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S79
Open Mic & Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S80
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S81
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S82
WSIS Prizes 2025 Winner’s Ceremony — The tone throughout the ceremony was consistently celebratory, formal, and appreciative. It maintained a positive and co…
S83
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Thank you very much for having me. It’s always fun to listen to everyone here on this. I was hoping somebody was going t…
S84
[Brussels e-briefings] The Eurozone ‘time bomb’: Can the single currency be rescued for good? — The main positive news is that the Eurozone has understood the need for extraordinary measures to be taken. The sense of…
S85
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — This comment shifted the tone from technical solutions to strategic urgency, emphasizing the need for speed and coordina…
S86
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued) – session 6 — The speaker commenced by acknowledging the Chair’s dedication in revising the Annual Progress Report, particularly the s…
S87
Closure of the session — Thailand: Thank you, Chair, for giving me the floor. Thailand supports the establishment of a Permanent Mechanism that…
S88
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S89
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S90
Dynamic Coalition Collaborative Session — The discussion began with an optimistic, collaborative tone as panelists shared their expertise and perspectives. Howeve…
S91
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — The discussion maintained a consistently collaborative and solution-oriented tone throughout. Speakers were optimistic a…
S92
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S93
Building Trusted AI at Scale – Keynote Anne Bouverot — Overall Tone:The tone is diplomatic, optimistic, and collaborative throughout. It begins with ceremonial courtesy and ap…
S94
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S95
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — Raghavan argues that while the world focuses on immediate metrics like largest models or fastest chips, these are transi…
S96
Opening remarks — Hartmut Glaser:of Science and Technology. was focused on artificial intelligence. President Lula da Silva asked us to di…
S97
State of play of major global AI Governance processes — Dohyun Kang:Thank you very much for introducing me, and thank you again, the Secretary General of the ITU, and under the…
S98
Panel Discussion Data Sovereignty India AI Impact Summit — So first of all, thank you. I’ll just keep it. I’ve answered this in two parts, and real quickly. So. So. critical quest…
S99
Opening of the session — Singapore: Thank you Mr. Chair on behalf of my delegation I’d like to express our thanks to you and your team for the p…
S100
Advancing Scientific AI with Safety Ethics and Responsibility — Thanks Shyam. I think first, yeah first thing that we need to understand is how that ecosystem is and then see if certai…
S101
What Proliferation of Artificial Intelligence Means for Information Integrity? — Ivars Pundurs: THE remainder of the episode is about the collapse and recession of the IMF by IWM. It’s basically about …
S102
National Disaster Management Authority — An unexpected disagreement emerged on the primary bottleneck – Mohapatra identifies data quality as the main issue (only…
S103
National Disaster Management Authority — Explanation:An unexpected disagreement emerged on the primary bottleneck – Mohapatra identifies data quality as the main…
S104
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Despite technical and economic opportunities, significant policy challenges remain. Chandra identified lack of coordinat…
S105
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — The first constraint involves infrastructure limitations, which Patel described as “oxygen for AI.” The global shortage …
S106
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Agarwal explained that while India has strong talent and skills, they faced challenges with compute infrastructure and d…
S107
https://app.faicon.ai/ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — so I would say that the focus is not on rationing but on intelligent prioritization I think that’s going to be the focus…
S108
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Saurabh Garg outlined India’s approach through the proposed “Maitri” platform, a collaborative framework designed to…
S109
AI Infrastructure and Future Development: A Panel Discussion — And we think that if we do buy all of those chips, we really help create a lot of market cap for Lisa and team. And they…
S110
Shaping the Future AI Strategies for Jobs and Economic Development — kilometers away from Earth. We partner with Agni Cool, which is a space tech company, and the space ecosystem has evolve…
S111
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — ask I don’t think there is any country in the world whose government has given its citizens… In India’s context. Yes, …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sunil Gupta
10 arguments | 200 words per minute | 2225 words | 665 seconds
Argument 1
GPU scarcity and need for massive scale (Sunil Gupta)
EXPLANATION
Sunil emphasizes that India’s AI progress is hampered by a shortage of GPU compute, which is essential for training and deploying models at scale. He argues that making abundant GPU resources a basic hygiene is critical for mass AI adoption.
EVIDENCE
He notes that while India has strong demand and data, it lacked compute, stating that AI cannot run on regular CPUs and requires specialized GPU compute, which was missing at the time ([54-60]). He later quantifies the gap, saying millions of GPUs will be needed for future use cases, whereas currently only a few thousand are available ([70-78]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The critical need for massive GPU infrastructure in India is highlighted in the panel discussion, noting the gap between demand and available GPUs and the target of scaling to 50-60,000 GPUs [S1][S2].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
AGREED WITH
Ankit Bose, Kalyan Kumar
DISAGREED WITH
Ganesh Ramakrishnan, Kalyan Kumar
Argument 2
Shared compute facility as a commodity, government empaneling and scaling (Sunil Gupta)
EXPLANATION
Sunil describes a government‑led shared compute facility where multiple providers contribute GPU capacity, which is then made available to startups and other users at competitive prices. He highlights the empaneling process and recent expansions as evidence of scalability.
EVIDENCE
He explains that providers voluntarily declare how many GPUs they can supply, are empaneled, and the government encourages new entrants, leading to a pool of 38,000 GPUs and an additional 20,000 announced by the Prime Minister ([224-236]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sunil describes a government-led shared compute model where providers are empaneled and contribute GPUs, creating a pool of 38,000 GPUs that can be accessed competitively by startups [S1][S2].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
AGREED WITH
Ganesh Ramakrishnan, Kalyan Kumar
DISAGREED WITH
Ganesh Ramakrishnan, Kalyan Kumar
Argument 3
India’s rich datasets as an advantage, but must be hosted locally (Sunil Gupta)
EXPLANATION
Sunil points out that India generates a large share of global data, which is a strategic asset for AI, but stresses that most of this data is stored abroad, creating a vulnerability. He calls for domestic hosting to ensure sovereignty.
EVIDENCE
He states that India creates and consumes 20 % of the world’s data yet only 3 % of it is hosted within the country, underscoring the need for local infrastructure ([254-257]).
MAJOR DISCUSSION POINT
Data Infrastructure, Interoperability & Provenance
AGREED WITH
Ganesh Ramakrishnan, Kalyan Kumar
Argument 4
AI must serve the masses, not be “toys” or replace humans (Sunil Gupta)
EXPLANATION
Sunil argues that AI should be a means to solve real problems for the population rather than being developed as a novelty or for entertainment. He urges focus on impactful applications that improve everyday life.
EVIDENCE
He recalls the Prime Minister’s admonition to startups not to create “toys” but to build AI that benefits the masses, emphasizing purpose-driven development ([380-384]).
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Argument 5
Preserve human‑in‑the‑loop, prevent AI from becoming the product itself (Sunil Gupta)
EXPLANATION
Sunil stresses the importance of keeping humans central to AI systems, ensuring that AI augments rather than replaces human decision‑making, and guarding against AI becoming a product that dictates lives.
EVIDENCE
He notes that AI should not acquire human emotions or culture and that a human-in-the-loop approach must be maintained ([385-386]).
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
AGREED WITH
Ganesh Ramakrishnan, Ankit Bose
Argument 6
Government should fund the first cycle of inferencing on sovereign AI models to accelerate adoption and create sustainable revenue streams.
EXPLANATION
Sunil argues that beyond training, the government must support the initial inferencing phase of domestically built models so that early adopters can use them, generate value, and later transition to private sector funding.
EVIDENCE
He explains that the shared compute framework is proven but requests the government not to limit support only to model training; instead, it should fund the first inferencing cycle for use cases such as agriculture, healthcare, and education, enabling users to start paying for services and allowing revenue models to emerge before private investment takes over [236-241].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He advocates extending the shared-compute framework beyond model training to fund the initial inferencing cycle, enabling early adopters to generate value before private funding takes over [S1].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
AGREED WITH
Ankit Bose
DISAGREED WITH
Kalyan Kumar
Argument 7
India’s AI future will be voice‑first, requiring multilingual voice models to reach the mass population.
EXPLANATION
Sunil highlights that the majority of Indian users will interact with AI through voice in their native languages, so building robust speech‑to‑text and voice assistants is essential for widespread adoption.
EVIDENCE
He notes that “India’s AI will be voice-based” and that people will be comfortable using AI via feature phones or regular telephone lines, speaking in native or mixed languages rather than typing, underscoring the need for voice-centric solutions [244-247].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion emphasizes that India’s AI will be voice-based, with multilingual speech-to-text models needed for billions of users [S23][S24].
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
Argument 8
Make compute available at low, affordable prices for startups to accelerate AI adoption.
EXPLANATION
Sunil stresses that beyond merely providing GPU capacity, the compute must be offered at a very low price point so that emerging startups can access it without prohibitive costs, thereby fostering rapid innovation and scaling of AI solutions.
EVIDENCE
He notes that the shared compute facility is made available to startups at a very low price, emphasizing the importance of affordable access to GPUs for the ecosystem’s growth [84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shared compute facility is offered to startups at very low prices to foster rapid innovation and scaling of AI solutions [S1].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
Argument 9
Sovereign Cloud as a domestic platform for mission‑critical government applications
EXPLANATION
Sunil explains that NASCOM has built a Sovereign Cloud in India that currently hosts a large number of mission‑critical applications for the Indian government, providing a secure and locally controlled environment for public services.
EVIDENCE
He states that “We have built Sovereign Cloud in India, which is running a whole lot of mission-critical government of India applications” and adds that they have migrated the Bhashini service from a hyperscale cloud to this sovereign infrastructure ([22-24]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sunil notes that a Sovereign Cloud has been built in India and now hosts many mission-critical government applications, including the migration of Bhashini from a hyperscale provider [S2].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
Argument 10
Migration of key national services to the sovereign cloud demonstrates a shift from reliance on foreign hyperscale providers
EXPLANATION
By moving Bhashini, an important language‑technology service, from an external hyperscale cloud to the domestic Sovereign Cloud, Sunil highlights a strategic move toward data sovereignty and reduced dependence on foreign providers.
EVIDENCE
He notes that “Recently, we migrated Bhashini from a hyperscale cloud to our Sovereign Cloud” ([23-24]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The migration of the Bhashini language-technology service to the domestic Sovereign Cloud illustrates a strategic move toward data sovereignty [S2].
MAJOR DISCUSSION POINT
Data Infrastructure, Interoperability & Provenance
Speaker 1
2 arguments | 55 words per minute | 330 words | 359 seconds
Argument 1
Launch of the Sovereign AI research report to guide national strategy (Speaker 1)
EXPLANATION
Speaker 1 announces the release of a new Sovereign AI research report produced by Amrita Vishwa Vidyapeetham, positioning it as a guiding document for India’s AI roadmap.
EVIDENCE
During the opening remarks, the speaker invites the vice-chancellor of Amrita to launch the report and mentions its availability via a QR code on the digital background ([1]).
MAJOR DISCUSSION POINT
Institutional Collaboration & Reporting
Argument 2
Signing of MOU between Amrita Vishwa Vidyapeetham and NASCOM to foster cooperation (Speaker 1)
EXPLANATION
Speaker 1 announces a formal memorandum of understanding between the academic institution and NASCOM, signalling a partnership to advance sovereign AI initiatives.
EVIDENCE
At the close of the session, the speaker states that an MOU is being signed with Amrita Vishwa Vidyapeetham and NASCOM ([454-455]).
MAJOR DISCUSSION POINT
Institutional Collaboration & Reporting
Ganesh Ramakrishnan
21 arguments, 157 words per minute, 1,464 words, 558 seconds
Argument 1
Interoperability at every stack layer, data products and contracts to enable participation (Professor Ganesh Ramakrishnan)
EXPLANATION
Ganesh argues that ensuring interoperability across all layers of the AI stack encourages broader participation, allowing diverse stakeholders to contribute and choose appropriate trade‑offs between fidelity, latency, and other factors.
EVIDENCE
He describes interoperability as a means to provide alternatives, support human participation, and enable different fidelity-latency balances, citing examples from his own consortium work ([151-166]).
MAJOR DISCUSSION POINT
Data Infrastructure, Interoperability & Provenance
AGREED WITH
Sunil Gupta, Kalyan Kumar
Argument 2
Need for data provenance, metadata and cataloguing for trustworthy AI (Professor Ganesh Ramakrishnan)
EXPLANATION
Ganesh stresses that trustworthy AI requires provenance at every stage—data collection, curation, tokenisation, and model observability—supported by robust metadata and cataloguing systems.
EVIDENCE
He outlines the necessity of provenance for data aggregation, metadata refinement, tokenisation, and model observability, calling models “glass boxes” to provide transparency ([371-376]).
MAJOR DISCUSSION POINT
Data Infrastructure, Interoperability & Provenance
AGREED WITH
Sunil Gupta, Ankit Bose
Argument 3
Academic‑industry consortiums to co‑design models and foster collaboration (Professor Ganesh Ramakrishnan)
EXPLANATION
Ganesh highlights the creation of a nine‑institution academic consortium that co‑designs foundation models, emphasizing collaborative research and shared ownership of AI assets.
EVIDENCE
He mentions a consortium of nine academic institutions, coordinated through a Section 8 not-for-profit, involving over 100 researchers and students, and cites joint publications and model development efforts ([162-169], [191-200]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
AGREED WITH
Kalyan Kumar, Ankit Bose
Argument 4
Ensure provenance, transparency and alignment throughout the stack (Professor Ganesh Ramakrishnan)
EXPLANATION
Ganesh reiterates that AI systems must embed provenance and transparency at each layer to maintain alignment with human values and avoid becoming opaque products.
EVIDENCE
He discusses the importance of provenance, glass-box models, and education to keep AI aligned, warning that without these practices AI could become a product rather than a tool ([371-376]).
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Argument 5
Scale‑out, not just scale‑up, is required to deliver AI services to the entire population.
EXPLANATION
Ganesh argues that merely increasing the size of centralised compute resources (scaling up) will not meet the diverse needs of India’s billions of users. Instead, a distributed, scale‑out approach is needed to ensure that AI capabilities can be delivered at the required latency and fidelity across the country.
EVIDENCE
He notes that while scaling up would be helpful, “the capabilities are not there” and that “even if it were hypothetically, I think participation would also ensure that people are part of the process”; he then stresses that “scale out” is needed to cater to everyone, implying a distributed architecture ([151-160]).
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
DISAGREED WITH
Sunil Gupta, Kalyan Kumar
Argument 6
Voice‑first AI with extensive multilingual support is essential for Indian adoption.
EXPLANATION
Ganesh emphasizes that India’s AI future will be voice‑driven, requiring models that understand and generate content in many local languages. Building language‑specific experts and covering a broad set of Indian languages will make AI usable for the majority of the population.
EVIDENCE
He states that “India’s AI will be voice-based” and that their speech model covers “22 languages”; he also describes using a mixture-of-experts architecture where experts for Hindi, Marathi, and Telugu collaborate, highlighting the technical focus on multilingual capability ([245-247], [210-212]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ganesh stresses that India’s AI will be voice-driven, requiring models that cover many local languages, aligning with the panel’s emphasis on voice-first, multilingual AI [S23][S24].
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
Argument 7
Current GPU hardware is ill‑suited for AI model training and serving; specialised hardware designs are needed.
EXPLANATION
Ganesh points out that GPUs were originally built for graphics, not for the massive parallelism required by modern AI models. He suggests exploring purpose-built hardware, asking whether what he calls a “SIG design” (likely an ASIC-style, application-specific chip) could yield more efficient model-serving engines free of legacy constraints.
EVIDENCE
He remarks that “GPUs were never designed for building these models” and asks whether a “SIG design” could be used to achieve better model serving engines, indicating a need for purpose-built compute hardware ([429-432]).
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
Argument 8
Treat data as a monetisable asset through data products, catalogs and contracts to foster participation and generate economic value.
EXPLANATION
Ganesh argues that data should be managed as a strategic asset that can be packaged into data products, with clear catalogues and contracts governing its use. This approach enables creators to retain rights, monetise their contributions, and encourages broader ecosystem participation.
EVIDENCE
He explains that “data is an asset” and that “you could actually transform that asset into IP generation”; he then outlines the need for a “catalog first”, a “data product”, and a “data contract” as foundational for interoperability and value creation ([165-176]).
MAJOR DISCUSSION POINT
Data Governance & Economic Development
DISAGREED WITH
Sunil Gupta
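Ganesh’s “catalog first, data product, data contract” sequence can be sketched in miniature. The sketch below is purely illustrative: the class names (`Catalog`, `DataProduct`, `DataContract`), their fields, and the `PermissionError` convention are assumptions made for exposition, not any system described in the session.

```python
from dataclasses import dataclass


@dataclass
class DataContract:
    """Terms attached to a data product: who owns it and which uses
    the owner has consented to. Field names are illustrative only."""
    owner: str
    permitted_uses: set


@dataclass
class DataProduct:
    """A packaged dataset travelling with its governing contract."""
    name: str
    contract: DataContract


class Catalog:
    """'Catalog first': a registry that enforces contracts on access."""

    def __init__(self):
        self._products = {}

    def register(self, product):
        self._products[product.name] = product

    def request(self, name, purpose):
        """Grant access only for purposes the owner consented to."""
        product = self._products[name]
        if purpose not in product.contract.permitted_uses:
            raise PermissionError(
                f"{purpose!r} not consented to by {product.contract.owner}")
        return product


catalog = Catalog()
catalog.register(DataProduct(
    "speech-corpus",
    DataContract(owner="creator-123", permitted_uses={"research"})))

# Consented use succeeds; anything else is refused at the catalog layer.
assert catalog.request("speech-corpus", "research").name == "speech-corpus"
```

The point of the sketch is that ownership and consent are enforced at the point of access, so the data benefactor remains the rightful owner even as the product circulates.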
Argument 9
Breaking organisational silos through cross‑sector consortiums accelerates AI innovation and ensures inclusive development.
EXPLANATION
Ganesh stresses that collaboration must go beyond transactional relationships, involving co‑design between academia, industry, and non‑profits. Such consortiums enable shared research, pooled expertise, and faster progress toward sovereign AI solutions.
EVIDENCE
He describes a consortium of nine academic institutions coordinated via a Section-8 not-for-profit, mentions a recent MOU with a heritage foundation in the US, and highlights support from the Bay Area, illustrating a broad, collaborative ecosystem ([194-204]).
MAJOR DISCUSSION POINT
Collaboration & Ecosystem Building
Argument 10
AI should be positioned as complementary or supplementary rather than purely substitutive, providing alternatives and preserving human participation.
EXPLANATION
Ganesh argues that AI systems need to offer options that support human users, allowing AI to augment tasks in some contexts while remaining optional in others, rather than replacing human roles entirely.
EVIDENCE
He notes that there can be situations where AI is substitutional, but many other scenarios require AI to be supplementary or complementary, and emphasizes the importance of providing alternatives and ensuring human participation in AI deployments [151-166].
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Argument 11
Adopt mixture‑of‑experts architectures to efficiently support India’s multilingual landscape in speech‑to‑text models.
EXPLANATION
Ganesh describes using a projector layer combined with a mixture‑of‑experts design, where language‑specific experts (e.g., Hindi‑Marathi shared expert, Telugu collaborating with Hindi and Tamil) enable high‑quality performance across many Indian languages, demonstrating a scalable technical strategy for multilingual AI.
EVIDENCE
He explains that their LLM-enabled speech-to-text model uses a projector layer and a mixture-of-experts approach, with shared experts for Hindi and Marathi and collaborative experts for Telugu, illustrating how this architecture handles linguistic diversity [210-212].
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
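The routing idea behind such a mixture-of-experts design, in which Hindi and Marathi share one expert, can be illustrated with a toy gate. Everything below (the `route` function, the doubling “expert”) is a hypothetical sketch of the selection mechanism only; in a real system the experts are learned neural sub-networks chosen by a trained gating layer.

```python
def make_expert(name):
    """A stand-in 'expert': a tagged transform. Real experts are
    neural sub-networks; doubling the features is a placeholder."""
    return lambda features: {"expert": name,
                             "output": [x * 2 for x in features]}


# Hindi and Marathi are served by one shared expert object,
# mirroring the shared-expert idea mentioned on the panel.
shared_hindi_marathi = make_expert("hindi-marathi")

EXPERTS = {
    "hindi": shared_hindi_marathi,
    "marathi": shared_hindi_marathi,
    "telugu": make_expert("telugu"),
}


def route(language, features):
    """Gate: select the expert for the input language and run it."""
    expert = EXPERTS.get(language)
    if expert is None:
        raise ValueError(f"no expert registered for {language}")
    return expert(features)


result = route("marathi", [1.0, 2.0])
assert result["expert"] == "hindi-marathi"
```

A learned gate would score all experts per input and combine the top few; the hard dictionary lookup here only conveys why shared experts let related languages pool capacity.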
Argument 12
Stakeholders should cultivate curiosity across the entire AI stack to drive holistic innovation.
EXPLANATION
Ganesh calls for everyone involved—researchers, developers, policymakers, and industry—to be inquisitive about all layers of the AI ecosystem, from hardware to data to applications, so that integrated solutions can be created.
EVIDENCE
He states that “everyone should get inquisitive about the entire stack,” urging a broad, cross-layer engagement with AI technologies [426-433].
MAJOR DISCUSSION POINT
Capacity development & Holistic System Design
Argument 13
Leverage India’s existing digital identity and payment infrastructure to create a data ownership and consent framework for sovereign AI
EXPLANATION
Ganesh proposes that the proven Aadhaar and UPI systems can be repurposed to give individuals control over their data, enabling the creation of data catalogs, contracts, and monetisation mechanisms that keep data benefactors as the rightful owners. This approach would strengthen data sovereignty by embedding consent and provenance at the source.
EVIDENCE
He points out that India has demonstrated an effective identity-payment ecosystem and suggests that this can be used to build an environment where data owners retain rights and can create data products, emphasizing that “the data benefactor is also the same person” ([177-184]).
MAJOR DISCUSSION POINT
Data Governance & Sovereign AI
Argument 14
Interdisciplinary empathy between domain experts (e.g., linguists) and technologists is essential for building AI models that truly serve Indian contexts.
EXPLANATION
Ganesh stresses that effective AI requires close collaboration between specialists such as linguists and computer scientists, allowing each to understand the other’s constraints and contribute to model design.
EVIDENCE
He describes co-authoring a healthcare book where he had to empathise with both clinicians and ML practitioners, and later notes that “a linguist has to empathise with the computer scientist and vice versa” to create useful AI solutions [197-202].
MAJOR DISCUSSION POINT
Collaboration & Ecosystem Building
Argument 15
Co‑design of AI solutions across sectors (e.g., healthcare) drives innovative outcomes and ensures relevance to end‑users.
EXPLANATION
Ganesh argues that involving stakeholders from different domains in the design process leads to AI systems that are better aligned with real‑world needs and generate higher impact.
EVIDENCE
He cites the development of a healthcare-focused AI book and the broader consortium effort that brings together academia, industry, and non-profits to co-design models, demonstrating how cross-sector collaboration fuels innovation [197-200].
MAJOR DISCUSSION POINT
Collaboration & Ecosystem Building
Argument 16
India’s AI potential is immense and still largely untapped, requiring holistic research across the entire AI stack.
EXPLANATION
Ganesh points out that the current AI efforts have only scratched the surface of what is possible, calling for continued, comprehensive research that spans hardware, data, models, and applications to fully realise India’s sovereign AI capabilities.
EVIDENCE
He remarks that “the potential is so immense” and that “we have not even scratched the surface” of AI, indicating a need for broader, deeper investigation across all layers of the stack [426-428].
MAJOR DISCUSSION POINT
Artificial intelligence
Argument 17
Edge‑focused AI engines and vector databases are essential for scaling AI across India’s diverse and distributed user base.
EXPLANATION
Ganesh argues that as AI‑enabled devices proliferate, deploying AI capabilities at the edge becomes critical to reduce latency and meet local needs. The localized vector AI engine discussed in the session (an HCL product, per Kalyan’s remarks) shows how such capabilities can run on edge hardware, and this distributed, edge‑centric deployment complements Ganesh’s scale‑out strategy.
EVIDENCE
Kalyan explains that as AI PCs become more common, edge computing gains importance, and HCL is preparing to release a localized vector AI engine designed to operate on edge devices [102-104]. Ganesh, for his part, stresses that merely scaling up centralised compute will not suffice and that a scale-out model is needed to serve the entire population, reinforcing the need for distributed edge deployment [151-160].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
DISAGREED WITH
Sunil Gupta, Kalyan Kumar
Argument 18
AI systems must retain human steering control to avoid becoming autonomous products
EXPLANATION
Ganesh warns that if AI is not kept aligned, it could turn into a product that operates without human direction, emphasizing the need for continuous human oversight and control over AI decision‑making.
EVIDENCE
He says, “I think the biggest challenge in not making AI aligned is that we will become products, not even consumers… we want to be in the steering wheel” ([367-369]).
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Argument 19
A not‑for‑profit Section 8 consortium model enables equitable collaboration between academia, industry, and non‑profits, fostering shared ownership of sovereign AI assets.
EXPLANATION
Ganesh explains that the AI effort is organized as a consortium of nine academic institutions coordinated through a Section 8 not‑for‑profit company, which allows both for‑profit and non‑profit entities to work together on shared research and development, ensuring that the resulting AI assets are collectively owned rather than dominated by any single commercial player.
EVIDENCE
He describes the consortium of nine academic institutions, coordinated via a Section 8 not-for-profit entity, involving over 100 researchers and master’s students, and later mentions a recent MOU with a heritage foundation in the US that further expands collaborative support, illustrating the inclusive structure of the partnership [162-169][194-204].
MAJOR DISCUSSION POINT
Collaboration & Ecosystem Building
Argument 20
Protecting open‑source foundations from commercial capture is essential; sovereign AI should prioritize open‑source development to avoid dependence on proprietary, closed‑source technologies.
EXPLANATION
Ganesh warns that while open‑source components are a key building block for sovereign AI, many open‑source projects are being acquired and turned into closed‑source or dual‑use technologies, which threatens the openness and accessibility needed for a truly sovereign ecosystem.
EVIDENCE
He notes that several open-source companies are being acquired and becoming closed-source, and that some are being classified as dual-use, limiting their availability for sovereign AI development [284-286].
MAJOR DISCUSSION POINT
Artificial intelligence
Argument 21
End‑to‑end collaboration across the entire AI stack—from algorithm research to application deployment—is necessary for sovereign AI to ensure that innovations translate into real‑world impact.
EXPLANATION
Ganesh stresses that collaboration must go beyond transactional relationships and include co‑design that spans algorithmic development, model training, and practical application layers, so that new algorithms are integrated with use‑case specific solutions and deliver tangible benefits for India.
EVIDENCE
He describes that collaboration begins with a willingness to understand the other side, co-design of models, and that new algorithms can emerge but must be carried through to application layers, emphasizing holistic, stack-wide cooperation [193-200].
MAJOR DISCUSSION POINT
Collaboration & Ecosystem Building
Ankit Bose
6 arguments, 173 words per minute, 1,450 words, 501 seconds
Argument 1
Call for coordinated policy to make compute widely available (Ankit Bose)
EXPLANATION
Ankit urges the creation of a coordinated policy framework that treats compute as a shared commodity, enabling various ecosystem players to collectively build and access GPU resources.
EVIDENCE
He asks Sunil and Kalyan what the single top action should be for sovereign capability and later frames the question about making compute a shared commodity for the country ([92-95], [215-222]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ankit calls for a coordinated policy treating compute as a shared commodity, echoing the panel’s discussion of a government-led shared compute model and its scaling [S1].
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
AGREED WITH
Sunil Gupta
Argument 2
Upskilling 150,000 developers, curriculum overhaul (Ankit Bose)
EXPLANATION
Ankit outlines NASCOM’s initiative to train 150,000 developers within six months and to revamp technical curricula (B.Tech, M.Tech, MCA) with deeper specialisation to meet AI skill demands.
EVIDENCE
He describes the target of 150,000 developers, the partnership with MIT and the education industry to rewrite curricula, and the focus on specialist training rather than generic engineering ([312-320]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
AGREED WITH
Kalyan Kumar, Ganesh Ramakrishnan
DISAGREED WITH
Kalyan Kumar
Argument 3
Requirement for cross‑functional collaboration and executive buy‑in to drive adoption (Ankit Bose)
EXPLANATION
Ankit summarises that successful AI adoption requires tightly coordinated teams with a single point of view and strong executive sponsorship to overcome organisational inertia.
EVIDENCE
He explicitly summarises three points (close collaboration, a single point of view, executive sponsorship) as the solution to adoption challenges ([144-146]).
MAJOR DISCUSSION POINT
Adoption Challenges & ROI
AGREED WITH
Brandon Mello
Argument 4
AI alignment and safety must be prioritized to prevent the development of hazardous or misaligned systems that could harm society.
EXPLANATION
Ankit warns that AI should not be pursued as a short‑term game; instead, long‑term alignment safeguards are needed to avoid creating technologies that could become dangerous.
EVIDENCE
He remarks that “AI is not a short game… we have to mitigate… we don’t align with something which is hazardous to us” indicating a call for safety-first development [352-357].
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Argument 5
Physical space constraints are a critical factor in scaling AI compute resources across the country.
EXPLANATION
Ankit highlights that beyond compute power, the availability of physical infrastructure (space) is essential for deploying large‑scale AI hardware nationwide.
EVIDENCE
He briefly notes “one thing is space” when discussing the need to keep up the pace of AI infrastructure deployment [238-239].
MAJOR DISCUSSION POINT
Infrastructure & Scaling
Argument 6
NASCOM is drafting a policy document and roadmap for sovereign AI and AGI for the Indian government
EXPLANATION
Ankit states that NASCOM is preparing a policy paper for the government that outlines a roadmap for sovereign AI and artificial general intelligence, indicating a proactive role in shaping national AI strategy.
EVIDENCE
He mentions “We are writing a current policy document for government on sovereign AI and AGI roadmap” and refers to a QR code that will provide access to this material ([419-424]).
MAJOR DISCUSSION POINT
The enabling environment for digital development
Kalyan Kumar
8 arguments, 175 words per minute, 1,697 words, 579 seconds
Argument 1
Long‑term hardware fab (India Chips Limited) to secure future compute (Kalyan Kumar)
EXPLANATION
Kalyan announces a joint venture with Foxconn to build a 16/32 nm semiconductor fab (India Chips Limited), describing it as patient capital that will eventually supply domestic chips for AI compute.
EVIDENCE
He provides details of the JV, the fab’s technology node, its OSAT nature, and the five-year timeline, emphasizing the urgency to start now ([441-447]).
MAJOR DISCUSSION POINT
Compute Infrastructure & Sovereign Capacity
AGREED WITH
Sunil Gupta, Ankit Bose
Argument 2
Building a modern data stack (vector DB, edge inferencing) as a core layer (Kalyan Kumar)
EXPLANATION
Kalyan stresses that beyond hardware, a modern data infrastructure—including vector databases, edge inferencing, and centralized data platforms—is essential for scaling AI applications.
EVIDENCE
He details HDB, Actian’s Ingres patents, the acquisition of a vector engine from CWI, and plans to release a localized vector AI engine for edge devices ([96-108]).
MAJOR DISCUSSION POINT
Data Infrastructure, Interoperability & Provenance
AGREED WITH
Sunil Gupta, Ganesh Ramakrishnan
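At its core, the edge-inferencing role of a vector engine is to store embeddings locally and answer nearest-neighbour queries without a round trip to a central cloud. The toy index below is an illustrative sketch only (names and API invented for exposition), not the engine Kalyan describes.

```python
import math


class TinyVectorStore:
    """A toy in-memory vector index: store embeddings on the device
    and rank them by cosine similarity at query time."""

    def __init__(self):
        self.items = []  # list of (id, vector) pairs

    def add(self, item_id, vector):
        self.items.append((item_id, vector))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def query(self, vector, k=1):
        """Return the ids of the k most similar stored vectors."""
        ranked = sorted(self.items,
                        key=lambda it: self._cosine(vector, it[1]),
                        reverse=True)
        return [item_id for item_id, _ in ranked[:k]]


store = TinyVectorStore()
store.add("doc-hindi", [1.0, 0.0])
store.add("doc-telugu", [0.0, 1.0])

# The nearest neighbour is resolved entirely on-device.
assert store.query([0.9, 0.1]) == ["doc-hindi"]
```

Production engines replace the linear scan with approximate indexes so lookups stay fast on constrained edge hardware, but the contract (embed, store, retrieve by similarity) is the same.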
Argument 3
Shift from services to building own IP; need smarter engineers and research focus (Kalyan Kumar)
EXPLANATION
Kalyan argues that India must pivot from a service‑oriented model to building proprietary IP, requiring engineers with systems thinking, research orientation, and deeper domain expertise.
EVIDENCE
He recounts HCL’s 2015-16 strategic shift, the need for smarter engineers over coders, and examples of IP creation across global centres ([266-292]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
AGREED WITH
Ankit Bose, Ganesh Ramakrishnan
Argument 4
Investing in fundamental science (quantum, physics) for next‑gen compute (Kalyan Kumar)
EXPLANATION
Kalyan highlights the necessity of fundamental research in quantum computing and physics to drive future compute paradigms, noting that current GPU‑centric approaches may be insufficient.
EVIDENCE
He references the PSA’s quantum roadmap, the need for new compute paradigms, and the scarcity of physics talent ([298-304]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
Argument 5
Strong regulatory oversight is needed in consumer AI to protect users from becoming the product and to manage data‑give‑to‑get dynamics.
EXPLANATION
Kalyan points out that consumer‑facing AI applications often turn users into data sources, so a robust regulatory framework is required to safeguard privacy and ensure fair data exchange.
EVIDENCE
He explains that in consumer AI “you are the product” and emphasizes the “role of the regulator” in managing the give-to-get model, highlighting the need for policy intervention to protect users [390-398].
MAJOR DISCUSSION POINT
Human‑Centric AI & Data Governance
DISAGREED WITH
Sunil Gupta
Argument 6
Enterprise AI success depends on robust data lineage and metadata discovery capabilities to enable trustworthy data‑first approaches.
EXPLANATION
Kalyan argues that many enterprises lack proper metadata and data‑lineage tools, which hampers AI deployment; establishing these capabilities is critical for reliable AI outcomes.
EVIDENCE
He notes that “most companies don’t have metadata” and stresses the need for “metadata discovery, data lineage, and cataloguing” to build trustworthy data products for AI applications [395-401].
MAJOR DISCUSSION POINT
Data Infrastructure & Trust
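The lineage capability Kalyan describes amounts to recording, for every derived dataset, the operation and inputs that produced it, so that any training set can be traced back to its raw sources. A minimal sketch, with all names hypothetical:

```python
class LineageLog:
    """Append-only record of how each dataset was derived."""

    def __init__(self):
        self._parents = {}  # dataset -> (operation, input datasets)

    def record(self, output, operation, inputs):
        self._parents[output] = (operation, list(inputs))

    def trace(self, dataset):
        """Walk the derivation graph back to raw source datasets."""
        if dataset not in self._parents:
            return [dataset]  # no recorded parents: a raw source
        _, inputs = self._parents[dataset]
        sources = []
        for parent in inputs:
            sources.extend(self.trace(parent))
        return sources


log = LineageLog()
log.record("cleaned_claims", "deduplicate", ["raw_claims"])
log.record("training_set", "join", ["cleaned_claims", "raw_labels"])

# Every derived dataset is traceable to its raw origins.
assert log.trace("training_set") == ["raw_claims", "raw_labels"]
```

The value for trustworthy AI is that a model’s training data can always be audited back to its provenance, which is exactly the metadata most enterprises lack today.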
Argument 7
Building sovereign software products by design to ensure national control over technology
EXPLANATION
Kalyan emphasizes that HCL Software not only delivers services but also creates software products that are engineered to be sovereign by design, ensuring that critical enterprise tools remain under Indian ownership and control.
EVIDENCE
He describes his role as “We are … building software products which are sovereign by design” while outlining HCL’s scale and revenue ([15-17]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
Argument 8
AI development should prioritize fewer, smarter engineers with systems thinking over large numbers of coders.
EXPLANATION
Kalyan argues that the future of sovereign AI in India depends on hiring a smaller pool of highly skilled engineers who can think system‑wide and conduct research, rather than mass hiring of generic coders. This shift is essential to move from a service‑oriented model to building proprietary IP.
EVIDENCE
He states, “You need lesser people, smarter people. You need engineers more than coders. See what’s happening is that we’re building coders. You need engineers, people who think systems thinking, you need people who are research-bent” ([287-292]).
MAJOR DISCUSSION POINT
Skill Development & Indigenous IP Building
DISAGREED WITH
Ankit Bose
Brandon Mello
4 arguments, 147 words per minute, 1,171 words, 475 seconds
Argument 1
ROI invisibility: CFOs cannot quantify benefits, limiting pilot approvals (Brandon Mello)
EXPLANATION
Brandon points out that many CFOs lack tools or data to calculate ROI for AI projects, leading to difficulty in securing budgets and causing pilots to stall.
EVIDENCE
He cites that a third of CFOs cannot quantify ROI and only one in ten have tools to measure it, which hampers project approval ([119-124]).
MAJOR DISCUSSION POINT
Adoption Challenges & ROI
Argument 2
Organizational friction and lack of executive sponsorship stall AI projects (Brandon Mello)
EXPLANATION
He describes how departmental silos, procurement bottlenecks, and the absence of executive champions cause AI initiatives to drag on for months or years, killing momentum.
EVIDENCE
He details friction across IT, procurement, and the need for executive sponsorship, noting that without it projects never get approved ([129-138], [139-142]).
MAJOR DISCUSSION POINT
Adoption Challenges & ROI
AGREED WITH
Ankit Bose
Argument 3
Real‑world use cases, tool consolidation, language localisation, and data security as adoption enablers (Brandon Mello)
EXPLANATION
Brandon argues that AI adoption improves when solutions address concrete everyday problems, consolidate fragmented tools, support India’s multilingual landscape, and assure data security.
EVIDENCE
He mentions GenSpark’s focus on office-work automation, the need to reduce tool-switching time, challenges of multiple Indian languages, and concerns about data handling ([336-351]).
MAJOR DISCUSSION POINT
Adoption Challenges & ROI
Argument 4
Uncertainty around human‑AI interaction; need to define safe engagement (Brandon Mello)
EXPLANATION
Brandon reflects on the broader societal uncertainty about how humans will interact with increasingly capable AI systems, calling for clearer frameworks to manage this relationship.
EVIDENCE
He shares a personal anecdote about being asked how humans should interact with AI, noting the rapid evolution of AI and the lack of established guidelines ([358-366]).
MAJOR DISCUSSION POINT
Ethical Alignment & Human‑Centric AI
Agreements
Agreement Points
India faces a critical shortage of GPU compute and must scale infrastructure dramatically to enable sovereign AI at national scale.
Speakers: Sunil Gupta, Ankit Bose, Kalyan Kumar
GPU scarcity and need for massive scale (Sunil Gupta)
Call for coordinated policy to make compute widely available (Ankit Bose)
Long‑term hardware fab (India Chips Limited) to secure future compute (Kalyan Kumar)
Sunil stresses that the lack of abundant GPU resources is the core bottleneck for AI adoption and projects a need for millions of GPUs ([54-60][70-78]). Ankit asks whether compute can be treated as a shared commodity and a national resource ([215-222]). Kalyan points to the upcoming India Chips Limited fab as a strategic move to secure future domestic chip supply ([441-447]).
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with multiple reports highlighting India’s need to expand from tens of thousands to hundreds of thousands of GPUs for population-scale AI, as noted in industry briefings and policy papers [S45][S46][S47].
A government‑led shared compute facility, where multiple providers contribute GPUs at low cost, is a viable model to democratise access to AI resources.
Speakers: Sunil Gupta, Ankit Bose
Shared compute facility as a commodity, government empaneling and scaling (Sunil Gupta)
Call for coordinated policy to make compute widely available (Ankit Bose)
Sunil describes the empaneling process that has created a pool of 38,000 GPUs, with an additional 20,000 announced, offered to startups at very low prices ([224-236]). Ankit frames the need for a policy that treats compute as a shared commodity for the country ([215-222]).
POLICY CONTEXT (KNOWLEDGE BASE)
The consensus on heterogeneous, shared compute for democratizing AI is documented in the Heterogeneous Compute for Democratizing Access report and the AI Summit working group on democratizing resources [S41][S58][S59].
India must retain its data domestically and build robust data‑governance mechanisms (catalogues, contracts, provenance) to ensure sovereignty and trust.
Speakers: Sunil Gupta, Ganesh Ramakrishnan, Kalyan Kumar
India’s rich datasets as an advantage, but must be hosted locally (Sunil Gupta)
Need for data provenance, metadata and cataloguing for trustworthy AI (Professor Ganesh Ramakrishnan)
Building a modern data stack (vector DB, edge inferencing) as a core layer (Kalyan Kumar)
Sunil notes that India creates/consumes 20 % of global data but only 3 % is hosted locally ([254-257]). Ganesh argues for data products, catalogs and contracts to enable participation and provenance ([165-176]). Kalyan outlines HCL’s work on vector databases and edge-ready data platforms as essential infrastructure ([96-108]).
POLICY CONTEXT (KNOWLEDGE BASE)
Policy emphasis on data localisation and governance is reflected in discussions on digital sovereignty and regulatory capacity for AI testing [S44][S59].
Interoperability across all layers of the AI stack and a scale‑out architecture are essential to serve India’s diverse, billions‑strong user base.
Speakers: Ganesh Ramakrishnan, Sunil Gupta, Kalyan Kumar
Interoperability at every stack layer, data products and contracts to enable participation (Professor Ganesh Ramakrishnan)
Shared compute facility as a commodity, government empaneling and scaling (Sunil Gupta)
Edge‑focused AI engines and vector databases are essential for scaling AI across India’s diverse and distributed user base (Kalyan Kumar)
Ganesh stresses that interoperability encourages participation and supports scale-out rather than just scale-up ([151-160]). Sunil’s shared-compute model with multiple providers embodies a practical scale-out approach ([224-236]). Kalyan highlights edge-ready vector engines to bring AI to the periphery ([102-104]).
POLICY CONTEXT (KNOWLEDGE BASE)
Interoperability and scale-out architecture are highlighted as key for heterogeneous compute and scaling GPU numbers to millions [S41][S45].
India’s AI future will be voice‑first and multilingual, requiring models that support many local languages.
Speakers: Sunil Gupta, Ganesh Ramakrishnan
India’s AI future will be voice‑first, requiring multilingual voice models to reach the mass population (Sunil Gupta)
Voice‑first AI with extensive multilingual support is essential for Indian adoption (Professor Ganesh Ramakrishnan)
Sunil points out that AI in India will be accessed via voice on feature phones, covering native languages ([244-247]). Ganesh reinforces this by noting their speech-to-text model covers 22 languages and uses mixture-of-experts for Indian languages ([245-247]).
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple panels stressed a voice-first, multilingual strategy for Indian languages, citing Bhashini expansion and the need for Indian-language models [S55][S57][S61].
Human‑in‑the‑loop oversight and AI alignment are non‑negotiable to prevent AI from becoming an autonomous product that harms society.
Speakers: Sunil Gupta, Ganesh Ramakrishnan, Ankit Bose
Preserve human‑in‑the‑loop, prevent AI from becoming the product itself (Sunil Gupta)
Need for data provenance, metadata and cataloguing for trustworthy AI (Professor Ganesh Ramakrishnan)
AI alignment and safety must be prioritized to prevent the development of hazardous or misaligned systems (Ankit Bose)
Sunil warns that AI must remain a tool with human steering and not acquire human emotions ([385-386]). Ganesh calls for provenance and glass-box models to keep AI aligned ([371-376]). Ankit stresses the need for long-term alignment safeguards ([352-357]).
POLICY CONTEXT (KNOWLEDGE BASE)
Trusted AI at scale and sovereign AI risk frameworks call for human-in-the-loop controls and alignment mechanisms [S43][S44][S50].
Upskilling developers, revising curricula and fostering specialist talent are essential to build indigenous AI IP.
Speakers: Kalyan Kumar, Ankit Bose, Ganesh Ramakrishnan
Shift from services to building own IP; need smarter engineers and research focus (Kalyan Kumar). Upskilling 150,000 developers and overhauling curricula (Ankit Bose). Academic‑industry consortiums to co‑design models and foster collaboration (Professor Ganesh Ramakrishnan).
Kalyan argues for a pivot to building IP with fewer, smarter engineers and deeper domain expertise ([266-292]). Ankit outlines NASCOM’s target to train 150,000 developers and rewrite technical curricula ([312-320]). Ganesh highlights a nine-institution consortium that co-designs models and builds IP ([162-169]).
POLICY CONTEXT (KNOWLEDGE BASE)
Policy recommendations for AI upskilling emphasize a needs-based, quality-focused approach rather than sheer quantity [S52][S53][S56].
Successful AI adoption requires close cross‑functional collaboration, a single point of view and strong executive sponsorship.
Speakers: Brandon Mello, Ankit Bose
Organizational friction and lack of executive sponsorship stall AI projects (Brandon Mello). Requirement for cross‑functional collaboration and executive buy‑in to drive adoption (Ankit Bose).
Brandon identifies ROI invisibility, departmental silos and missing executive champions as reasons pilots fail ([119-124][129-142]). Ankit summarises that close collaboration, a single point of view and executive sponsorship are needed to solve adoption challenges ([144-146]).
POLICY CONTEXT (KNOWLEDGE BASE)
High consensus on collaborative, cross-sector governance for AI is documented in AI policy roadmaps and summit discussions [S51][S58][S59].
Government should fund the first inferencing cycle of sovereign models to catalyse early adoption and create sustainable revenue streams.
Speakers: Sunil Gupta, Ankit Bose
Government should fund the first cycle of inferencing on sovereign AI models to accelerate adoption and create sustainable revenue streams (Sunil Gupta). Call for coordinated policy to make compute widely available (Ankit Bose).
Sunil urges that beyond training, the government should support the initial inferencing phase so users can start paying for services and generate revenue ([236-241]). Ankit frames the broader policy need for shared compute as a national resource ([215-222]).
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for catalytic government funding of early inferencing and subsidized GPU access appear in policy briefs on AI growth and democratizing resources [S40][S58][S45].
Similar Viewpoints
Both stress that without affordable, abundant compute resources AI projects cannot progress—Sunil highlights the hardware shortage while Brandon points out that lack of ROI measurement (and thus funding) stalls pilots ([84][119-124]).
Speakers: Sunil Gupta, Brandon Mello
GPU scarcity and need for massive scale (Sunil Gupta). ROI invisibility: CFOs cannot quantify benefits, limiting pilot approvals (Brandon Mello).
Both argue that a modern, interoperable data infrastructure—including vector databases and edge capabilities—is essential for scalable sovereign AI ([96-108][151-160]).
Speakers: Ganesh Ramakrishnan, Kalyan Kumar
Interoperability at every stack layer, data products and contracts to enable participation (Professor Ganesh Ramakrishnan). Building a modern data stack (vector DB, edge inferencing) as a core layer (Kalyan Kumar).
Both propose a policy‑driven, shared‑compute model where the government coordinates providers to make GPU resources widely accessible at low cost ([215-222][224-236]).
Speakers: Ankit Bose, Sunil Gupta
Call for coordinated policy to make compute widely available (Ankit Bose). Shared compute facility as a commodity, government empaneling and scaling (Sunil Gupta).
Unexpected Consensus
Both industry leaders (Sunil Gupta) and startup‑focused experts (Brandon Mello) agree that affordable compute is a prerequisite for AI adoption, despite their different market positions.
Speakers: Sunil Gupta, Brandon Mello
Make compute available at low, affordable prices for startups to accelerate AI adoption (Sunil Gupta). ROI invisibility: CFOs cannot quantify benefits, limiting pilot approvals (Brandon Mello).
Sunil explicitly mentions low-price access to GPUs for startups ([84]), while Brandon highlights that without clear ROI (often a cost issue) pilots fail ([119-124]). Their convergence on cost as a barrier was not anticipated given their differing perspectives.
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus on affordable compute is reflected in industry-government-academic alignment reports and Sunil Gupta’s statements on compute gaps [S41][S46][S47].
Consensus between a senior government‑linked AI provider (Sunil Gupta) and an academic researcher (Ganesh Ramakrishnan) on the necessity of a voice‑first, multilingual AI strategy for mass adoption.
Speakers: Sunil Gupta, Ganesh Ramakrishnan
India’s AI future will be voice‑first, requiring multilingual voice models to reach the mass population (Sunil Gupta). Voice‑first AI with extensive multilingual support is essential for Indian adoption (Professor Ganesh Ramakrishnan).
Despite coming from different sectors, both stress that voice-based, multilingual models are the key to reaching billions of users ([244-247][245-247]). This alignment across industry and academia was not explicitly highlighted earlier.
POLICY CONTEXT (KNOWLEDGE BASE)
The alignment between government-linked providers and academic researchers on voice-first multilingual AI is documented in panels discussing Indian language models [S55][S46][S57].
Overall Assessment

The panel shows strong convergence on three pillars: (1) massive, affordable compute infrastructure (including shared‑compute models and future domestic chip fab); (2) robust data governance and interoperable data stacks; (3) human‑centric, multilingual, voice‑first AI with clear alignment and executive sponsorship. Skill development, collaborative consortia and government support for early inferencing are also widely endorsed.

High consensus across industry, academia and policy makers, indicating a unified national agenda for sovereign AI. The alignment suggests that forthcoming policies are likely to focus on shared compute facilities, data sovereignty frameworks, and large‑scale skill‑building programmes, which could accelerate India’s AI capabilities while ensuring ethical and inclusive outcomes.

Differences
Different Viewpoints
Approach to AI skill development and workforce upskilling
Speakers: Ankit Bose, Kalyan Kumar
Upskilling 150,000 developers and overhauling curricula (Ankit Bose). AI development should prioritize fewer, smarter engineers with systems thinking over large numbers of coders (Kalyan Kumar).
Ankit proposes training 150,000 developers in six months and revamping curricula to create many specialists [312-320], while Kalyan argues that the focus should be on a smaller pool of highly skilled engineers with systems-thinking abilities rather than mass coder hiring [287-292].
POLICY CONTEXT (KNOWLEDGE BASE)
Moderate disagreement on skill development priorities (fundamental vs applied) was noted in the AI workforce transformation discussion [S53].
Strategy for scaling compute resources for sovereign AI
Speakers: Sunil Gupta, Ganesh Ramakrishnan, Kalyan Kumar
GPU scarcity and need for massive scale (Sunil Gupta). Shared compute facility as a commodity, government empaneling and scaling (Sunil Gupta). Scale‑out, not just scale‑up, is required to deliver AI services to the entire population (Professor Ganesh Ramakrishnan). Edge‑focused AI engines and vector databases are essential for scaling AI across India’s diverse and distributed user base (Kalyan Kumar).
Sunil stresses that India lacks enough GPUs and proposes a centralized shared-compute pool created through government empaneling, aiming to increase GPU numbers to tens of thousands [54-60][70-78][224-236], whereas Ganesh (and Kalyan) argue that merely scaling up central capacity is insufficient and that a distributed, scale-out architecture with edge-ready engines is needed to reach the whole population [151-160][102-104].
Who should fund the initial inferencing phase of sovereign models
Speakers: Sunil Gupta, Kalyan Kumar
Government should fund the first cycle of inferencing on sovereign AI models to accelerate adoption and create sustainable revenue streams (Sunil Gupta). Strong regulatory oversight is needed in consumer AI to protect users from becoming the product and to manage data‑give‑to‑get dynamics (Kalyan Kumar).
Sunil calls for government financial support for the first inferencing cycle to enable early adoption and revenue generation [236-242], while Kalyan emphasizes regulatory oversight and suggests that private sector and market mechanisms should drive adoption, implying less direct government funding for inferencing [390-398].
Data strategy: localisation vs monetisation
Speakers: Ganesh Ramakrishnan, Sunil Gupta
Treat data as a monetisable asset through data products, catalogs and contracts to foster participation and generate economic value (Professor Ganesh Ramakrishnan). India’s rich datasets are an advantage, but they must be hosted locally (Sunil Gupta).
Ganesh proposes building data products, catalogs and contracts to monetize data and encourage ecosystem participation [165-176], whereas Sunil highlights that most Indian data is stored abroad and calls for domestic hosting to ensure sovereignty, focusing on localisation rather than monetisation [254-257].
Unexpected Differences
Long‑term hardware production vs reliance on imported GPUs
Speakers: Kalyan Kumar, Sunil Gupta
Long‑term hardware fab (India Chips Limited) to secure future compute (Kalyan Kumar). GPU scarcity and need for massive scale (Sunil Gupta).
Kalyan announces a joint venture to build a domestic semiconductor fab for future AI chips, emphasizing patient capital and a five-year timeline [441-447], while Sunil focuses on acquiring large numbers of NVIDIA GPUs from abroad to meet current demand, indicating differing views on the primary source of compute hardware [54-60][70-78].
Quantity vs quality in AI workforce development
Speakers: Ankit Bose, Kalyan Kumar
Upskilling 150,000 developers and overhauling curricula (Ankit Bose). AI development should prioritize fewer, smarter engineers with systems thinking over large numbers of coders (Kalyan Kumar).
Ankit’s plan targets mass training of 150,000 developers and curriculum changes to quickly expand the talent pool [312-320], whereas Kalyan argues for a strategic shift toward hiring fewer but more capable engineers with deep systems expertise, challenging the mass-training approach [287-292].
POLICY CONTEXT (KNOWLEDGE BASE)
Debate over focusing on sheer numbers of trainees versus depth of expertise is highlighted in upskilling commentary emphasizing quality over quantity [S52].
Overall Assessment

The panel converged on the need for a sovereign AI ecosystem but diverged on how to achieve it. Major friction points include the preferred method of scaling compute (centralised shared pool vs distributed edge‑centric scale‑out), the role of government funding versus regulatory or private‑sector mechanisms for early inferencing, and contrasting philosophies on workforce development (mass upskilling vs elite engineering). These disagreements reflect differing priorities between immediate infrastructure deployment and longer‑term strategic autonomy.

Moderate to high – while all participants share the overarching goal of sovereign AI, the contrasting approaches to compute provisioning, funding models, and talent strategy could impede coordinated policy implementation unless reconciled.

Partial Agreements
Both agree that India must build sovereign AI capability and increase compute capacity, but Sunil favours a centralized shared‑compute model while Ganesh stresses distributed scale‑out and edge deployment as essential for nationwide reach [54-60][70-78][151-160].
Speakers: Sunil Gupta, Ganesh Ramakrishnan
GPU scarcity and need for massive scale (Sunil Gupta). Scale‑out, not just scale‑up, is required to deliver AI services to the entire population (Professor Ganesh Ramakrishnan).
Takeaways
Key takeaways
Compute scarcity is the primary bottleneck for sovereign AI in India; massive scaling of GPU resources is required.
A shared, government‑empanelled compute facility is being built, but it must be expanded to millions of GPUs for training and inferencing.
Long‑term hardware self‑reliance (e.g., the India Chips Limited fab) is essential to secure future compute capacity.
A modern data stack, including vector databases, edge inferencing, metadata catalogues and data provenance, is critical for trustworthy AI.
Interoperability at every layer of the AI stack enables participation, choice, and the ability to combine multiple models and services.
Adoption is hindered by ROI invisibility, lack of executive sponsorship and organizational friction; real‑world, language‑localised use cases and tool consolidation are needed.
Skill development must shift from a services‑only model to building indigenous IP; plans include upskilling 150,000 developers and revising curricula.
Ethical alignment and human‑in‑the‑loop design are required so that AI serves the masses rather than becoming a product or a risk.
Collaboration between academia, industry and government (e.g., the nine‑institution consortium and the Amrita‑NASCOM MOU) is seen as the engine for sovereign AI progress.
Resolutions and action items
Launch of the Sovereign AI research report by Amrita Vishwa Vidyapeetham.
Signing of an MOU between Amrita Vishwa Vidyapeetham and NASCOM to foster cooperation.
Government to continue empaneling and subsidising GPU capacity, with a target of 38,000 GPUs and an additional 20,000 announced.
NASCOM to create a policy document on the sovereign AI and AGI roadmap (QR code provided for feedback).
NASCOM initiative to upskill 150,000 developers across India within six months.
HCL announced the upcoming joint venture ‘India Chips Limited’ to build a 16/32 nm fab for future compute needs.
Call for the government to fund the first cycle of inferencing on trained models to jump‑start adoption.
Panelists invited to stay for a group photo and to provide feedback on the report via the QR code.
Unresolved issues
Financing and logistics for scaling GPU infrastructure from tens of thousands to the millions needed for nationwide inferencing.
Concrete mechanisms to ensure that India’s massive data volumes are hosted locally and governed securely.
Specific business models that will make large‑scale inferencing financially sustainable after the initial government subsidy.
Standardised frameworks for data provenance, metadata and contracts that can be adopted across diverse sectors.
Clear guidelines for safe human‑AI interaction and alignment to prevent misuse or loss of control.
How to coordinate and govern the interoperability of heterogeneous models, tools and platforms.
Strategies to overcome ROI invisibility and to institutionalise executive sponsorship for AI projects.
Suggested compromises
Treat compute as a shared commodity: multiple private providers contribute GPUs, with government‑driven price competition and empanelment keeping costs low.
Adopt interoperability as a design principle, allowing multiple vendors and open‑source alternatives to coexist rather than enforcing a single stack.
Combine government funding for model training and the initial phase of inferencing, then transition to private‑sector financing once usage scales.
Balance scaling up (larger models) with scaling out (distributed edge inferencing) to meet both latency and capacity requirements.
Encourage co‑design between academia and industry to leverage domain expertise while sharing development costs.
Thought Provoking Comments
The biggest problem for taking AI to the masses in India is how to make compute available in an abundant way – we need a shared, low‑price GPU infrastructure that becomes a hygiene factor.
He pinpointed the core bottleneck (compute) that underlies all other AI capabilities, moving the conversation from abstract potential to a concrete, actionable infrastructure challenge.
This comment shifted the discussion toward concrete government‑industry collaboration on GPU provisioning, prompting follow‑up questions about shared commodity compute and leading Sunil and others to describe the empanelment model and scaling plans.
Speaker: Sunil Gupta
When you look at sovereign AI, the data layer is the most important – we need a centralized‑to‑edge data platform, vector DBs, and data contracts/catalogs to ensure quality and interoperability.
He introduced a less‑discussed but critical component – the data stack – and highlighted specific technical assets (Actian, Vector engine) that differentiate HCL’s approach.
Opened a new thread about data infrastructure, leading Ganesh to expand on data products and interoperability, and deepening the technical depth of the conversation beyond compute.
Speaker: Kalyan Kumar
95% of AI pilots never make it to production because of ROI invisibility, data‑trust/compliance friction, and the champion problem – lack of executive sponsorship.
He reframed the challenge from a purely technical issue to a business‑adoption problem, identifying three systemic barriers that explain why many AI projects stall.
Shifted the tone from infrastructure to adoption, prompting Ankit to summarise the need for collaborative teams and executive sponsorship, and influencing later remarks about user‑centric design.
Speaker: Brandon Mello
Interoperability at every layer encourages participation, offers alternatives, and enables scaling out – without it we risk a single‑vendor lock‑in and limit the ecosystem.
He introduced the concept of interoperability as a strategic principle for sovereign AI, linking technical design to ecosystem health and policy.
Guided the discussion toward open standards and collaborative models, influencing Sunil’s comments on shared compute and Kalyan’s points on data contracts.
Speaker: Ganesh Ramakrishnan
Collaboration is not just transactional; it requires empathy across domains – linguists, computer scientists, and policymakers must co‑design models, as shown by our multilingual mixture‑of‑experts architecture.
He provided a concrete example of co‑design leading to technical innovation, emphasizing interdisciplinary empathy as a moat for India’s AI development.
Reinforced the earlier interoperability theme, added depth to the discussion on building Indian‑specific models, and inspired Kalyan’s remarks on shifting from service to building IP.
Speaker: Ganesh Ramakrishnan
The skill shift needed is from hiring many coders to fewer, smarter engineers who can do systems thinking, research, and even quantum‑level compute – we must invest in fundamental science and not just short‑term coding talent.
He challenged the prevailing talent strategy, urging a long‑term, research‑oriented approach and linking it to future compute paradigms like quantum.
Prompted a broader view of talent development, influencing Ankit’s mention of upskilling 150k developers and setting the stage for discussions on education reform.
Speaker: Kalyan Kumar
AI should not become a product that consumes humans; we must keep humans in the loop, ensure provenance at every stack level, and avoid building ‘toys’ that don’t serve the masses.
He brought an ethical and purpose‑driven perspective, echoing the summit’s impact focus and warning against misaligned AI development.
Re‑centered the conversation on societal impact, leading to consensus among panelists about human‑centric AI and influencing the closing remarks about sovereignty and regulation.
Speaker: Sunil Gupta
Break AI into four domains – consumer, enterprise, government, and critical national infrastructure – each needs its own regulatory and choice framework; sovereignty is about giving users choice of platform.
He provided a structured taxonomy for sovereign AI, clarifying that a one‑size‑fits‑all approach won’t work and emphasizing choice as a core sovereign principle.
Synthesised earlier points into a clear framework, helping wrap up the discussion and guiding the final emphasis on policy, regulation, and multi‑vendor ecosystems.
Speaker: Kalyan Kumar
Overall Assessment

The discussion was driven forward by a series of pivotal insights that moved the conversation from high‑level enthusiasm to concrete challenges and solutions. Sunil Gupta’s focus on compute scarcity anchored the dialogue in infrastructure realities, while Kalyan Kumar’s emphasis on the data stack and skill transformation broadened the technical and talent dimensions. Brandon Mello shifted the lens to adoption barriers, prompting a consensus on the need for executive sponsorship. Ganesh Ramakrishnan’s calls for interoperability and interdisciplinary co‑design introduced a strategic, ecosystem‑wide perspective that tied together infrastructure, data, and talent. Together, these comments created a layered narrative: first identifying the foundational bottlenecks, then outlining the necessary technical and human infrastructure, and finally framing the ethical and policy imperatives for sovereign AI in India. This progression shaped a nuanced, actionable roadmap rather than a purely promotional dialogue.

Follow-up Questions
How can compute be treated as a shared commodity across the ecosystem to meet India’s massive GPU demand?
Addressing the shortage of GPUs is critical for scaling sovereign AI models and inference workloads for a billion‑plus population.
Speaker: Ankit Bose
What frameworks and standards are needed to ensure interoperability at every layer of the AI stack?
Interoperability enables participation, alternative solutions, and scaling across diverse models, data, and hardware, which is essential for a sovereign AI ecosystem.
Speaker: Ganesh Ramakrishnan
How should India develop data catalogs, data products, and data contracts to monetize data while respecting ownership rights?
Creating clear data ownership and monetization mechanisms is vital for building a sustainable data economy and supporting AI model training.
Speaker: Ganesh Ramakrishnan
What research is needed to build a robust data platform (including vector databases, edge inference, and data contracts) for sovereign AI?
A strong data infrastructure underpins model quality, scalability, and distributed inference, especially as AI workloads move to the edge.
Speaker: Kalyan Kumar
How can India accelerate skill development to shift from service‑oriented talent to engineering and research talent for AI and emerging technologies like quantum computing?
Building a workforce of engineers and researchers, rather than just coders, is necessary for creating indigenous IP and long‑term AI leadership.
Speaker: Kalyan Kumar
What strategies can mitigate the three adoption barriers identified (ROI invisibility, data‑trust/compliance friction, and lack of executive sponsorship) in Indian enterprises?
Overcoming these barriers is essential to move AI pilots from proof‑of‑concept to production at scale.
Speaker: Brandon Mello
How can the government support the first cycle of AI model inferencing to enable revenue‑generating use cases?
Funding inference infrastructure is needed to bridge the gap between model training and real‑world adoption, especially for sector‑specific applications.
Speaker: Sunil Gupta
What approaches can ensure AI alignment and provenance throughout the data‑to‑model pipeline to prevent AI from becoming a mere product?
Maintaining alignment and traceability safeguards ethical use and keeps humans in control of AI outcomes.
Speaker: Ganesh Ramakrishnan
How can India increase domestic hosting of its own data (currently only ~3% is hosted locally) to strengthen sovereignty?
Local data residency reduces reliance on foreign infrastructure and supports secure, sovereign AI development.
Speaker: Sunil Gupta
What are the technical and policy steps required to build a national AI/AGI roadmap, including quantum‑AGI capabilities?
A comprehensive roadmap guides coordinated investment, research, and regulation needed for long‑term AI leadership.
Speaker: Ankit Bose
How can AI be designed for voice‑based interaction on low‑end devices (e.g., feature phones) to reach the broader Indian population?
Enabling AI access on basic devices expands inclusion and leverages the massive smartphone‑plus‑feature‑phone user base.
Speaker: Sunil Gupta
What governance models are needed to balance choice of compute providers (hyperscalers, sovereign clouds, private infra) while ensuring security and sovereignty?
Providing multiple compute options safeguards against vendor lock‑in and supports national security objectives.
Speaker: Kalyan Kumar
What mechanisms can be put in place to capture and preserve provenance at each step of the AI stack (data aggregation, curation, model performance) for transparency?
Provenance tracking enhances trust, auditability, and compliance with emerging AI regulations.
Speaker: Ganesh Ramakrishnan
How can the AI community develop and adopt interoperable data contracts that enable seamless data sharing across academia, industry, and government?
Standardized data contracts facilitate collaboration, data monetization, and compliance with data‑ownership principles.
Speaker: Ganesh Ramakrishnan
What research is required to explore alternative compute paradigms (e.g., quantum, specialized ASICs) for AI workloads beyond traditional GPUs?
Exploring new hardware could address the scaling limits of GPU‑based compute and provide a strategic advantage.
Speaker: Kalyan Kumar

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.