India’s AI Future: Sovereign Infrastructure and Innovation at Scale
20 Feb 2026 16:00h - 17:00h
Summary
The session opened with the launch of Amrita Vishwa Vidyapeetham’s sovereign AI research report and a panel of industry and academic leaders, including representatives from NASCOM AI, IIT Bombay, Yotta, Tata Communications, HCL Software and GenSpark [1][3-5]. Ankit Bose, head of AI at NASCOM, introduced the discussion and asked each panelist to identify the single most important action for India to build sovereign AI capability [12][44-45].
Sunil Gupta identified compute scarcity as India’s main bottleneck, noting that the country lacked sufficient GPU resources until recent partnerships secured nearly 10,000 GPUs, and that a government-backed shared compute facility now aims to reach 38,000, with an additional 20,000 announced [54-57][66-68][235-237]. He argued that scaling this infrastructure and subsidising the first wave of model inference are essential for widespread adoption, because millions of GPUs will be needed for both training and serving AI services across sectors [239-242][247-250][254-257].
Kalyan Kumar stressed that sovereign AI also requires a robust data layer, describing HCL’s acquisition of vector-DB technology, the upcoming localized vector AI engine, and the need for data platforms, catalogs and contracts to ensure high-quality, distributed data for edge inference [96-104][106-108]. He warned that focusing only on hardware and models ignores the necessity of a data-centric approach, which he sees as the foundation for scaling AI applications [105-107].
Ganesh Ramakrishnan advocated interoperability at every stack layer, arguing that it enables participation, alternative solutions and scalable collaboration among academia, industry and government [151-156][162-168]. He linked interoperability to data ownership, proposing a data-product and data-catalog framework that respects creators’ rights and facilitates secure sharing [170-176][178-181]. He also emphasized co-design and collaboration across institutions, citing his consortium of nine academic bodies and recent international MOUs as examples of how joint effort can accelerate model development [191-199][203-209].
Brandon Mello identified three adoption barriers: “ROI invisibility” where CFOs cannot quantify returns, data-trust and compliance friction, and the lack of executive sponsorship, all of which stall AI pilots from reaching production [119-124][129-134][139-142]. He suggested that clear ROI metrics, streamlined governance and dedicated champions are needed to move projects beyond the pilot stage [125-128][135-138][140-142].
The panel collectively agreed that building sovereign AI requires coordinated compute provisioning, interoperable data infrastructure, skill development for engineers and researchers, and sustained government-industry collaboration [216-222][266-274][426-432]. The discussion concluded with a call for continued collaboration, the upcoming NASCOM “7 AI” initiative, and the intention to draft a national sovereign AI and AGI roadmap, underscoring the strategic importance of these efforts for India and the Global South [414-419][426-433].
Keypoints
Major discussion points
– Compute scarcity is the bottleneck for sovereign AI in India.
Sunil Gupta stresses that while India has talent, data, and market size, it lacked sufficient GPU-based compute, which he and his company have been trying to supply (e.g., “the core problem … how do you make compute available in an abundant way” [46-60]; “we are running almost 10,000 chips” [66-68]; “India will need multiple million GPUs” [75-78]). He later describes the government-backed shared-compute facility that now aggregates ~38,000 GPUs and is being expanded (e.g., “the shared compute facility … combination of the compute capacity created by multiple providers” [224-236]; “India will be going to 50-60,000 GPUs … we need millions of GPUs” [239-258]).
– A robust data stack and edge infrastructure are essential alongside hardware.
Kalyan Kumar outlines the need for a centralized-yet-distributed data platform, vector databases, and edge-ready inference, arguing that “the data platform is going to become very important” for scaling AI deployments ([96-108]).
– Interoperability and ecosystem collaboration are critical for scaling and inclusivity.
Ganesh Ramakrishnan calls for “interoperability at every layer” to enable participation, alternative solutions, and data-product ecosystems ([151-168]). He later adds that co-design, academic-industry consortia, and open data contracts are the “biggest takeaway” for building India’s AI moat ([193-205]).
– Adoption failures stem from organizational and ROI challenges, not just technology.
Brandon Mello points out that 95% of AI pilots never reach production because of “ROI invisibility,” “data and trust and compliance friction,” and the “champion problem” (lack of executive sponsorship) ([119-143]). He later reinforces that solving real-world use cases, consolidating tools, and addressing language and data-security concerns are needed for mass adoption ([335-351]).
– Skill development and a shift from services to building IP are required for long-term sovereignty.
Kalyan emphasizes moving from a service-oriented model to building proprietary products, hiring “engineers, not just coders,” and investing in deep research (including quantum compute) to create home-grown IP ([266-306]). NASCOM’s parallel effort to up-skill 150,000 developers and revamp curricula is cited as a concrete step ([312-327]).
Overall purpose / goal of the discussion
The panel was convened to launch the Sovereign AI Research Report (Amrita Vishwa Vidyapeetham) and to chart a coordinated roadmap for India, and the broader Global South, to achieve AI sovereignty. Participants were asked to identify the single most impactful action their domain could take, with the aim of aligning industry, academia, and government around concrete priorities (e.g., compute availability, data infrastructure, interoperability, talent, and adoption pathways) that will enable India to develop, deploy, and control its own AI models and services.
Overall tone and its evolution
– Opening (0-5 min): Formal and celebratory, with introductions and acknowledgments of the report launch.
– Mid-session (5-30 min): Shifts to a problem-solving tone; speakers present urgent challenges (compute shortage, data stack gaps) and propose strategic solutions, often with a sense of urgency (“we need millions of GPUs,” [239-258]).
– Later segment (30-45 min): Becomes collaborative and optimistic, emphasizing interoperability, consortium building, and skill development as enablers.
– Closing (45-55 min): Returns to a call-to-action tone, urging participants to contribute to shared resources (QR code, MOU) and stressing the long-term, nation-building mission of sovereign AI.
Overall, the discussion moves from introductory formality to a focused, solution-oriented dialogue, ending with a unifying, forward-looking call for collective action.
Speakers
– Speaker 1
Role / Title: Moderator / Event host (introduced the panel and announced the report launch)
– Sunil Gupta
Role / Title: Co-founder, MD and CEO of Yotta
Areas of Expertise: Data centre operations, sovereign cloud infrastructure, large-scale GPU compute for AI models
Affiliation: Yotta – runs data-centre campuses and built the Sovereign Cloud in India [S1][S2]
– Ganesh Ramakrishnan (also listed as Professor Ganesh Ramakrishnan)
Role / Title: Professor, IIT Bombay (distinguished panelist)
Areas of Expertise: Sovereign AI, foundation model development, interoperability, multilingual AI for India
Affiliation: IIT Bombay [S6][S7]
– Ankit Bose
Role / Title: Head of AI, NASCOM
Areas of Expertise: AI strategy and implementation for NASCOM, developer enablement, AI education initiatives
– Kalyan Kumar
Role / Title: Chief Product Officer (CPO), HCL Software
Areas of Expertise: Enterprise software products, sovereign-by-design software, data platforms, vector databases, AI infrastructure
– Brandon Mello (referred to as Brenno Mello)
Role / Title: Founding GTM Executive, GenSpark (Genspark.ai)
Areas of Expertise: AI product commercialization, go-to-market strategy, enterprise AI adoption, agentic AI for knowledge workers [S14][S15]
Additional speakers:
– Professor Suresh – Mentioned in the opening remarks as a professor invited to the stage; no further details provided.
– Bharat Jain – Listed in the panel lineup; no title or affiliation specified in the transcript (possibly a mis-transcription of “BharatGen”, the consortium Professor Ganesh Ramakrishnan leads).
– Bhaskar Gorti – EVP, Tata Communications (listed in the introductory panel lineup).
The session opened with a formal inauguration of the Sovereign AI Research Report produced by Amrita Vishwa Vidyapeetham. Speaker 1 thanked the audience, introduced the report’s release and invited senior representatives from Amrita – Pro-Vice-Chancellor Dr Manisha V Ramesh and the head of the AI-Safety Research Lab Dr Shiva Ramakrishnan – to the stage, followed by the panelists: Professor Ganesh Ramakrishnan (IIT Bombay), Bharat Jain, Sunil Gupta (Yotta), Bhaskar Gorti (Tata Communications), Kalyan Kumar (HCL Software) and Brenno Mello (GenSpark.ai) [1][3-5][12-18].
Ankit Bose, head of AI at NASCOM, opened the discussion by noting the successful launch and asking each participant to identify the single most important action India should take to build sovereign AI capability for the nation and the Global South [8-13][44-45].
Compute scarcity was identified as the primary bottleneck. Sunil Gupta explained that, although India possesses talent, data and a massive market, it lacks the specialised GPU-based compute required for modern AI. He framed the core problem as “how do you make compute available in an abundant way so that it becomes a hygiene factor” [46-60]. By the time of the panel his company was operating “almost 10,000 chips” and had trained the majority of the sovereign models now being released [66-68]. He warned that India will need multiple million GPUs to support both training and inferencing at scale [75-78]. To address this, the government has created a shared-compute facility that aggregates capacity from multiple providers, currently totalling about 38,000 GPUs, with an additional 20,000 announced [224-236][237]. Gupta stressed that this facility must be expanded to “50-60,000 GPUs” and ultimately to “millions of GPUs” to meet the demands of a billion-plus user base, especially as AI in India will be largely voice-first and accessed on low-end devices [239-258][244-247].
Turning to the data stack, Kalyan Kumar highlighted HCL’s acquisition of vector-DB technology (Actian’s Ingress engine and a Dutch CWI asset) and announced a forthcoming “localized vector AI engine” designed for edge deployment [96-104]. He argued that “the data platform is going to become very important” because AI applications will only scale if they are built on high-quality, well-catalogued data, with data products, contracts and metadata forming the foundation for trustworthy AI [105-108]. He also emphasized the need for a skill shift from coders to engineers and outlined the joint venture with Foxconn – India Chips Limited – to build a 16/32 nm fab, describing it as “patient capital” that will secure future compute capacity even though the fab will take five years to become operational [266-292][441-447].
Professor Ganesh Ramakrishnan expanded the discussion to interoperability across the entire AI stack. He asserted that “interoperability at every layer encourages participation” and enables alternative solutions, scale-out architectures and the ability to balance fidelity, latency, sensitivity and specificity [151-160]. Ganesh linked interoperability to a data-product ecosystem, proposing “data catalogs and data contracts” that respect the creator’s rights (the principle “jiska data uska adhikar”, whose data, their right) and facilitate secure sharing [165-176][178-181]. He illustrated the concept with his own consortium of nine academic institutions, which co-designs models such as a 22-language speech-to-text system using a mixture-of-experts architecture, thereby creating a “voice-first, multilingual AI” that can run on feature phones [191-209][212-213][244-247].
Brenno Mello shifted the focus to adoption barriers. Citing an MIT report, he noted that “95% of AI pilots never make it to real production” and identified three systemic obstacles: “ROI invisibility” – CFOs cannot quantify returns, leading to stalled pilots; “data-trust and compliance friction” caused by departmental silos; and the “champion problem” – a lack of executive sponsorship [119-124][129-134][139-142]. He suggested that clear ROI metrics, streamlined governance and dedicated champions are essential to move projects beyond proof-of-concept [125-128][135-138][140-142].
Summarising these insights, Ankit emphasised that closely collaborating teams with a single point of view and executive sponsorship are required to overcome adoption challenges [144-146].
The conversation returned to the theme of compute as a shared national commodity. Ankit asked whether the country could treat compute like a utility, with the government coordinating providers to ensure low-price, abundant access [215-222]. Gupta affirmed that the current empanelment model already creates a “shared-compute facility” and argued that the government should also subsidise the first inferencing cycle of sovereign models to catalyse early revenue-generating use cases [236-242]. He reiterated that India creates and consumes “20 % of the world’s data” yet only “3 % is hosted in India”, underscoring the urgency of domestic infrastructure [254-257].
Skill development and the shift from services to indigenous IP were addressed by Kalyan. He traced HCL’s evolution from a service-oriented firm to a product-builder, noting the need for “engineers, not just coders”, and for research in fundamental science such as quantum computing [266-292][298-306]. The India Chips Limited joint-venture was presented as a long-term investment in domestic compute capacity [441-447][450-453].
Complementing this, Ankit described NASCOM’s ambition to up-skill 150,000 developers within six months, rewrite B.Tech/M.Tech curricula and introduce specialisations that produce “smarter engineers” capable of building sovereign AI [312-320][326-327].
Across the panel there was strong agreement on several pillars: (1) the necessity of massive, affordable GPU compute delivered through a shared-compute facility; (2) the importance of a modern, interoperable data stack with provenance, catalogs and contracts; (3) the need for collaborative, co-design ecosystems that span academia, industry and government; (4) the vision of a voice-first, multilingual AI serving billions; and (5) the imperative of human-in-the-loop, ethically aligned AI [46-60][96-108][151-168][191-209][244-247][371-376][385-386].
Rather than a conflict, the panel highlighted complementary perspectives: Gupta emphasized expanding the shared-compute facility, while Ganesh stressed that interoperability and scale-out architectures are essential to reach India’s diverse population [224-236][151-160]. On funding, Gupta called for government support for the first inferencing phase, whereas Kalyan focused on talent development and the India Chips Limited venture rather than direct subsidies [236-242][266-292][441-447]. On talent strategy, Ankit’s mass-upskilling plan contrasted with Kalyan’s call for a smaller, highly skilled engineering cohort [312-320][287-292]. Regarding data, Ganesh emphasized ownership, data catalogs and contracts rather than monetisation [165-176].
The panel concluded with a call to action. NASCOM announced the forthcoming “7 AI” initiative, a draft national sovereign-AI and AGI roadmap (accessible via a QR code), and the signing of an MOU between Amrita Vishwa Vidyapeetham and NASCOM to deepen collaboration [414-424][426-433][454-455]. Participants were urged to provide feedback, stay for a group photo and continue the dialogue.
In summary, the consensus was clear: India must combine massive, affordable compute, interoperable data infrastructure, skilled talent, and coordinated public-private partnership to achieve sovereign AI for the nation and the Global South.
Thank you. Thank you. Hello and good afternoon everyone. Thank you for joining us for this session on sovereign AI for India. Before we begin the panel discussion, we are happy to announce that there will be a launch of the sovereign AI research report by Amrita Vishwa Vidyapeetham. May I invite the following representatives to kindly join us on stage for the release of the report. From Amrita we would like to invite pro vice chancellor Dr. Manisha V. Ramesh and, if available, head of the AI safety research lab Dr. Shiva Ramakrishnan, and any other representatives from Amrita Vishwa Vidyapeetham that you would like to invite on stage, sir. Alright. Professor Suresh, if we could please have you on stage. I would like to invite Mr.
Ankit Bose, Head NASCOM AI, on stage as well. Thank you so much. Yeah, yeah, absolutely. You can take a seat sir if you want. Thank you. Thank you. Thank you, everyone. We now move into the panel discussion. To guide this conversation, we are joined by Mr. Ankit Bose, head of NASCOM AI. Joining him today are our distinguished panelists, Professor Ganesh Ramakrishnan from IIT Bombay and BharatGen, Mr. Sunil Gupta, co-founder, MD, and CEO of Yotta, Mr. Bhaskar Gorti, EVP, Tata Communications, Mr. Kalyan Kumar, CPO, HCL Software, and Mr. Brenno Mello, founding GTM executive, GenSpark. Ankit, over to you. Professor Ganesh will be shortly joining us in two minutes. Thank you.
So hi everyone, I think we had a good launch and we have a very strong panel. Ganesh was on the way and he is still stuck in the traffic; he is walking in. So meanwhile we start the discussion. I think, you know, happy to have a very strong panel. So why don’t we do this, we start with the introductions, right? I think Kalyan, we can start with your quick introduction. Then Sunil and then Brenno.
Yeah, hi, Kalyan Kumar, call me KK. I run the software product business for HCL, HCL Software. We are the largest India-headquartered enterprise B2B software company, with about 10,000 customers and about 1.5 billion dollars of revenue. And we are very intricately involved in building software products which are sovereign by design.
Hello, good afternoon. Good afternoon. Good afternoon. My name is Sunil Gupta. I am co-founder and CEO of Yotta. So we run data center campuses. We have built Sovereign Cloud in India, which is running a whole lot of mission-critical Government of India applications. Recently, we migrated Bhashini from a hyperscale cloud to our Sovereign Cloud. Our claim to fame in the last two years is that we have got thousands of NVIDIA GPU chips in India. And all the models which you are hearing getting launched in this summit, the Sarvam model, IIT Bombay’s BharatGen model or the Socket model, they all have been trained on our GPU clusters, and now they are being made available for public use.
Thank you.
Hello. Good afternoon. My name is Brandon Mello. I work for Genspark.ai. We have been around for about 10 months. We are the fastest-growing AI company right now in the world. We just broke $200 million in ARR. Our solution has been incredibly well-received and adopted in the market. It is our third largest market, and our solution is to drive adoption from the bottom up by bringing agentic AI to the knowledge worker. Thanks for letting me be here.
Great, great, great. And hi, folks. I’m Ankit Bose. I head AI for NASCOM. So, whatever NASCOM does in AI, I support that, I lead that, right? And we will be joined by Ganesh, who is from BharatGen. He’s leading the, you know, sovereign AI model building effort in the country, right? So, meanwhile, until he joins, let’s start. I think, Sunil, let me start with you, right? The first question I would want to ask after five days of immense brainstorming around, you know, AI for the country, AI for the world, right? You know, what is the top thing you say which, you know, India has to do, right, to build its sovereign capability, not only for the country but also for the Global South?
Yeah. Ankit, if I take everybody just two years or maybe two and a half years down the line, when ChatGPT got on the world scene, basically AI capability came into consumer hands. A big debate happened in India, obviously in government circles, industry circles, telecom circles, technology circles, everywhere: that while India has got everything which is needed to succeed in AI. Like, we have been software and services leaders for the last three decades. We have a startup ecosystem. On the skill set index of mathematics, science, engineering, we are always the best. As a market, we are literally close to 1 billion people carrying smartphones, creating and consuming content. AI ultimately resulted, in most of the cases, you know, in some apps which will be giving some productivity to us.
So both on the demand side and the supply side, including data sets, India will have the best data sets available. So everything India has, but what India was not having at that time was compute. Because AI does not run on regular data centers or regular CPU compute; it requires specialized GPU compute. So I would say that the biggest problem, and of course you have to take care of the entire stack (models, data sets, applications, everything), but the core problem to solve for taking AI to the masses was: how do you make compute available in an abundant way so that we don’t think of it. That should become just a hygiene factor which is always available.
And that’s the problem we tried to solve. You know, way back at that time Jensen was in India. I happened to meet him and he said, we as NVIDIA are very committed to India. We can extend your priority allocation, we can give you engineering support, everything. But somebody has to take a step forward of not only putting in your data centers and power and everything, but you also need to put in chips, and we will give you everything. And from there to now, today we are running almost 10,000 chips. You know, as I said, the majority of the sovereign models which you are hearing getting launched in India, they have been trained on our GPUs.
But the real thing I would say is starting now. Many of these models are great; you must have heard of the Sarvam model beating Gemini and ChatGPT on many of the benchmarks. And they are making them absolutely for India use cases, like OCR, you know, the handwritten notes and all that, how do you convert them and all that stuff. So these are real India purpose-built use cases and models. When they start scaling, when they start getting adopted by the masses: we have seen how one UPI changed our lives. Imagine we have a UPI in 50 different sectors in the country; a 50-UPI movement will come into India. At that time, the number of GPUs required will be in the millions. Today we are happy as a country that we have X thousand GPUs.
But if you as a single company like SpaceX or like Meta can have 1 million GPUs, India as a country requires multiple million GPUs. So while we are working on all the upper layers of the stack, and Indians are very good at that (models, data sets, applications), we need to solve this issue. We are taking care of infrastructure problems. We are taking care of railways and roadways and airports. We also need to create this digital infrastructure, take care of that, make it available abundantly to every startup, every, I would say, academic community. We make it available at a very low price. The Government of India’s IndiaAI Mission is playing a huge role. On one side, they have asked people like us, incentivized us, to invest in the GPUs.
But they are taking GPUs from us, putting in their own money, putting in their own subsidy, and then giving it to the Sarvams and IITs and sockets of the world. And they say: now you don’t have to bother about money, just go and make India’s flagship model. And the result is there to see: in two years, India has come a long way, and we have a long way to go. The compute problem has to be solved.
Great. Thank you. Thank you, Sunil. Same question to you, KK. You know, what is the one thing you feel can add the edge, right? The whole.
When you look at sovereign, I think the Minister of Electronics and IT, Vaishnaw ji, was mentioning the five-layer stack, right? And that’s where, what Sunil mentioned, for an easier way I use the word infrastructure, which can combine energy, power, cooling, the whole stack. So that’s providing that layer. And then the whole model piece. I think as you train, and when you start to deploy at scale, a couple of things become very interesting. You need to start to also build a data stack: data platforms, vector DBs, edge vectors. I personally think you can do as much centralization as you want, but the way the data consumption model is going, it is going to get highly distributed, going down into the edge, correct? So you need a very different kind of inferencing and those capabilities. So you need a data layer. Something which we are doing is very interesting: outside of Oracle and IBM, the only other company which has all the patents for database is HCL, because we acquired Actian.
So Actian owns the original patent of Ingress. And every derivative today, whether it is Postgres or any one of them, is basically an Ingress query processor derivative, including SQL Server and others. Like that, we also acquired an asset from CWI in the Netherlands. So we have a VectorDB, the original vector engine. So we’ve been building a lot of that asset portfolio, HDB, and in April we’re going to release a localized vector AI engine, which again can run on the edge, because AI PCs are becoming more and more common and the edge is becoming more and more important. So building that, and building the data disciplines. I think that’s a very important layer. A lot of times what happens is we worry about infrastructure, and then we think about model, and then app.
The data platform is going to become very important, because as we’re building the data platform, the enterprise will only scale if you get your data-centric approach, data products, data contracts, data catalogs and those kinds of things. Because finally the AI use case is going to be built on how good the quality of your data is. Yeah.
Great point. I think compute, data, the data stack for the country, very important. Let me come to Brenno. Again, the same question, right? If India has to build sovereign AI for the country and the Global South, what’s the top one thing you will say which will help the whole cause?
Yeah, so it’s interesting. MIT last year ran a big report and they said 95% of AI pilots actually never made it to real production, right? So in my point of view, this is never really a tech problem. It’s really a production problem, right? So in my point of view, actually, when I look at our solution, right, we are able to deploy over thousands of companies in only eight weeks, right? So when I look at that, it really comes down to three reasons why this is happening in the industry, right? And the first one is what I call ROI invisibility, right? So when you look at companies right now, it’s really easy to get a budget for a pilot, right?
But what comes to the reality is can they get a budget to get the project done, right? So the data that I have to share with you guys, which is astonishing, is a third of CFOs really nowadays, they cannot quantify ROI inside of their organizations, right? And only one out of ten can actually have tools that can actually measure ROI, right? So. What ended up happening is whenever you talk to those organizations. right? Companies, and you ask, like, how are you actually going to measure productivity gains or how are you going to, like, they don’t have the answer, right? So it ends up, like, what’s the baseline? Like, they don’t have the answer, right? So whenever you bring to, like, the CFO to get that project approval, ends up on the project never getting approved and ends up on that cycle of, like, it ends up getting stuck into a pilot, right?
So when you look at, you know, number two, I think it’s data and trust and compliance friction, right? I think there’s a huge red tape in terms of what happens inside of organizations, right? I think that it’s very departmentalized, where each part of the organization is trying to solve for its own department, right? So when I look at IT, it’s trying to solve for IT. Procurement is trying to solve for procurement. Because no one’s really trying to solve that as an organization, the project ends up stalling. So something that can essentially take a few months to resolve ends up taking six months to a year.
And like I say in sales, time kills every deal. Last but not least, I think my third point is the champion problem. I think there’s a severe issue within organizations nowadays is there’s really no executive sponsorship. And whenever you don’t have executive sponsorship, especially for AI opportunities, deals never get approved. And people, especially at the bottom tier, they don’t understand what’s going on. And when there’s no clear alignment within the middle tier management, deals never get approved.
Great. I think let me summarize probably the three points: that, you know, you need closely collaborated teams, right, with a single point of view and executive sponsorship. I think that will solve the adoption piece at least, right? Let me come to you, Professor Ganesh. Ganesh, I think what we have been discussing for the last five days is AI for India, for the globe, you know, and then we had three points of view. I asked them, give me the one top thing. You heard probably from Brenno and KK and then from, you know, Sunil. What is your top one take on what India should do so that we can lead the sovereign AI race for the country and the globe?
I would suggest interoperability at every layer. I think it was also alluded to by earlier panelists. Interoperability encourages participation, and in the words of the PSA, if you were there in our BharatGen session, the vision is meaningful participation, right? Interoperability also helps you present alternatives, because there is no one-size-fits-all, and you need to ensure that in the trade-off between fidelity and latency, or between sensitivity and specificity, you are able to find the right sweet spot which is suitable for you; you can pick something that is appropriate. Just on a lighter note: I was driving from the PSA office and there was such a traffic jam, which most of you experienced, so I exercised my sovereignty and I started walking. So you find alternatives when you think sovereign. Three kilometers, that’s why I was late. So there are alternatives, and also much better provisions for human participation. There could be places where AI could be substitutional, but many other places where you may want it to be just supplementary or complementary.
So alternatives is another thing that interoperability provides for. And I think the very key is scale-out. I mean, if just by scaling up we could cater to everyone, great. I would say that at least it checks one box, which is people being catered to. But we are not even there; scaling up is not going to cater, the capabilities are not there. But even if it were, hypothetically, I think participation would also ensure that people are part of the process, that it’s informed. I mean, in BharatGen, I take pride in one of our consortium members, IIM Indore. We are a consortium of nine academic institutions. And in the Institute of Management, what are they doing? They do a fabulous job of going to many of the second-tier cities, going to people who have data, and engaging in conversations, education.
That data is an asset, and you could actually transform that asset into IP generation, not just source data. So dialogue and informed decision-making are where participation is encouraged when you have interoperability. Let me add to what he said; he made a very interesting point. How do you monetize data, correct? And this needs a very different approach, because today you are just sourcing data. I think the PM yesterday made an amazing statement: jiska data, uska adhikar, whose data, their rights. Very interesting. What he is saying is that the creator of the data, the producer of the data, and the consent provider for its use all have a role to play, and that is why I have been using this term: a data product, or a data catalog.
So you need a catalog first, you build a data product, and then you set up a data contract, which is fundamental for interoperability. Because if that gets solved, I can take my own personal data and say: here is my data catalog, you can access these five things. India has proven this in an amazing way with identity and payments, so I think we can actually set up an environment where you can really build this. And the data benefactor is also the same person. Great point, Professor. It probably means removing or optimizing the various layers and taking this to the last person in the queue, and it will help scale to the 1.4 billion that we need.
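To make the catalog, product, and contract idea concrete: a minimal sketch in code, with invented names and fields rather than any real data platform's API. A data product records which fields its owner has consented to share, and the contract releases only the intersection of what a consumer requests and what the owner has consented to.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A catalogued data asset; the owner remains the benefactor."""
    owner: str
    records: dict                          # field name -> value
    consented_fields: set = field(default_factory=set)

@dataclass
class DataContract:
    """The agreement a consumer signs against a catalog entry."""
    consumer: str
    requested_fields: set

def serve(product: DataProduct, contract: DataContract) -> dict:
    # Release only fields the owner has consented to share.
    allowed = contract.requested_fields & product.consented_fields
    return {k: product.records[k] for k in allowed if k in product.records}

# The owner consents to share only crop_yield, not name or phone.
me = DataProduct(owner="asha",
                 records={"name": "Asha", "phone": "98xxxxxx", "crop_yield": 4.2},
                 consented_fields={"crop_yield"})
deal = DataContract(consumer="agri-app",
                    requested_fields={"name", "crop_yield"})
print(serve(me, deal))  # only the consented field comes back
```

The point of the sketch is only that consent is enforced at the contract boundary, before data ever reaches the consuming application.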
Thank you for that. Let me ask you a second question, and this is a very direct question. As a country we are building our foundation models, and you are one of the people building foundation models for the country. At large, we have built sub-500-billion-parameter models, while globally they are going to 5 trillion plus. The gap is huge, right? What do you think India's moat can be when we are at such a disadvantage, which we have to handle aggressively? Yeah, so the other important takeaway, which probably addresses part of what you are asking, is cooperation, right?
Collaboration. Collaboration, honestly, is not just a transactional process; it begins here, with the will to understand the other side. I just published a book, Informatics and AI for Healthcare, with my colleague Shetha Jadhav. Across the entire book I tried to empathize with the whole life cycle of a healthcare practitioner, and we tried to map every ML and informatics example, every parsing example, into healthcare terms, and vice versa; there was reciprocation from the other side as well. It was a very interesting exercise, and I think that is how co-design happens. So collaboration is actually how you do innovation, and China has shown in many ways, in contrast to the US ecosystem, that co-design can lead to very innovative ideas. Co-design is often lacking even at the level of algorithms and infrastructure, where new algorithms can come up, all the way to the application layers. Collaboration also comes from creating an ecosystem where people can participate. Since you alluded again to BharatGen: we have a consortium of nine academic institutions, and the whole collaboration runs through a Section 8, not-for-profit company, which engages with for-profit entities as well as with the academic institutions. Sixty full-time employees work with a hundred-plus researchers and master's students; it has been a very profound exercise in a very short span of time. We may say we are late, since you brought up the landscape outside, which is 1-trillion-plus parameters, and that is also our North Star: at least from the IndiaAI vision, our goal is to get to at least 1 trillion parameters. But even in the 17-million-parameter model we have released, a lot of research due diligence has gone into the architecture choice, and we are very proud of whatever model we released. Ensuring that, if you have two shared experts, one of them is catering to languages and mixed code while the other is catering to domain: that due diligence was done based on the Indian context. The fact that we covered 22 languages in our speech model, the text-to-speech model: again, all of that is based on explicitly capturing the common phonetic vocabulary of Indian languages. And that is only possible through this process of empathy.
I mean, a linguist has to empathize with the computer scientist and vice versa. If we do that, we can actually create magic, believe me. We just have to break our silos, and the biggest silos are sitting right here. In fact, an endorsement of this came when we built our LLM-enabled speech-to-text model. We had a projector layer which projected from speech to text, and we used a mixture of experts for the projection. It was very interesting: the experts for Hindi and Marathi performed very similarly; in fact they were the same expert, the expert got shared. Whereas for Telugu, there was collaboration between the Hindi and Tamil experts. So data and domain knowledge are actually reinforcing each other.
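The shared-expert behaviour described here can be illustrated with a toy mixture-of-experts layer. This is a deliberately tiny sketch with made-up dimensions, not the consortium's actual architecture: one expert is always active (shared across all inputs), while a small router picks one additional specialist per input, so specialists can end up serving related languages.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_specialists = 8, 3   # toy hidden size; specialists beyond the shared expert

# Tiny linear "experts": experts[0] is the shared expert, always applied.
experts = [rng.standard_normal((d, d)) for _ in range(1 + n_specialists)]
router_w = rng.standard_normal((d, n_specialists))  # scores the specialists

def moe_layer(x: np.ndarray) -> np.ndarray:
    # Softmax over specialist scores, then route to the top-1 specialist.
    logits = x @ router_w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    top = int(np.argmax(probs))
    # Output = shared expert plus the gated contribution of the chosen one.
    return x @ experts[0] + probs[top] * (x @ experts[1 + top])

x = rng.standard_normal(d)
y = moe_layer(x)   # same hidden dimension in and out
```

In a real model the experts are MLP blocks and the routing is learned end to end; the sketch only shows the structural idea of one always-on expert plus routed specialists.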
So this is actually a time when we can break the language barrier. In my interaction with him on 8th Jan, I gifted him a book from our consortium called Samanway; Samanway stands for bringing all languages together. And he said: we need to use AI also to show the strength of India; it is not just AI for India, but AI by India. Great, great. I think the point about collaboration, and the story we have all heard, that a single stick breaks but a bundle of sticks does not, is very true, and that is the moat for India: collaboration, building that collaborative effort between different universities. Bringing nine different universities together to work is gigantic, and what you have created is amazing. Also, we are very happy that three days back we announced an MOU with our heritage foundation sitting in the US; we got a lot of support from people in the Bay Area. So once you open up for collaboration, you will find there is support from around the world, and that is the most important thing. Great, great, great.
Thank you, thank you, Professor Ganesh.
So, Sunil, let me come to you. I think we all agree that compute is one of the biggest pillars, and the government is doing its bit. But in terms of compute for the country, for the community, can it be a shared commodity, something that the different actors of the country's ecosystem come together and build? How do we solve that problem? Because, as you rightly said, it is a few thousand versus a few lakh GPUs; the gap is very high.
Number one, the government said: you all come and empanel with us at the right price point and the right quality, and you declare how many GPUs you can give. They were not forcing us; they said, you decide how much you want to give. We all got empaneled and contributed GPUs, which were made available to startups. Then the government said: every quarter we will come back, encourage new providers to come up with capacity, and let existing players top up their capacities. And each time, because of market forces, as quantities and supplies increase, the pricing starts reducing. The government says: if a new player comes in, they can reduce the price.
Existing players have to match, and the government keeps empaneling more and more capacity. That is what has resulted in the 38,000 GPUs the government is talking about: the shared compute facility, which is nothing but a combination of the compute capacity created by multiple providers like us. And yesterday the Prime Minister announced that 20,000 more are being added to this facility. So I would say that, as a concept, the last 18 months have proven this is doable, and so is the technology. Technically it is possible that the same model gets trained across multiple different clusters, and Ganesh-ji can talk very authoritatively on this subject; inferencing, of course, you can do in multiple different places. But even if you do not do that, what the government did was very democratic: okay, IIT, we will put you with this service provider; okay, Sarvam, we will put you with this service provider; okay, Gan, we will put you with this provider. So the government is democratically making sure it encourages industry to invest in creating this capability, which is required; and we, because we are getting business, are scaling up and investing more and more; and then they make it available to people. Because India needs its own models. We may use frontier models for certain purposes, but as the minister was saying, 95 percent of the country's use cases can very well be served by a 20-billion to 100-billion-parameter model. Of course, Ganesh-ji is carrying a mandate to create a trillion-parameter model as well, which the country also requires; why should anybody else do it for us? BharatGen's success and Sarvam's success have proven that India can do it. So I would say the shared compute framework which has been built is proven; we just need to scale it up. And my request to the government, which I think they are acting on,
is: don't limit it only to training of models. Model training is one step, done; now these models will be going to the masses for adoption, and you require millions of GPUs. I know I am repeating myself, but that is where the government needs to fund the first cycle of inferencing on these models. When users start adopting, say, an agriculture use case, a healthcare use case, an education use case, or whichever UPI-like use cases come up, it will take time for users to start adopting them, accepting them, making them a part of their lives. Only then will users be happy to pay 10 paisa per transaction, or maybe a 50-rupee-per-month subscription, and only then will these models and use cases become self-sufficient in generating revenue; until then they will need government support. So at least for the first cycle of inferencing, maybe one or two years, the government should support not only the funding of model training but also the first phase of inferencing on these models, so that adoption happens and revenue models emerge. After that, the government can say: let the private sector invest, and go back to its original role of regulator.
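The subsidy argument can be made concrete with a back-of-envelope calculation. All cost and throughput numbers below are illustrative assumptions; only the 10-paisa-per-transaction figure comes from the discussion. The point is that at low utilization during early adoption, cost per transaction exceeds what a user would pay, while at healthy utilization it drops well below it.

```python
# Back-of-envelope: when does serving an AI use case pay for itself?
# Assumed figures (illustrative only):
gpu_hour_cost_inr = 150.0        # cost of one GPU-hour
tokens_per_second = 2000         # serving throughput of that GPU
tokens_per_transaction = 500     # size of one user interaction

def cost_per_txn(utilization: float) -> float:
    """INR cost of one transaction at a given GPU utilization."""
    served = tokens_per_second * 3600 * utilization / tokens_per_transaction
    return gpu_hour_cost_inr / served

revenue_per_txn = 0.10           # the 10-paisa figure from the panel

early = cost_per_txn(0.05)       # sparse traffic during early adoption
scale = cost_per_txn(0.80)       # healthy utilization once adoption picks up

print(f"early adoption: {early:.2f} INR/txn")
print(f"at scale:       {scale:.3f} INR/txn")
```

With these assumed numbers, an early-stage service at 5% utilization costs roughly twice the 10 paisa a user might pay per transaction, which is exactly the gap a first-cycle subsidy would bridge, while at 80% utilization the cost falls an order of magnitude below it.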
Great. So let me augment that with a few thoughts. The IndiaAI Mission has really created a single spark, and this fire is going to every state in the country: all 28 states and all eight union territories are building AI CoEs, and the mandate for each CoE is to provide compute. Like a small wildfire it will spread all across the country; it will be phenomenal. But at the same time we have to keep up the pace; the one thing is pace.
Absolutely, Ankit. This is something I know well: two years back, when I said I was putting in 8,000 GPUs, everybody started laughing, because we were starting from a base where India did not have GPUs. Today we comfortably say India will be going to 50,000-60,000 GPUs, but even today I can tell you India requires millions of GPUs. In the US, just three or four deep-tech companies collectively own millions of GPUs. India has 1.4 billion people, of whom 1 billion carry smartphones, creating and consuming content every single minute. And, as Ganesh-ji will tell you, India's AI will be voice-based: people talking in their own native language, or a mixture of Hindi, English, everything.
And they will be comfortable doing that instead of writing in their native language on a screen, which is not so easy. Innovations are being done so that even from a feature phone or a regular telephone line, not a smartphone, you will be able to talk to an AI model at the back end. When you are talking about 1.4 billion people coming into the AI fold for multiple use cases, just imagine the number of GPUs that will be needed for inferencing, and how many GPUs will be needed for training multiple sectoral models. So you are right, Ankit. What we have done in the last two years is kudos to the whole ecosystem, to the government and to all of us.
But we need to keep building for the next 7, 8, 10 years. Just to give one or two more data points: India is creating and consuming 20% of the world's data; one-fifth of the world's data is created and consumed by India. Only 3% of that data is hosted in India. That shows the huge scope of the infrastructure India needs to build, both at the physical data-center level and in terms of compute or GPUs. Because we don't want any single country or any single company to start dictating our digital destiny; we need to be as sovereign as possible.
Thank you, Sunil. Kalyan, let me come to you. Kalyan, one big base for sovereignty is the skill set to research, develop, deploy, and to do all of that responsibly. HCL is one of the companies that has done this over the last two, three years. What would be your nuggets on how other companies, other players in the country, and other countries can do the same?
So look at what India is known for: capability, historically. NASSCOM, right? But that capability was historically, and for the majority of the business, capability for hire: you build capability to build things for others, and that has been the core business. If some other country thinks about sovereignty today, 50% of the world's tech, engineering services, and development operations talent is sitting out of India; you see it in the GCC growth. But where is the pivot? The pivot, I think, is what the Professor was talking about: you have to pivot towards build. We have always been more towards service. So build, research, develop, own your IP, and make India build for the world.
I think that is very important, and that is what our journey has been. What we did starting in 2015-16 builds on one advantage: we are a company run by a single majority shareholder. Mr. Nadar had a very ambitious vision. He said: we are building products for others, we should start building for ourselves. That was 2015, a very conscious strategy, and he realized that if you want to play in the global market, you need market permission and market access, because people will only buy from you if you are a software product company. Hence the whole idea of acquiring intellectual property for India. Because if you really look underneath these pieces, you could build on open source and other stuff, but suddenly some of these open-source companies are getting acquired and becoming closed source.
This is becoming a very interesting pattern, and some of these technologies are suddenly getting classified as dual use: suddenly they will say, oh, this is dual-use tech, so I can only release this much. So from a skill standpoint, and I am making a very controversial statement here, you need fewer, smarter people. You need engineers more than coders. What is happening is that we are producing coders; you need engineers, people with systems thinking, people with a research bent. I meet MBA students and ask them, what did you do? "I did engineering." Then why did you spend four years of your life on it, if you just wanted to go and do an MBA? Why are you not going deeper?
Why don't you specialize in a domain? Those are fundamental things, I would say. And the big leap, I think, is something India can solve very interestingly, as he was referring to the PSA: quantum. Because given the kind of compute needs we have, and looking at the energy GPUs consume, you could completely change the computational paradigm. But that needs fundamental science, research, physics, and nobody wants to study physics; for the last 20 years in this country, everyone wanted to go and do coding. So those are the fundamental skills. What we are doing, in a very small way, is acquiring and building talent and research pools.
So 50% of HCL's software product engineering is in India, but my second-largest engineering center is in Rome; the third is in Israel; then Perth, Austin, and Chelmsford outside Boston. Why? Because if global companies can come to India, acquire talent to build and research, and then take the IP to the US, I am doing the reverse. Take AppScan, which is a code security product: the security heuristics are built in Israel, the SaaS UX is built in Boston, the core engineering is in Bangalore, but the IP is registered in India. That is the very different way we are moving: we are now tapping global talent to build for us. We are still a billion and a half; we are not big, but we are in 130 countries. It is a step change and a long journey, and it means getting away from short-term thinking, from hiring people just to get things built. You have to go to a very different model, and that is what we are doing within the larger scheme of HCL. I think we are walking the right path: we are continuously acquiring assets and building.
So let me add what I am seeing at the skill level. The persona NASSCOM is focused on is the developer, and the way we code is changing, so NASSCOM has made a concentrated effort to help developers learn the new way of coding and redefine the whole SDLC. As a target, my team and I have taken on enabling 150k developers across the country in the next six months: make them AI-enabled, AI-ready, and help them unlearn and learn the new way. That is one thing. But finally, and I should make everyone aware, there will be announcements sometime soon.
Together with the ministry and the education industry, we are rewriting the whole technical curriculum: BTech, MTech, MCA, BCA. We are adding more specialization, as was rightly said, because we need specialists, not generalists. An engineering student studies 48 subjects in four years; at the end, what is he specialized in? It is down to luck: the group he gets, the project he takes, whatever job he lands. That is what we are changing; announcements will be happening soon, but that is what is going on in the background. Coming back to you, Breno: you have a product so simple that anyone can use it and build agents with it.
And get benefit from it. Let me ask you this: for AI to really mature and have impact, the one big piece is adoption, right? And you started with the statistic that 95% of projects fail, or don't go to production. So if we have to do adoption at scale, what are the top issues you see, and what pointers do you suggest the companies and folks here can take to mitigate them?
Yeah. I will give you three; one is very specific to India, actually. These are relatable to our solution, but I think they are real use cases, because, like I said, the proof is in the pudding. One: you have to solve a real use case, something that actually changes people's lives. AI is complex, and people are still trying to figure AI out, so it needs to be something that fits into people's everyday life. In our case, for example, look at Cursor or Lovable: with vibe coding, they changed the life of software engineers. Here at GenSpark, we looked at people producing office work.
People producing Excel, PowerPoint, essentially any mechanical everyday office work. Because if you think about it, every time you do an office task, all of that work is very mechanical, and that is why we saw this massive growth in our solution. So to your point, adoption comes from something that can change people's lives in a very simple way. The second thing is consolidation of tools. From the time we wake up in the morning, most of us pick up our phones and are inundated with messages and apps, and then we go to our office work, where we have probably a hundred tools to touch. We actually did research on this: at work, people waste on average two and a half hours a day just flipping between different solutions, and that causes loss of context. So if there is a way to consolidate tools, that also drives adoption.
The third one, especially in India: there are a lot of different languages in this country, which you brought up. LLMs are really struggling to get the language right here, especially with all the different dialects this country has, so being able to really naturalize the models and bring sovereignty here is very important. And last but not least, people are very scared about data: once they bring data into AI, how is that data going to be treated? So the solution needs to bring that sense of security about how the data will be managed.
Great, thank you, Breno. For the last segment, a last question, 30 seconds each, again starting with you, Breno, since you have the mic. AI is not a short game; it is a game for the next five years, ten years, decades, probably centuries. What is the challenge we as humanity have to mitigate, so that we don't end up aligned with something that is hazardous to us?
Yeah. Actually, I was having breakfast the other day and the person serving me asked me the exact same question. I think it is how human beings interact with AI. We are still trying to figure out how to properly interact with AI, and with the speed at which AI is evolving, we are still uncertain how to manage that. The line in the sand moves so fast that we cannot really catch up, and no one yet really knows how the interaction between AI and us should work.
So let me map the earlier part onto this: very specific uses of AI for yourself, to make your life simpler, will drive AI skill adoption, and we have to build processes for interacting with AI in the long run, because AI is changing and things are changing. Thank you, Breno. Coming back to you, Professor Ganesh: same question, 30 seconds. What is the challenge you see if we build something that is not aligned?
I think the biggest challenge in not making AI aligned is that we will become products, not even consumers. We want to be at the steering wheel. I remember, very fondly, my first machine translation paper: I called it machine-assisted human translation. Obviously that would sound too regressive now, but the key is provenance. How can you leave provenance at every step in the stack? Whether it is data aggregation, which again is aligned with the ecosystem (you need an ecosystem to leave provenance on the data part), metadata refinement, data curation, provenance at the level of training and tokenization, or provenance in observability, the other keyword, at the level of how the model performs.
Models should be glass boxes, because that gives you enough breathing space to decide where you should yield to existing practices versus your own. If you don't have that view, if the recipes are not made available, if the education is not there (as a prof I always focus on the education part), I think we will become products.
Thank you, thank you. Sunil, you and then Kalyan.
No, I concur with those views: at the end of the day we should not do AI for the sake of doing AI. It is a means to an end, and the end purpose is benefit for the masses. I remember seeing a YouTube video when the Prime Minister met all the startups, and Professor Ganesh was there, and the Prime Minister said to everybody: don't make AI to make toys; use AI to benefit the masses on the real problems they face in their real lives. That is where the name of this event, the Impact Summit, also comes from. And yesterday he also made the point that, unlike previous summits where we were too concerned with security and governance (which are things to be done), at the same time: don't just be afraid of AI; with AI you can make your own destiny, your own future.
So the question is how we create an impact with AI, how we benefit the masses, and also how we ensure machines do not end up dictating our lives; as I would say again, we should not end up becoming the product ourselves. However much AI improves, it will possibly never reach a stage where it acquires human emotions, our sense of gut, our sense of culture, what we convey with our body language and not just our words. So human-in-the-loop, and humans remaining the masters of AI, is something we will have to safeguard all the time.
Interaction, don't become the product, keep development human-centric. Kalyan?
I would break this into four key areas. The Professor mentioned consumer AI, so I will break it into consumer, enterprise, government, and critical national infrastructure and defense, because all four are going to play. Consumer AI: you are the product, unfortunately. You now have to use data control to decide how much you give to get; it is a give-to-get model. The day you click "I agree" on an Android phone or on Apple Intelligence, suddenly you are the product; you are getting something back, but that give-to-get balance is where the regulator, in my opinion, has a far bigger role to play than in enterprise regulation. Enterprise: God made the world in seven days because he had no installed base. Go and talk to enterprise CIOs on the ground; their reality is that they have a big architectural problem, their data landscape is broken. So they have to pivot from process and workflow to data first, a big shift. They need to start with lineage and metadata; most of these companies don't even have metadata. Do metadata discovery, use techniques like knowledge graphs to understand the metadata, and then organize your data so that AI can benefit from it. The third big play is govtech: government-to-citizen engagement, G2C, is massive, and that is where the sovereign AI play comes in, where the work Sarvam is doing, or the whole BharatGen effort, is important, because that is where you can host citizen service platforms. And the last is critical national infrastructure: air-gapped networks, private AI, and defense.
So I think we need a broken-up view of this whole thing rather than one brush to paint them all. And the last point: sovereignty is all about choice. Making a choice, like him walking here; it is a great choice. I can run on hyperscaler A or B, I can run on Yotta, I can run on Sify, I can run on any of them, or I can run on my own infrastructure: it is all about having that choice. And second: AI exists for human good, so put people back at the center. We have suddenly pushed humans to the side and made everything about AI.
It is about people using AI, with AI surrounding them. That is my thought.
Great, thank you. We have had a lot of good nuggets from everyone, and we will continue this conversation afterwards. At NASSCOM, sovereign AI is a big initiative for us; we have been driving it for the last three, three and a half years. Ganesh knows that, Sunil knows that, and we have worked enough with the services companies. To keep it going: this is not an end point. We have to think about sovereignty, and about how India builds AGI capability, quantum-AGI capability; that is the journey we are on at NASSCOM. We are currently writing a policy document for the government on a sovereign AI and AGI roadmap.
And the QR code is there; it will be here, and I want all of you to have a look. It is a draft; please work on it. I think that is that. Yeah, Ganesh?
The potential is so immense; we have not even scratched the surface, not even touched the tip of the iceberg. Sovereignty is critical because the amount of inefficiency in that entire stack needs to be done away with. GPUs were never designed for building these models; they are legacy. How can we use even the large workloads we are running to do better? Can ASIC design give us better model-serving engines? There is so much to do. I think everyone should get inquisitive about the entire stack; that is where sovereignty comes from.
Absolutely. We are trying to do that in a collaborative way with all of our contributors; please be a collaborator. We will have a QR code, so please respond to it and give your inputs. And with that, thank you to my panelists. I loved it, and I hope you loved it too. Thank you again.
Just one thing I want to add: watch on the 21st, when the PM is inaugurating a new JV which HCL is announcing with Foxconn. It is called India Chips Limited. I would call it patient capital: a 16- and 32-nanometer fab they are creating, basically like an OSAT unit. It is going to come out after 5 years; you have to build the whole thing. But it is also about building that skill, correct? That is the big important thing, and we have to start now; we cannot wait 5 years down the line.
Thank you so much to our panelists. I request the panelists to please stay back for a group photo. You can also access the report that Ankit has been talking about via the QR code displayed on the digital background, and leave your feedback. I am also happy to announce an MOU being signed between Amrita Vishwa Vidyapetam and NASSCOM right now. Thank you.
“The session opened with a formal inauguration of the Sovereign AI Research Report produced by Amrita Vishwa Vidya‑peetham, with senior representatives from Amrita – Pro‑Vice‑Chancellor Dr Manisha V Ramesh and Dr Shiva Ramakrishnan – attending.”
The knowledge base records that Amrita Vishwa Vidyapeetham participated in the report launch ceremony, confirming Amrita’s involvement in the inauguration [S2].
“Compute scarcity was identified as the primary bottleneck for building sovereign AI capability in India.”
Multiple sources describe infrastructure and compute limitations as the critical bottleneck for AI development in India, confirming the report’s emphasis on compute scarcity [S105] and the national goal to deploy tens of thousands of GPUs [S58].
“The government has created a shared‑compute facility that aggregates capacity from multiple providers, currently totalling about 38,000 GPUs, with an additional 20,000 announced.”
The knowledge base notes India’s mission to deploy over 38,000 GPUs as public infrastructure, confirming the reported 38,000-GPU figure; the additional 20,000 announced is not covered in the sources, so only the 38,000 figure is confirmed [S58].
“The shared‑compute facility is part of a collaborative framework called “Maitri” that provides shared access to compute, data, and AI models as digital public goods.”
S106 describes the Maitri platform as a collaborative framework offering shared compute, data, and model access, adding detail to the report’s description of the shared‑compute facility.
The panel shows strong convergence on three pillars: (1) massive, affordable compute infrastructure, including shared‑compute models and a future domestic chip fab; (2) robust data governance and interoperable data stacks; (3) human‑centric, multilingual, voice‑first AI with clear alignment and executive sponsorship. Skill development, collaborative consortia, and government support for early inferencing are also widely endorsed.
There is high consensus across industry, academia and policy makers, indicating a unified national agenda for sovereign AI. The alignment suggests that forthcoming policies are likely to focus on shared compute facilities, data sovereignty frameworks, and large‑scale skill‑building programmes, which could accelerate India’s AI capabilities while ensuring ethical and inclusive outcomes.
The panel converged on the need for a sovereign AI ecosystem but diverged on how to achieve it. Major friction points include the preferred method of scaling compute (centralised shared pool vs distributed edge‑centric scale‑out), the role of government funding versus regulatory or private‑sector mechanisms for early inferencing, and contrasting philosophies on workforce development (mass upskilling vs elite engineering). These disagreements reflect differing priorities between immediate infrastructure deployment and longer‑term strategic autonomy.
Moderate to high – while all participants share the overarching goal of sovereign AI, the contrasting approaches to compute provisioning, funding models, and talent strategy could impede coordinated policy implementation unless reconciled.
The discussion was driven forward by a series of pivotal insights that moved the conversation from high‑level enthusiasm to concrete challenges and solutions. Sunil Gupta’s focus on compute scarcity anchored the dialogue in infrastructure realities, while Kalyan Kumar’s emphasis on the data stack and skill transformation broadened the technical and talent dimensions. Brandon Mello shifted the lens to adoption barriers, prompting a consensus on the need for executive sponsorship. Ganesh Ramakrishnan’s calls for interoperability and interdisciplinary co‑design introduced a strategic, ecosystem‑wide perspective that tied together infrastructure, data, and talent. Together, these comments created a layered narrative: first identifying the foundational bottlenecks, then outlining the necessary technical and human infrastructure, and finally framing the ethical and policy imperatives for sovereign AI in India. This progression shaped a nuanced, actionable roadmap rather than a purely promotional dialogue.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.