Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit

20 Feb 2026 14:00h - 15:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel, moderated by Amanraj Khanna, examined how India can turn its ambitious AI policy announcements into real-world adoption and scale [1][7-9][11-14][20-22]. Khanna highlighted recent commitments such as Microsoft’s $20 billion pledge for India, Google’s $15 billion, and partnerships like Anthropic-Infosys, noting that deployment, not just announcement, is the challenge [11-14][15-18][20-22]. He framed the discussion around two perspectives: the national R&D infrastructure led by CDAC and the needs of large Indian enterprises represented by Intel [21-24][27-35][36-38].


CDAC, operating under the Ministry of Electronics and IT, has built the PARAM family of supercomputers, delivering roughly 48 petaflops today and targeting 100 petaflops by year-end across 60 sites [45-52]. About 15,000 researchers and many MSMEs use these clusters for workloads such as drug discovery, protein folding, weather forecasting, oil exploration and computational fluid dynamics [53-60][56-61]. CDAC also provides hands-on support to government agencies and startups through initiatives like Paramuthkarsh in Bangalore [55-57][62].


Nitin Bajaj explained that Indian enterprises struggle to move from pilots to production because they must decide between on-prem, cloud, or edge deployments while balancing ROI, model selection and data quality [70-78][79-84][86-88]. He noted that many firms have already purchased GPUs but remain in pilot mode due to unclear cost-benefit calculations and rapidly evolving AI models [93-94][95-102]. Bajaj promoted “frugal AI” – leveraging Intel CPUs with integrated GPUs/NPUs to run 7-20 billion-parameter models efficiently, reducing the need for dedicated GPUs in many use cases [156-162][158-161].


Vivek Kanneja argued that full technological sovereignty is unrealistic in the short term; India can import silicon (e.g., NVIDIA, Intel, AMD) while keeping the software stack, models and applications under domestic control [115-124][125-138]. He added that CDAC is developing a RISC-V based GPGPU expected by 2029-30, but until then reliance on external chips will continue [136-138]. Both speakers identified talent gaps: CDAC sees graduates strong in theory but lacking practical MLOps experience, suggesting curriculum reforms and capstone projects [173-182]. Energy efficiency was raised as a critical issue; CDAC employs liquid cooling and power-aware design to achieve PUE around 1.2, while Intel reports data-center PUE of 1.06 and 15% power-efficiency gains from new packaging [188-199][207-212].


Kanneja envisions success as AI being embedded in many workflows to simplify lives, whereas Bajaj measures success by widespread, affordable AI use that even a street vendor can leverage, supported by robust Indic models [223][225-231]. The discussion concluded that coordinated advances in infrastructure, enterprise readiness, talent development and sustainable practices are essential for India’s AI ecosystem to mature over the next few years [20-22][173-182][188-205][223][225-231].


Keypoints


Major discussion points


India’s AI infrastructure and policy momentum – The panel opened by highlighting recent policy announcements and massive private-sector investments (e.g., Microsoft’s $20 bn, Google’s $15 bn) and the role of CDAC’s PARAM supercomputing series, which now provides about 48 petaflops and is slated to reach ~100 petaflops by year-end, serving researchers, MSMEs and national missions such as drug discovery and weather prediction [8-14][45-53].


Enterprise hurdles in scaling AI from pilots to production – Nitin explained that Indian firms wrestle with choosing the right deployment model (on-prem, cloud, edge), quantifying ROI, and handling data-quality issues that cause proof-of-concepts to stall. Both speakers stressed the need for robust MLOps, “frugal AI” solutions, and clearer cost-performance trade-offs before large-scale roll-out [70-84][95-102].


Sovereignty versus global technology dependence – Vivek addressed the practical limits of full domestic control, noting India lacks advanced-node fabs and GPU IP, so a pragmatic approach is to import silicon (NVIDIA, Intel, AMD) while keeping the software, model-orchestration and applications under sovereign control. He also mentioned CDAC’s own RISC-V-based GPGPU prototype expected around 2029-30 [115-124][125-138].


Talent and capability gaps in AI deployment – Both panelists agreed that while India produces many bright engineers, curricula focus on theory rather than real-world MLOps, data cleaning, and large-model deployment, creating a bottleneck that must be addressed through hands-on capstone projects and industry-academia collaboration [173-182][184-185].


Energy and sustainability of AI compute – The discussion turned to the power demands of supercomputing and data-center AI workloads. Vivek highlighted power-aware chip design, liquid-cooling and low PUE (~1.2) for CDAC systems, while Nitin cited Intel’s ultra-efficient data-center PUE of 1.06 and newer ribbon-fed power-delivery technologies that improve efficiency by ~15% [188-199][207-212].


Overall purpose / goal of the discussion


The session was convened to “translate that vision into adoption and scale” – i.e., to bridge the gap between India’s ambitious AI policy and infrastructure (government R&D, supercomputing) and the practical needs and constraints of large Indian enterprises, identifying where the two tracks intersect or diverge and outlining what success should look like in the next few years [20-22][26-27].


Tone of the discussion


– The conversation began enthusiastic and forward-looking, celebrating recent policy wins and investment announcements [5-10].


– It quickly shifted to a pragmatic, candid tone, with speakers openly describing technical constraints, ROI dilemmas, data-quality challenges, and talent shortages [70-84][95-102][173-182].


– Towards the end, the tone became solution-focused and hopeful, emphasizing concrete steps (sovereign stack control, frugal AI hardware, energy-efficient designs) and a vision of widespread AI deployment across Indian society [115-138][188-212][223-231].


Overall, the dialogue moved from high-level optimism to a realistic appraisal of obstacles, and finally to constructive pathways for achieving scalable, sovereign, and sustainable AI in India.


Speakers

Amanraj Khanna


Area of Expertise: Technology policy, AI ecosystem bridging government and enterprise.


Role / Title: Partner and Managing Director for India at the Asia Group; Moderator of the panel. [S2]


Vivek Kanneja


Area of Expertise: High-performance computing, supercomputing infrastructure, AI research, cybersecurity, national R&D.


Role / Title: Executive Director, Center for Development of Advanced Computing (CDAC). [S3][S4]


Nitin Bajaj


Area of Expertise: Enterprise AI adoption, sales and technology leadership, cloud/edge/CPU/GPU solutions for large Indian enterprises.


Role / Title: Director of Sales for Conglomerate Accounts, Intel India. [S5][S6]


Additional speakers:


Sangeeta Reddy


Area of Expertise: Healthcare leadership, AI applications in health services.


Role / Title: Joint Managing Director, Apollo Hospitals.


Full session report: comprehensive analysis and detailed insights

India’s AI agenda was framed by moderator Amanraj Khanna as a shift from high-profile policy announcements to tangible, large-scale adoption. He opened by noting the “palpable” energy at the summit and highlighted recent commitments – Microsoft’s $20 bn pledge for India, Google’s $15 bn investment and the Anthropic-Infosys partnership – as evidence of a “truly fascinating moment” in the country’s AI ambitions [5-10][11-14][15-18]. He also noted the launch of Pax Silica earlier that morning, underscoring the pace of new AI initiatives [4-5].


The panel brought together Vivek Kanneja, representing CDAC, the national R&D hub under the Ministry of Electronics and IT, and Nitin Bajaj of Intel, speaking for large Indian enterprises [20-24][27-35][36-38].


CDAC’s compute infrastructure


CDAC’s mandate is to deliver supercomputing capacity through the National Supercomputing Mission. It has built the PARAM family of machines, now delivering roughly 48 petaflops across the National Knowledge Network, with a target of about 100 petaflops by year-end through 60 installations [45-62]. Approximately 15,000 researchers run jobs on these clusters, and the infrastructure also supports MSMEs via the Paramuthkarsh centre in Bangalore [45-62]. Typical workloads include drug discovery, bioinformatics, protein folding, molecular modeling, weather prediction, oil exploration, finite-element modeling and computational fluid dynamics, with CDAC providing hands-on assistance to government agencies and startups [57-60][61].


Enterprise adoption challenges


Bajaj explained that Indian firms often stall at the proof-of-concept (POC) stage because they first need to identify concrete use cases (such as smart manufacturing, retail analytics or document search) before confronting the “biggest gap” of choosing an appropriate deployment model (on-prem, cloud or edge) and quantifying return on investment [70-78][79-84]. He noted that even when organisations have purchased GPUs, they remain in pilot mode because the cost of full-scale deployment and the rapid evolution of models create uncertainty [93-102][95-100]. Kanneja highlighted that many projects stall after the POC stage due to data-cleaning and MLOps gaps, a view echoed by Bajaj’s comments on ROI and deployment-model uncertainty [210-215][70-78].


Both panelists stressed cost-effective deployment. Bajaj explicitly branded his approach “frugal AI,” advocating the use of Intel CPUs with integrated GPUs/NPUs to run 7-20 billion-parameter models efficiently, thereby reducing the need for dedicated GPUs [156-162]. Kanneja added that choosing between GPUs and simpler VM setups can also achieve cost-effective outcomes [210-215].


Sovereignty and chip strategy


When asked about AI sovereignty, Kanneja said that end-to-end independence is not feasible today. India lacks advanced-node fabs and GPU IP, so a pragmatic, short-term approach is to import silicon (e.g., NVIDIA, Intel, AMD) while retaining control over software, model orchestration and applications [115-124]. He noted a longer-term ambition to design a RISC-V-based GPGPU, expected around 2029-30, emphasizing that the interim focus must remain on sovereign control of the stack above the chip [136-138].


Data-sovereignty versus performance


Bajaj pointed out that data-sovereignty requirements differ by sector: banking and healthcare demand localisation, whereas manufacturing and retail often prioritise speed and accuracy by using cloud APIs, sometimes deploying at the edge for latency-sensitive tasks [149-156][157-166]. He illustrated “frugal AI” with a prompt-based engine that can handle 15-20 prompts per second on a CPU, avoiding the expense of a GPU-only solution [164-166].
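Bajaj’s “is 15-20 prompts per second good enough” argument is, at bottom, a capacity-planning check. A minimal sketch of that arithmetic, where the function name and all throughput numbers are illustrative assumptions rather than vendor benchmarks:

```python
import math

def devices_needed(required_rate: float, per_device_rate: float) -> int:
    """Smallest number of devices whose combined throughput meets the target rate."""
    return math.ceil(required_rate / per_device_rate)

# Suppose a document-search service must sustain 100 prompts/s in aggregate,
# and one CPU delivers the mid-point of the quoted 15-20 prompts/s range:
print(devices_needed(100, 17.5))  # a handful of CPUs cover the load
print(devices_needed(100, 200))   # a single faster (GPU-class) device would also suffice
```

The point of the sketch is that once the required rate is bounded by human reading speed, the cheaper device often clears the bar, which is the cost-versus-performance trade-off Bajaj describes.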


Talent and skills considerations


Kanneja described a current talent gap: Indian graduates possess strong theoretical foundations but lack practical MLOps experience, exposure to messy real-world data and skills in large-model deployment; he called for curriculum reforms and capstone projects that simulate peta-scale data handling [173-182]. Bajaj, in contrast, highlighted India’s youthful demographic (average age 13-25) as a catalyst that will rapidly narrow the gap, noting his own learning from younger engineers [184-186]. Thus, both panelists discussed talent considerations, differing on the immediacy of the shortfall.


Energy consumption and sustainability


Kanneja explained that CDAC’s supercomputers employ power-aware VLSI techniques, clock-gating, and a mix of liquid and water cooling, achieving a Power Usage Effectiveness (PUE) of roughly 1.2, significantly better than conventional water-cooled systems [188-199]. He called for a benchmark of energy consumption per token for both training and inference [200-204]. Complementing this, Bajaj reported Intel’s data-centre PUE of 1.06, achieved through ribbon-fed power delivery and advanced packaging, and stressed that judicious model selection (e.g., using CPUs for 7-8 billion-parameter models) can curtail power demand [207-216].
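The gap between the two quoted PUE figures can be made concrete with a little arithmetic. The sketch below assumes a hypothetical 1 MW IT load purely for comparison, and the `joules_per_token` helper is an illustrative form of the energy-per-token benchmark Kanneja proposed, not a standardised metric:

```python
def overhead_kw(pue: float, it_load_kw: float) -> float:
    """Facility overhead (cooling, power delivery, losses) implied by a PUE figure.

    PUE = total facility power / IT equipment power, so overhead = (PUE - 1) * IT load.
    """
    return round((pue - 1.0) * it_load_kw, 1)

def joules_per_token(avg_power_w: float, runtime_s: float, tokens: int) -> float:
    """One plausible form of an energy-per-token benchmark: joules consumed per token."""
    return avg_power_w * runtime_s / tokens

# Compare the two PUE figures quoted in the session, for an assumed 1 MW IT load:
print(overhead_kw(1.2, 1000))   # overhead at PUE 1.2: 200.0 kW
print(overhead_kw(1.06, 1000))  # overhead at PUE 1.06: 60.0 kW

# Hypothetical inference run: 500 W average draw, 10 s, 2000 tokens generated:
print(joules_per_token(500, 10, 2000))  # 2.5 J per token
```

At the same IT load, the lower-PUE facility spends less than a third as much power on non-compute overhead, which is why both speakers treat PUE as a headline sustainability number.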


Vision for the next three to five years


Kanneja envisioned AI “deployed in a lot of workflows and making life much simpler and enjoyable for us” [223]. Bajaj expanded the view to a societal scale, stating that India should move from being a top data-consumer to a leader where even a “Sabziwala” can leverage AI-driven insights, supported by Indic models and mass-scale deployments [225-231].


In summary, the panel identified four inter-linked pillars for India’s AI future: (1) expanding sovereign-controlled compute infrastructure (PARAM supercomputers and eventual domestic GPUs); (2) enabling enterprises to move beyond pilots through clear ROI frameworks, frugal hardware choices and robust MLOps; (3) addressing the talent pipeline with practical curriculum reforms while leveraging the country’s demographic dividend; and (4) ensuring energy-efficient, sustainable operations via low-PUE designs and energy-per-token benchmarks. Consensus emerged on the POC-to-production bottleneck, the primacy of ROI, and the need for energy-efficient designs, while disagreements persisted around the depth of the talent gap, the optimal path to chip sovereignty and the precise efficiency targets. The session closed with Amanraj thanking the panelists and inviting Sangeeta Reddy of Apollo Hospitals to speak, underscoring the broader health-sector interest in AI [236-239].


Session transcript: complete transcript of the session
Amanraj Khanna

India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managing director for India at the Asia Group. It’s also my privilege to be the moderator here today. I have to say the energy here is still so palpable, even after five days of this. So it’s absolutely brilliant to be here with you all. We’ve had a truly fascinating moment in India’s AI ambitions. Some massive policy announcements. We just had Pax Silica announced this morning. Very exciting indeed. Significant infrastructure investments. Brad Smith, if you heard him yesterday: Microsoft announced $50 billion in the global south alone, after the $20 billion that has been announced for India. Google, as you know, $15 billion. And growing enterprise adoption.

Anthropic announced its partnership with Infosys, Tata, OpenAI. I’m sure you’re all watching this. But announcement is one thing. Deployment and then achieving scale quite another. So that’s why this panel matters. Today’s conversation brings two critical perspectives to this fundamental question: what does it take to translate that vision into adoption and scale? One of my two distinguished speakers brings unparalleled insight into infrastructure being built through national R&D institutions. The other sees the reality of what India’s largest enterprises actually need when they deploy AI. So we are here to have an honest conversation about these two tracks: where do they connect, and perhaps where they don’t. So with that, let me introduce my distinguished panelists. First I have, to my immediate left, Mr.

Vivek Kanneja. Vivek is the executive director of the Center for Development of Advanced Computing, CDAC as it’s known. CDAC has built the PARAM supercomputing series, providing AI compute infrastructure for government departments and national missions. It conducts cutting-edge research into high-performance computing and cybersecurity, and also trains thousands of engineers annually in advanced computing and AI. Vivek, of course, has held multiple senior leadership positions within CDAC and has guided national initiatives in these critical areas. Welcome, Vivek. I also have, to my far left, Nitin Bajaj. Nitin is Director of Sales for Conglomerate Accounts at Intel India. He leads Intel’s engagement with India’s largest enterprises on their digital transformation and AI adoption journeys. He has over 28 years of global experience in sales and technology leadership and orchestrates a broad partner ecosystem.

These include system integrators, ISVs, and cloud providers, delivering Intel-based solutions spanning cloud, AI, HPC, 5G, edge, and end-user computing. So in summary, he sees firsthand what drives enterprise infrastructure decisions and what actually prevents companies from moving AI from pilot to true scale. So with those introductions, let’s get right into it. Vivek, why don’t I start with you? So here’s my first question, Vivek. CDAC has built the PARAM supercomputing series and AI compute infrastructure. What compute capabilities does CDAC actually provide today? Who uses them? For what workloads? And what are the key constraints that you see operating within?

Vivek Kanneja

Okay, thanks. So as you know, CDAC is a scientific society under the Ministry of Electronics and IT. And one of the mandates that we have is to build supercomputing capacity in the country. This is the mandate which is given to us under the National Supercomputing Mission, where we have developed a series of supercomputers under the brand name PARAM. We started in the late 80s, starting with the PARAM 8000. And now we have the PARAM series of supercomputers which are installed with our own software. About 48 petaflops of overall performance is installed in the country, connected over the National Knowledge Network, or the NKN as it is called.

This capacity is going to be augmented to about 100 petaflops by the end of this year, with 60 installations. Most of these installations are today being used by researchers. About 15,000 researchers fire jobs across these machines on the NKN. A lot of it is also being used by MSMEs. For example, we have opened Paramuthkarsh, which is housed at our Bangalore centre, for use by start-ups and MSMEs. The kind of applications that run here include drug discovery, bioinformatics, protein folding, molecular modeling, weather prediction, and other number-crunching problems: oil exploration, finite element modeling, computational fluid dynamics. All such problems are being run across these clusters by researchers.

We also have a lot of expertise in various domains, which we have developed in-house over a period of years, where we are hand-holding a lot of these agencies; a lot of government agencies are working with us on this.

Amanraj Khanna

Thanks so much, Vivek. Lots of threads to pull on there, but for a moment, let me go to Nitin. Nitin, you work with some of India’s largest enterprises, some of our national champions. So when these enterprises commit to AI, which you must have seen increasingly, what are some of the actual barriers that prevent them from moving from pilot projects to the production-scale deployments that everyone envisions, especially at events like this?

Nitin Bajaj

Thank you. First of all, thank you for inviting me. It’s a privilege to be here, talking in front of such an esteemed crowd. So basically, I think I’ll break this into two or three pieces. One, be it Indian enterprises or global enterprises, I think everybody is grappling with the same sort of problem statement. And everybody is trying to find those use cases which are very, very pertinent for their own enterprises. Certain enterprises are in the manufacturing domain, some are on the retail side. So the typical use cases that they have could be around smart manufacturing, smart retail, or specific generic use cases where they want to do a lot of document search.

They have to have fine-tuned search on the policies, the T&Cs and things of that sort. So essentially, I see it as twofold. One, they are looking at the speed, and everybody is trying to figure out the best ROI. So I think the biggest gap today is what to use: whether to use it on-prem or whether to go on cloud and use the open APIs which are available to them. Then, once those use cases are ready, as you said, from pilot to production, what is the final cost of that deployment? And then the third angle that comes in is whether to centralize all of this or to take it to the edge.

So there is no single answer. And when you think of all the ecosystem providers that are there today, be it from silicon to ISVs to system integrators, everybody has a pocket of expertise in their own sense. But today there is no single formula. And the entire AI journey is changing so rapidly. Models are being dropped at the speed of light, and the whole ecosystem, from silicon to OS to everything else, is changing so fast that even the enterprises are trying to figure out what is the best deployment model for them, what is the best ROI that they can get out of it. But in the midst of all of this, in pockets, a lot of these enterprises are trying to see what specific use cases they can bring to the fore that can bring some incremental benefit to whatever operations they are running in their organization.

Now I can give multiple examples. For example, in a manufacturing domain, there could be surveillance kinds of use cases, multimodal use cases; there could be use cases around how to look at inventory, and then a complete digital twin, dark factory kind of scenario. In the case of retail, it could be around preventing thefts, the pilferage that happens, or doing customer analytics. And then, as I said, a lot of document search kinds of examples are going on. But in most of those cases, the whole decision-making is between edge versus on-prem, cloud versus sovereign data centers, what kind of models they should use, and how to find the right ROI, which is where we feel frugal AI is what we propose to the industry.

And I’ll talk more about that maybe later. But what is the best deployment model that can really help an enterprise scale at a level, at a cost point, which is really making sense to them?

Amanraj Khanna

Let me ask you a very quick follow-on question on that. Do you see that Indian enterprises are increasingly sophisticated in making these choices? And, of course, Intel works globally, right? So how does India compare in terms of its maturity, its sophistication, compared to the other markets where Intel also operates?

Nitin Bajaj

Clearly, in terms of use cases, I think we have the edge. But again, the veracity of data, it’s a big problem. Second, I would say that, when you think of it, a year back almost everybody wanted to buy GPUs, and a lot of these enterprises have bought systems which are powerful enough to run all kinds of use cases. But still they are in that pilot phase, again because of that ROI factor. So things are maturing, of course. Enterprises are becoming smarter. From LLMs we are now looking at SLMs. So they are trying to figure out the right silicon where they have to land their workloads. So as things are emerging, I think things are getting better. But yeah, it’s still some time before you see those live deployments coming out.

Vivek Kanneja

For the benefit of the larger audience, sorry for interjecting, just to add to what he just said: we are seeing a lot of places where people are not being able to come out of the POCs. I think one of the major reasons, at least that I have personally seen, is that people are very happy with the POCs. They can train them on curated data sets. But once it actually goes and hits real-life situations, where the data needs to be cleaned up, it’s not clean, you have no proper experience in actual deployments of the MLOps, you have done it in a canned manner, then suddenly the reality hits that, no, it’s not that simple.

And as you talked about the ROI, then you have to make those choices: whether I should have an on-prem setup, do I really need a GPU for my problem, or can I work across multiple VMs on simple IT infrastructure? So those are the choices. Hopefully, in the coming years, intelligent choices will be made. We will start to see more MLOps engineers coming out and deploying these things at scale, because POCs are fine, but the real revenue will come only once you actually deploy at scale.

Amanraj Khanna

Understood. Thank you so much for those perspectives. Unlike yourselves, you know, you’re both technologists; I’m a little bit in the policy space, on the fringes of this. I work on tech policy, and I haven’t had a single conversation here with, you know, a foreign investor which hasn’t touched on sovereignty or dependency. So with my next question, I’ll bring you over to my area, because I’d love to pick both your brains on this. So Vivek, let me talk to you about this first, right? India talks about sovereignty, but CDAC still relies on global technology, right, whether that’s chips, systems, you know, software stacks. So realistically, what can India build domestically versus what will we always need to source globally?

And where should we focus our capability? So, a question for you, as someone who’s really on the cutting edge.

Vivek Kanneja

Okay, so let me answer that in both a technically and politically correct way. See, when you talk of sovereignty, let’s see what it really means. Do you want to be completely independent in the entire vertical, right from silicon up to the application? Is that really possible, right? Let’s say I need to design a GPU. It’s a good aspirational goal. But do I have the wherewithal today to do the entire thing in-house? Probably not. I don’t have the IPs. We can start, definitely. We don’t even have today a fab that can give me 3-nanometer or below production and packaging capabilities. So I think a more pragmatic approach is to have maybe the silicon coming from outside.

Everything above that should be under my control. So you should be able to control all your critical choke points. That’s the model that India AI has taken. For example, they have created a farm of GPUs which are available, under control, freely or at a reasonable cost for the developers. The models which are being built on top of that are under your control. How those models are going to be orchestrated is under your control. The applications that will use those models are under your control. So for me, that is the sovereignty where you are getting the maximum ROI. Should I really be competing against an H100 or a B200 from NVIDIA? Probably not in the short term.

But yes, as an aspirational goal, just to let you know, CDAC is designing its own GPGPU based on RISC-V. We will probably have something by the end of 2029-30. But till that time, we really need to have a lot of this entire stack under sovereign control, maybe using chips from outside, whether it is NVIDIA, whether it is Sapphire Rapids or Granite Rapids from Intel, or AMD. But everything above that, all my critical choke points, should be under my control.

Amanraj Khanna

So I have to compliment you. That was a very candid and pragmatic answer. So compliments to you. Nitin, can I reframe that question a little bit for you? So, you know, there’s, as you know, in the same vein, there’s a significant policy focus on data sovereignty and data localization as well, right? So when your enterprise customers are making AI infrastructure decisions, how much do these factor into those choices? And how do they compare with cost and performance considerations as well? So has this calculus shifted with some of the policy developments that we’ve seen over the past couple of years in AI?

Nitin Bajaj

Well, I would say not really. It all depends on the industry and the market, on the industry that the enterprise is in. For the banking industry, for the healthcare industry, data sovereignty is very, very important. For a manufacturing industry, or for, say, retail or any other industry, they are trying to build use cases on the cloud, because that’s where they feel they can build those use cases very, very quickly. They can simply call the APIs available there, and they see that the performance is much better and they get better accuracy. So it’s a mix of both. Then again, even in a manufacturing environment, as I was calling out, in an OT environment a lot of these manufacturing firms would want their data to reside within their perimeter, because it is so, so close to them.

Then again, when it comes to the deployment side, they’re looking at an edge deployment, which stays within the perimeter. So it’s a mix of both. Now, finally, again to the point that I was making in the beginning: for anything that has to scale, price becomes a key driver. So it is about, okay, I have a model that I have fine-tuned on the cloud; now, when I have to take it to deployment, can I use something locally which is available today with me, without making a lot of investment, and then can I scale it? Which is where, when Intel talks about it, we basically focus on frugal AI. Today the Intel Core and Core Ultra CPUs have a GPU, an NPU and a CPU all combined in a single processor, which gives you enough capability to run maybe a 7 or 8 billion parameter model. So the typical requirements that you will have on the edge are very well suited to this CPU itself. And when it comes to the data center side, the Xeon 6 processors today are able to run a 20 billion parameter model very easily.

They can go up to 80 billion parameters, depending on the specific use case. So, do I need a GPU in every instance? That is the first question that we are trying to ask. And today everybody has a CPU in their environment. So can they reutilize that? Can they test it out? Can they look at performance levels? One example that I discuss with my customers: if you’re looking at, say, a prompt-based engine where you need to do a document search, typically a human eye can read about 10 prompts in a second if somebody is a fast reader. Now, if a processor can give you 15 to 20 prompts, is this good enough as performance for you, or do you want 200 prompts in that particular second?

Maybe 15, 20 prompts is good enough. So that’s where the cost versus performance comes in. That’s where one has to be very, very calculated in terms of what end use they are looking at and what would best suffice that particular usage. So again, it’s a mix of this localized data versus what can go on to the cloud. And once you look at the scale of it, again you have to look at the cost, because when you’re scaling it on the cloud, the cost may be very, very different from what you can get on the on-prem side. I’m not a proponent of on-prem versus cloud; for me, both of them bring their advantages. But what I’m trying to say is that the customer has to really look at what their end use is and at what cost they want to...

Amanraj Khanna

Understood. So we need to diversify our approaches and have a product-to-mission fit, which is absolutely critical. Think of it: in the past, everybody used to think only mainframes could solve the problem. Today the applications are no more running on mainframes; they came on to CPU-level things, and today we are looking at microservices. So everything has evolved over time, and the same things are happening on the AI side today. Understood. So, one quick question that’s on everyone’s mind, to both of you. From each of your respective perspectives, whether that’s government R&D infrastructure or enterprise deployment, is talent and the capability gap still a critical choke point?

So Vivek, why don’t I start with you on this?

Vivek Kanneja

It is. It's unfortunate, but yes, it is. We do find a set of very bright engineers coming out, but most of them are trained in a good theoretical understanding of what machine learning is. They are good at mathematics and the basics, but when it comes to actual deployments in the field, that's where we are lacking. Maybe we need to take a serious look at our college curricula: how do I train large models, how do I deploy large models using MLOps? Because today, as I said, most of these kids are working on curated datasets and on standard test and validation cases.

But when it comes to real life, life is not that rosy. You have data which is missing, data which is skewed, data which needs to be cleaned; you have real-time constraints and security considerations. Those are not part of the curriculum. So I think some capstone projects where students handle peta-scale data need to be put in place. Theory is fine, but there's still a lot to learn on the practical side.

Nitin Bajaj

I'll give a different perspective. The way I look at India versus other countries: elsewhere it's an aging population, but for us it is a booming one, with a large cohort aged roughly 13 to 25 that has grown up exposed to AI. So today we may see some gap in AI capabilities, but two or four years down the line I think it will be bridged very, very quickly; we have that demographic benefit, at least in the short run. Of course, as an individual I am also learning from the kids today about how AI can be deployed. Gaps will be there, but this is a learning curve for everybody.

Amanraj Khanna

Thanks, Nitin. I want to get to just a couple more questions, and we'll try to be very quick. One I've been wanting to ask, and one that I get asked often, is the energy and sustainability question. Supercomputing uses huge amounts of energy and can have societal impacts, whether that's CDAC's supercomputing or the large data centers that Intel works with. So how do we think about the energy and sustainability implications, especially in the Indian context? Vivek, perhaps start with you.

Vivek Kanneja

From my perspective, this has to be addressed at two levels. One is something we have already been doing. Coming from a VLSI background, I can tell you that today's standard techniques used across ASIC designs are power-aware designs: you have multiple power islands, you have clock-tree gating, you switch off cores that are not being used. That's from a design perspective. But when it comes to platform design, there are smart choices being made. For example, in CDAC's solutions we are using liquid cooling alongside water cooling in a ratio of almost 70 to 30.

And we are slowly moving the entire setup to pure liquid cooling. There are other techniques which use only air cooling. Ultimately, your PUE is what determines the outcome: we typically see a PUE of around 1.2, compared with about 1.4 to 1.5 for a conventional water-cooled setup. There are definitely green norms being proposed. The question to ask, and we need to benchmark this across all the models, is: what is the energy I spend per token for training or for inferencing? That should be a critical benchmark. We need to seriously look at how to optimize our models to be more power-aware.

Can I have compressed models which take less energy? So yes, energy is one of the critical factors, and hyperscalers especially will need huge amounts of power. There is this joke we keep telling that *** But along with the hyperscaler, you also need a small power plant designed together with it.
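The metrics Kanneja invokes can be made concrete with two small formulas: PUE is total facility power divided by IT equipment power, and energy per token is sustained power draw divided by token throughput. A minimal sketch follows, using the PUE figures quoted in the discussion; the inference-node numbers (700 W, 50 tokens/s) are invented purely for illustration.

```python
# PUE and energy-per-token, the two efficiency metrics raised in the talk.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def energy_per_token_j(avg_power_w: float, tokens_per_second: float) -> float:
    """Joules spent per generated token at a given sustained power draw."""
    return avg_power_w / tokens_per_second

# Liquid-cooled cluster vs conventional water cooling (ratios from the talk).
print(pue(1200, 1000))  # -> 1.2
print(pue(1450, 1000))  # -> 1.45

# Hypothetical inference node: 700 W sustained at 50 tokens/s.
print(energy_per_token_j(700, 50))  # -> 14.0 joules per token
```

A standard energy-per-token benchmark of this kind is exactly what Kanneja argues is missing when comparing models for power-aware deployment.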

Amanraj Khanna

Thank you. Nitin, to you.

Nitin Bajaj

I'll make two or three points here. One, from a manufacturing point of view, Intel is utilizing the latest technologies like RibbonFET and PowerVia, which improve power efficiency by 15%. Second, we are running our own data centers at a PUE of 1.06, which is the most efficient data-center PUE you could see; there's a white paper on intel.com, and I'd encourage those interested to look at it. Third, I would say again that power is a problem, or rather the first and foremost ingredient in running those data centers. So one has to be very cautious about what kind of models you are running and where they land.

So if you're more judicious in your model selection, then of course you can save power in meaningful ways.

Amanraj Khanna

So, one final question; I realize we're out of time, so let's keep it to a quick sentence if you can. When we assess India's progress over the next three to five years, what does success look like to each of you?

Vivek Kanneja

Success for me would be AI actually deployed in a lot of workflows, making life much simpler and more enjoyable for us.

Amanraj Khanna

Thank you.

Nitin Bajaj

For me, it's like this: we were once ranked around 150 in terms of data usage; today we are number one. The next step is raising the general intelligence of people. Today we are consuming data mostly for media and entertainment. If we can consume data to improve the general intelligence of the public instead, that will make a large-scale impact on society and India at large. So in two or three years, if a sabziwala (vegetable vendor) can figure out how to up-level their situation, that would be the best outcome. And the Indic models and all the other use cases coming out should be able to support that.

And then there will be a mass -scale deployment of AI across the board.

Amanraj Khanna

Thanks so much, Nitin. I see that time's up; I wish we could pick your brains further. This has been a truly fascinating conversation, and thank you for being so candid with all your responses. Please join me in thanking both Vivek and Nitin. I would now like to invite Sangeeta Reddy, Joint Managing Director, Apollo Hospitals, to give her remarks. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (26)
Factual Notes — Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Anthropic‑Infosys partnership announced “just yesterday” to serve Indian enterprises”

The Fireside Conversation notes that Anthropic and Infosys announced a partnership “just yesterday” to serve Indian enterprises, confirming the report’s statement [S73].

Additional Context (medium)

“Microsoft pledged $20 bn for India’s AI agenda”

The knowledge base records Microsoft’s commitment to train 20 million Indians by 2030, which is a skills-focused initiative rather than a $20 bn financial pledge, providing additional nuance to the claim [S71].

Confirmed (high)

“Panel featured Vivek Kanneja representing CDAC and Nitin Bajaj of Intel”

Vivek Kanneja is identified as the Executive Director of CDAC and Nitin Bajaj as Director, Sales and Marketing at Intel in the source material, confirming their roles on the panel [S1] and [S4].

Confirmed (high)

“CDAC has built the PARAM family of super‑computers providing AI compute infrastructure”

The source states that CDAC has built the Parham (PARAM) supercomputing series that provides AI compute infrastructure for government departments and national missions, confirming the report’s claim [S1].

Additional Context (medium)

“CDAC’s super‑computing infrastructure supports government departments and national missions”

The knowledge base adds that the PARAM series is specifically used by government departments and national missions, giving extra detail to the report’s description of CDAC’s mandate [S1].

External Sources (79)
S1
https://dig.watch/event/india-ai-impact-summit-2026/fireside-chat-intel-tata-electronics-cdac-asia-group-_-india-ai-impact-summit — And growing enterprise adoption. Anthropic announced its partnership with Infosys, Tata, OpenAI. I’m sure you’re all wat…
S2
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managin…
S3
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — First I have to my immediate left Mr. Vivek Kanneja. Vivek is the executive director of the Center for Development of Ad…
S4
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — “For the next session, we have a fireside chat between Mr. Vivek Kaneja, Executive Director, CDAT, Mr. Nitin Bajaj, Dire…
S5
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — -Nitin Bajaj: Director, Sales and Marketing, Intel (mentioned for upcoming fireside chat session) Thank you. Thank you …
S6
https://dig.watch/event/india-ai-impact-summit-2026/fireside-chat-intel-tata-electronics-cdac-asia-group-_-india-ai-impact-summit — First I have to my immediate left Mr. Vivek Kanneja. Vivek is the executive director of the Center for Development of Ad…
S7
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — – Dr. Vivek Khaneja- Nitin Bajaj Dr. Khaneja advocates for a uniform approach to sovereignty focusing on software contr…
S8
Contents — – 2 Incentivise start-ups and support the research base to spin out more quantum businesses, in line with specific growt…
S9
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — Thank you, Mridu, and thank you, everyone, for joining us for the unveiling of this important blueprint. As we have hear…
S10
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And then we are focusing on applications. Applications is AI for everyday tasks, for making things better for people. An…
S11
Efforts to improve energy efficiency in high-performance computing for a Sustainable Future — The demand for high-performance computing (HPC) has surged due to technological advancements like machine learning, geno…
S12
Conversation: 01 — “locally in the UAE the main focus is AI for quality of life improvement … we believe that this will translate into ev…
S13
From KW to GW Scaling the Infrastructure of the Global AI Economy — So I think that early on AI got a bad rap. It was going to be the computers were going to take over and blow up the eart…
S14
Workers report major gains from AI use — ChatGPT nowreaches more than 800 million userseach week, and this rapid uptake is fuelling a surge in enterprise AI adop…
S15
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Data governance, security concerns, and potential token pricing shocks are major barriers preventing pilot projects from…
S16
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S17
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — And then the biggest, one of the biggest barriers to scale has been the lack of discipline or willingness to say, I’m go…
S18
Panel Discussion Data Sovereignty India AI Impact Summit — This example demonstrates what Gupta termed “partnership not dependence” – utilizing “the best of foreign technologies” …
S19
Building Indias Digital and Industrial Future with AI — This comment shifted the discussion from abstract policy concepts to concrete technical and operational realities. It pr…
S20
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S21
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues crea…
S22
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — To address this, companies are exploring innovative solutions such aspower capping(limiting processor power to 60-80% of…
S23
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S24
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The government R&D and enter…
S25
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S26
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S27
Driving Indias AI Future Growth Innovation and Impact — And lastly, goes back to the same thing. And maybe I’ll use the same example. You know, we had the UPI of money. We need…
S28
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And I have a deep belief that the entrepreneurial ecosystem in India is going to deliver some incredible global leaders …
S29
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S30
Agents of Change AI for Government Services & Climate Resilience — Srinivas Tallapragada introduced an important distinction between strategic sovereignty and technical sovereignty that p…
S31
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S32
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S33
Practical Toolkits for AI Risk Mitigation for Businesses — Soujanya Sridharan:Thank you very much, Nusrat. We were indeed very excited to have participated in doing this piece of …
S34
Building the Next Wave of AI_ Responsible Frameworks & Standards — This question addresses the economic viability and strategic considerations for businesses choosing between different mo…
S35
Enterprise AI adoption stalls despite heavy investment — AI has moved from experimentation to expectation, yet many enterprise AI rolloutscontinue to stall. Boards demand return…
S36
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The discussion revealed a common theme across different contexts: the gap between policy ambition and implementation cap…
S37
Powering AI Global Leaders Session AI Impact Summit India — “And what that really means is the technology continues to accelerate.”[14]. “going to become even faster and faster.”[1…
S38
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — I know we’re going to have a little bit of time for questions, I hope, at the end. What is IAS? So I’ve talked about wha…
S39
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Energy efficiency improvements offer significant opportunities for reducing environmental impact while controlling opera…
S40
The Foundation of AI Democratizing Compute Data Infrastructure — A lot of engineers working on AI in industry these days, even in academia, are actually focusing on how can I make this …
S41
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — “AI capability and resilience increasingly depend on where trusted compute is physically located and how it is governed”…
S42
Panel Discussion Data Sovereignty India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications suggest a m…
S43
Building Sovereign and Responsible AI Beyond Proof of Concepts — Sovereignty dimension focuses on control over data, models, and security measures
S44
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Dr. Khaneja outlined CDAC’s substantial progress in building India’s supercomputing backbone through the PARAM series. T…
S45
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Jensen at Davos called this the largest infrastructure build -out in human history. Two weeks ago, 54 countries launched…
S46
India allocates $1.24 billion for AI infrastructure boost — India’s government has greenlit a ₹10,300 Crore ($1.24 billion) fundingprojectto enhance the country’s AI infrastructure…
S47
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Data governance, security concerns, and potential token pricing shocks are major barriers preventing pilot projects from…
S48
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — And then the biggest, one of the biggest barriers to scale has been the lack of discipline or willingness to say, I’m go…
S49
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S50
https://dig.watch/event/india-ai-impact-summit-2026/fireside-chat-intel-tata-electronics-cdac-asia-group-_-india-ai-impact-summit — But yes, as an aspirational goal, just to let you know, CEDAC is designing its own GPGPU based on RISC -V. We will proba…
S51
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — This comment reframed the entire sovereignty discussion by identifying compute infrastructure as the critical bottleneck…
S52
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S53
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — To address this, companies are exploring innovative solutions such aspower capping(limiting processor power to 60-80% of…
S54
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues crea…
S55
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S56
Power demands reshape future of data centres — As AI and cloud computingdemand surges, Siemens is tackling critical energy and sustainability challenges facing the dat…
S57
Partner2Connect High-Level Dialogue — The tone was consistently optimistic and collaborative throughout the discussion. It began with celebratory announcement…
S58
Next-Gen Industrial Infrastructure / Davos 2025 — The tone was largely optimistic and forward-looking, with speakers enthusiastically sharing their visions and initiative…
S59
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S60
Keynote-Rishi Sunak — The tone was consistently optimistic and inspirational throughout. Sunak maintained an enthusiastic, forward-looking per…
S61
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began with an optimistic, exploratory tone as panelists shared different models and success stories. The …
S62
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S63
Day 0 Event #257 Enhancing Data Governance in the Public Sector — The discussion maintained a pragmatic and collaborative tone throughout, with speakers acknowledging both opportunities …
S64
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S65
Host Country Open Stage — Low to moderate disagreement level. The speakers were largely aligned on identifying problems (aging populations, health…
S66
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S67
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S68
Indias AI Leap Policy to Practice with AIP2 — The discussion maintained a constructive and collaborative tone throughout, with speakers building on each other’s point…
S69
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S70
Keynote-Ankur Vora — This comment provides crucial context that legitimizes India’s leadership role in AI governance and demonstrates how pas…
S71
Welfare for All Ensuring Equitable AI in the Worlds Democracies — -Democratizing AI Access and Preventing Digital Divide: Concerns about AI’s economic value concentrating in Western econ…
S72
Keynote Adresses at India AI Impact Summit 2026 — And critically, India brings strength. Peace doesn’t come from hoping adversaries will play fair. We all know they won’t…
S73
Fireside Conversation: 01 — The conversation revealed concrete collaborative initiatives, including a partnership between Anthropic and Infosys anno…
S74
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — And as far as the question about data center, I think the enablement of the data centers or AI is hardware driven. Becau…
S75
OpenAI turns to Google Cloud in shift from solo AI race — OpenAIhas enteredinto an unexpected partnership with Google, using Google Cloud to support its growing AI infrastructure…
S76
UNSC meeting: Artificial intelligence, peace and security — Switzerland:Thank you, Madam President. We are grateful to the Secretary General, Antonio Guterres, for participating in…
S77
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Economic development and social growth. and the Three Sutras of People, Planet and Progress. This summit is focusing ver…
S78
From India to the Global South_ Advancing Social Impact with AI — Minister Chaudhary announced the PM Setu scheme, allocating 60,000 crores to transform India’s Industrial Training Insti…
S79
Quantum for Good: Shaping the future of quantum – What happens next? — Leandro Aolita: Good morning, everybody. My name is Leandro Aolita. I am the chief researcher of the Quantum Research Ce…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Vivek Kanneja
8 arguments · 157 words per minute · 1448 words · 551 seconds
Argument 1
Overview of PARAM supercomputers and current capacity (Vivek Kanneja)
EXPLANATION
Vivek explained that CDAC, under the Ministry of Electronics and IT, has built a series of PARAM supercomputers since the late 1980s. The current installed capacity across India totals about 48 petaflops, providing AI compute resources for various national missions.
EVIDENCE
He described CDAC’s mandate to develop supercomputing capacity under the National Supercomputing Mission, noting the evolution from the PARAM 8000 to the current PARAM series and stating that roughly 48 petaflops of supercomputers are installed in the country [45-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Fireside Chat notes that CDAC’s PARAM series provides about 48 petaflops of computing capacity across the National Knowledge Network, serving roughly 15,000 researchers [S2].
MAJOR DISCUSSION POINT
AI compute infrastructure overview
Argument 2
Expansion plans to 100 PFLOPS and support for researchers, MSMEs, and startups (Vivek Kanneja)
EXPLANATION
Vivek outlined plans to double the nation’s supercomputing power to about 100 petaflops by the end of the year, with 60 installations. He highlighted that the infrastructure serves researchers, MSMEs, and startups through initiatives like Paramuthkarsh, enabling applications ranging from drug discovery to weather prediction.
EVIDENCE
He said the capacity will be augmented to about 100 petaflops by year-end with 60 installations, that about 15,000 researchers run jobs on the National Knowledge Network, and that the Paramuthkarsh facility in Bangalore is open to startups and MSMEs for workloads such as drug discovery, bioinformatics, protein folding, molecular modeling, weather prediction, oil exploration, and CFD [52-60].
MAJOR DISCUSSION POINT
Supercomputing expansion and user base
Argument 3
Lack of real‑world MLOps expertise and data‑quality challenges cause POC‑to‑production gaps (Vivek Kanneja)
EXPLANATION
Vivek argued that many AI projects stall at the proof‑of‑concept stage because they rely on curated datasets and lack practical MLOps experience. When confronted with messy real‑world data and ROI pressures, organizations struggle to move to production.
EVIDENCE
He noted that people are happy with POCs trained on curated data, but real-life deployments encounter unclean data, insufficient MLOps expertise, and ROI considerations that force difficult hardware and deployment choices [95-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
During the same session, participants highlighted that while POCs on curated data are easy, real-world deployments stumble over missing, skewed, or noisy data and the need for MLOps skills [S2].
MAJOR DISCUSSION POINT
Challenges transitioning from POC to production
AGREED WITH
Nitin Bajaj
Argument 4
Full silicon‑to‑application sovereignty is unrealistic now; focus on controlling models and applications while sourcing chips externally, with a long‑term RISC‑V GPU goal (Vivek Kanneja)
EXPLANATION
Vivek stated that achieving complete end‑to‑end sovereignty is not feasible because India lacks the IP and advanced fabs for cutting‑edge silicon. He suggested a pragmatic approach: source chips externally while retaining control over models, orchestration, and applications, and mentioned a plan to develop a RISC‑V based GPU by 2029‑30.
EVIDENCE
He explained that India does not currently have the capability to design and fabricate advanced GPUs, so the strategy is to use external silicon (e.g., NVIDIA, Intel, AMD) while keeping critical choke points under Indian control, and that CDAC aims to design its own RISC-V GPGPU by 2029-30 [115-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vivek emphasized a pragmatic sovereignty approach-using external silicon but retaining control over models-mirroring comments on sovereign model development in the summit keynote [S2][S10].
MAJOR DISCUSSION POINT
Sovereignty versus external technology dependence
AGREED WITH
Amanraj Khanna, Nitin Bajaj
Argument 5
Academic training is strong theoretically but weak in practical deployment, MLOps, and handling messy data; curriculum reform needed (Vivek Kanneja)
EXPLANATION
Vivek highlighted that Indian engineering graduates possess solid theoretical knowledge but lack hands‑on experience with large‑scale model deployment, MLOps, and real‑world data challenges. He called for curriculum changes and capstone projects that expose students to peta‑scale data and operational constraints.
EVIDENCE
He observed that while many engineers are bright and mathematically proficient, they are trained mainly on curated datasets and standard test cases, lacking exposure to missing, skewed, or noisy data, real-time constraints, and security considerations, suggesting the need for practical capstone projects [173-182].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion called for curriculum changes to give engineers hands-on experience with noisy data, real-time constraints, and MLOps pipelines [S2].
MAJOR DISCUSSION POINT
Talent and capability gap in AI education
AGREED WITH
Nitin Bajaj
DISAGREED WITH
Nitin Bajaj
Argument 6
Power‑aware design, liquid cooling, and low PUE (~1.2) are being used; need benchmarks for energy per token and model compression (Vivek Kanneja)
EXPLANATION
Vivek described how CDAC incorporates power‑aware VLSI techniques, multiple power islands, and clock‑gating, along with liquid and water cooling to achieve a PUE around 1.2. He emphasized the need for benchmarks on energy per token and model compression to further improve efficiency.
EVIDENCE
He explained that modern designs use power islands and clock-tree gating, that CDAC solutions employ a 70/30 liquid-to-water cooling mix moving toward pure liquid cooling, resulting in a PUE of roughly 1.2, and called for benchmarking energy per token and developing compressed, power-aware models [188-205].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vivek described CDAC’s power-aware VLSI, liquid-cooling strategy achieving a PUE around 1.2, and the need for energy-per-token benchmarks; broader HPC sustainability concerns are discussed in the energy-efficiency review [S2][S11].
MAJOR DISCUSSION POINT
Energy efficiency and sustainability of AI compute
AGREED WITH
Nitin Bajaj
Argument 7
Widespread deployment of AI across workflows, improving everyday life (Vivek Kanneja)
EXPLANATION
Vivek summarized his vision of success as AI being integrated into many workflows, making daily life simpler and more enjoyable for citizens.
EVIDENCE
He succinctly stated, “Success for me would be AI being actually deployed in a lot of workflows and making the life much simpler and enjoyable for us” [223].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External commentary notes AI’s role in simplifying daily tasks and enhancing quality of life, echoing the vision of broad workflow integration [S13][S14].
MAJOR DISCUSSION POINT
Vision of AI success
AGREED WITH
Nitin Bajaj
Argument 8
CDAC provides not only compute capacity but also domain expertise and hands‑on support to government agencies and startups, facilitating AI project implementation.
EXPLANATION
Beyond building supercomputers, Vivek emphasizes that CDAC offers specialized knowledge across multiple domains and actively assists agencies and startups in applying AI to real‑world problems.
EVIDENCE
He notes, “We also have a lot of expertise in various domains which we have developed in-house… we are hand-holding a lot of these agencies, a lot of government agencies are working with us on this” [61-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel highlighted CDAC’s hands-holding of government agencies and startups, offering domain expertise beyond raw compute resources [S2].
MAJOR DISCUSSION POINT
Domain support and consultancy role of CDAC
Nitin Bajaj
7 arguments · 163 words per minute · 1810 words · 665 seconds
Argument 1
ROI uncertainty, deployment model choices (on‑prem, cloud, edge) hinder scaling from pilot to production (Nitin Bajaj)
EXPLANATION
Nitin explained that enterprises face uncertainty about return on investment and must decide among on‑prem, cloud, or edge deployment models. This indecision, combined with cost considerations, prevents many pilots from reaching production scale.
EVIDENCE
He noted that the biggest gap is deciding what to use-on-prem, cloud, or open APIs-and once use cases are defined, the final deployment cost and ROI become critical factors that hinder moving from pilot to production [76-79].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Consulting insights identify ROI concerns, data-governance and token-pricing as key barriers that keep pilots from reaching production scale [S15]; the Fireside Chat also mentions cost-vs-performance decisions affecting deployment choices [S2].
MAJOR DISCUSSION POINT
Enterprise AI adoption barriers
AGREED WITH
Vivek Kanneja
Argument 2
Data‑sovereignty importance varies by industry; decisions balance cost, performance, and edge vs cloud considerations (Nitin Bajaj)
EXPLANATION
Nitin said data sovereignty is crucial for sectors like banking and healthcare, while manufacturing and retail often prioritize speed and performance by using cloud services. Enterprises therefore weigh cost, performance, and deployment location (edge vs cloud) when making decisions.
EVIDENCE
He described how banking and healthcare require strict data sovereignty, whereas manufacturing and retail favor cloud APIs for speed and accuracy, and highlighted edge deployments for OT environments, noting that cost and performance drive the final choice [149-156].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion cited banking and healthcare as sectors where data sovereignty is critical, with cost-performance trade-offs driving cloud versus edge decisions [S2].
MAJOR DISCUSSION POINT
Industry‑specific data sovereignty considerations
AGREED WITH
Vivek Kanneja, Amanraj Khanna
Argument 3
India’s young demographic will rapidly close the AI skill gap; personal upskilling is ongoing (Nitin Bajaj)
EXPLANATION
Nitin pointed out that India’s large, young population (much of it aged 13‑25) provides a demographic advantage that can quickly bridge the AI talent gap. He also mentioned his own efforts to learn from younger colleagues.
EVIDENCE
He stated that India’s booming young population will help close the AI capability gap within a few years and that he personally is learning from the younger generation about AI deployment [184-186].
MAJOR DISCUSSION POINT
Demographic advantage for talent development
AGREED WITH
Vivek Kanneja
DISAGREED WITH
Vivek Kanneja
Argument 4
Intel’s data centers achieve very low PUE (1.06) and employ efficient packaging; careful model selection further reduces power use (Nitin Bajaj)
EXPLANATION
Nitin highlighted Intel’s use of advanced packaging technologies that improve power efficiency by 15% and reported that Intel data centers operate at a PUE of 1.06, the industry’s best. He added that selecting appropriate models for edge or data‑center deployment can further lower power consumption.
EVIDENCE
He mentioned Intel’s RibbonFET and PowerVia technologies, a PUE of 1.06 documented in an Intel white paper, and emphasized that judicious model selection helps conserve energy [207-216].
MAJOR DISCUSSION POINT
Energy efficiency in Intel’s AI infrastructure
AGREED WITH
Vivek Kanneja
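The PUE figures cited on both sides of this exchange (CDAC ~1.2, Intel 1.06) follow from a simple ratio, which the sketch below illustrates. The kWh figures are hypothetical, chosen only so the ratios land near the values cited in the session; PUE itself is the standard metric of total facility energy over IT equipment energy, with 1.0 as the theoretical ideal.

```python
# Illustrative sketch: Power Usage Effectiveness (PUE).
# PUE = total facility energy / IT equipment energy; 1.0 is the ideal,
# meaning every watt drawn goes to compute rather than cooling or overhead.
# Facility figures below are hypothetical, picked to match the cited ratios.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return PUE, the ratio of total facility energy to IT energy."""
    return total_facility_kwh / it_equipment_kwh

# A liquid-cooled site drawing 1200 kWh overall for 1000 kWh of IT load:
print(round(pue(1200, 1000), 2))  # → 1.2  (CDAC's cited ballpark)
# A highly optimized site drawing 1060 kWh for the same IT load:
print(round(pue(1060, 1000), 2))  # → 1.06 (Intel's cited figure)
```

The gap between 1.2 and 1.06 means roughly 14% less non-compute overhead per unit of IT load, which is why both speakers treat small PUE differences as significant at data-center scale.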
Argument 5
Mass‑scale AI adoption enabling even small vendors to leverage Indic models and increase public intelligence (Nitin Bajaj)
EXPLANATION
Nitin envisioned a future where India moves from low data usage to leading the world, allowing even small vendors like a street vegetable seller to benefit from AI. He emphasized the role of Indic models and widespread AI deployment in raising public intelligence.
EVIDENCE
He noted that India has risen from rank 150 to number one in data usage, and that when small vendors can up-level using AI, along with Indic models supporting diverse use cases, mass-scale AI deployment will have a large societal impact [225-231].
MAJOR DISCUSSION POINT
Vision of AI success for the broader economy
AGREED WITH
Vivek Kanneja
Argument 6
Intel’s “frugal AI” strategy leverages CPU‑centric architectures with integrated GPU/NPU to run large language models efficiently, reducing reliance on dedicated GPUs.
EXPLANATION
Nitin describes how Intel’s CPUs can handle 7‑8 billion‑parameter models on the edge and up to 20‑80 billion‑parameter models in data centres, offering a cost‑effective alternative to GPU‑only solutions.
EVIDENCE
He explains that “the Intel core and ultra core CPUs they have a GPU, NPU and a CPU all combined in a single processor which allows you enough capability to run maybe a 7 or 8 billion parameter model… Xeon processors today are able to run a 20 billion parameter very easily, up to 80 billion depending on the use case” [156-160].
MAJOR DISCUSSION POINT
Frugal AI hardware approach
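A rough memory calculation shows why the model sizes Nitin cites (7‑8B on the edge, 20B+ on Xeon-class servers) are plausible on CPU-centric hardware. The sketch below is a hedged back-of-the-envelope estimate of weight storage only; real deployments also need memory for activations and KV cache, so these are lower bounds, and the precision choices (FP16, 4-bit quantization) are common practice rather than anything stated in the session.

```python
# Back-of-the-envelope sketch: approximate memory needed just to hold a
# model's weights at different numeric precisions. Activations and KV
# cache add more on top, so treat these as lower bounds.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GB (using 1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 7B-parameter model, the edge-class size mentioned in the session:
print(weight_memory_gb(7, 2.0))   # FP16 (2 bytes/param)  → 14.0 GB
print(weight_memory_gb(7, 0.5))   # 4-bit quantized       → 3.5 GB
# A 20B-parameter model, the Xeon-class size mentioned:
print(weight_memory_gb(20, 0.5))  # 4-bit quantized       → 10.0 GB
```

At 4-bit precision a 7B model fits comfortably in the unified memory of a laptop-class SoC, and a quantized 20B model fits within typical server RAM, which is the arithmetic underlying the "frugal AI" claim that dedicated GPUs are not always required.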
Argument 7
The rapid turnover of AI models and APIs creates uncertainty for enterprises, making it hard to select stable deployment models and contributing to pilot stagnation.
EXPLANATION
Nitin points out that the speed at which models evolve forces enterprises to constantly reassess deployment choices (on‑prem, cloud, edge), which hampers progress from pilot to production.
EVIDENCE
He remarks, “the entire AI journey is changing so rapidly models are being dropped at a speed of light… enterprises are trying to figure out what is the best deployment model…” [83-84] and later adds that “people are happy with the POCs… but once it actually hits real-life situations… the reality hits that, no, it’s not that simple” [95-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists observed that AI models are being deprecated at “speed of light,” causing enterprises to hesitate on stable deployment pathways [S2].
MAJOR DISCUSSION POINT
Model volatility and deployment uncertainty
Amanraj Khanna
4 arguments · 154 words per minute · 1231 words · 478 seconds
Argument 1
India’s AI strategy must translate massive policy announcements and infrastructure investments into real enterprise adoption and scale.
EXPLANATION
Amanraj points out that while the government has announced significant initiatives such as Pax Silica, multi‑billion‑dollar commitments from Microsoft and Google, and partnerships like Anthropic‑Infosys, the critical challenge remains moving from announcement to deployment at scale.
EVIDENCE
He lists the policy announcements and investment figures (e.g., Pax Silica, $20 billion from Microsoft, $15 billion from Google) and then stresses that “announcement is one thing; deployment and then achieving scale is quite another” [8-19].
MAJOR DISCUSSION POINT
Policy‑deployment gap
Argument 2
Data sovereignty and technology dependence are central concerns for foreign investors, highlighting the need for a balanced domestic‑global sourcing approach.
EXPLANATION
Amanraj observes that every conversation with foreign investors inevitably raises issues of sovereignty and dependence on external technology, suggesting that India’s AI roadmap must address these concerns explicitly.
EVIDENCE
He states, “I haven’t had a single conversation here with a foreign investor which hasn’t talked about sovereignty or dependency” [107-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Every conversation with foreign investors reportedly raises sovereignty and dependency issues, underscoring the need for a balanced strategy [S2].
MAJOR DISCUSSION POINT
Sovereignty and dependency concerns
Argument 3
Energy consumption and sustainability of AI compute infrastructure must be addressed as part of India’s AI rollout.
EXPLANATION
Amanraj raises the question of how supercomputing and large data‑center operations impact energy use and sustainability, prompting a discussion on cooling techniques and power efficiency.
EVIDENCE
He asks, “How do we think about energy and sustainability implications, especially in the Indian context?” directed to Vivek [186-187].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The sustainability of high-performance AI compute, including power-aware designs and cooling techniques, is discussed in the HPC energy-efficiency review [S11] and echoed by the summit’s PUE discussion [S2].
MAJOR DISCUSSION POINT
Energy and sustainability of AI infrastructure
Argument 4
Success for India’s AI over the next three to five years should be measured by widespread AI deployment across workflows that tangibly improve everyday life and societal intelligence.
EXPLANATION
In his closing question, Amanraj asks panelists to define success succinctly, implying that the benchmark for progress is broad, practical AI integration rather than isolated pilots.
EVIDENCE
He frames the final query: “When we assess India’s progress over the next three, five years, what does success look like to each of you?” [220-222].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources describe AI’s growing impact on daily workflows and societal intelligence, aligning with the panel’s success metric of broad, tangible AI integration [S13][S14].
MAJOR DISCUSSION POINT
Defining AI success metrics
Agreements
Agreement Points
AI projects frequently stall at the proof‑of‑concept stage because of data‑quality issues, lack of MLOps expertise and ROI pressures, hindering movement to production scale.
Speakers: Vivek Kanneja, Nitin Bajaj
Lack of real‑world MLOps expertise and data‑quality challenges cause POC‑to‑production gaps (Vivek Kanneja) ROI uncertainty, deployment model choices (on‑prem, cloud, edge) hinder scaling from pilot to production (Nitin Bajaj)
Both panelists note that while pilots are easy on curated data, real-world deployments encounter messy data and insufficient operational skills, and the uncertainty around ROI and deployment choices prevents scaling [95-100][76-79][93-94].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple industry analyses note that up to 80 % of AI pilots fail to reach production due to data quality, governance gaps and lack of MLOps expertise, and boards demand clear ROI, confirming the stall at PoC stage [S32][S35][S24].
Cost and return‑on‑investment considerations are the primary drivers of enterprise AI deployment decisions.
Speakers: Nitin Bajaj, Vivek Kanneja
ROI uncertainty, deployment model choices (on‑prem, cloud, edge) hinder scaling from pilot to production (Nitin Bajaj) ROI considerations when choosing on‑prem vs GPU vs VM affect deployment choices (Vivek Kanneja)
Both agree that enterprises must evaluate the final cost and ROI before moving beyond pilots, influencing choices between on-prem, cloud, edge or GPU resources [76-79][99-100].
POLICY CONTEXT (KNOWLEDGE BASE)
Enterprise AI investment decisions are driven by cost-benefit analysis and board-level ROI expectations, as highlighted in studies on AI adoption economics and the need for model-size trade-offs [S34][S35][S32].
Energy efficiency and sustainability of AI compute infrastructure are critical, with both parties highlighting low PUE designs and the need for benchmarks.
Speakers: Vivek Kanneja, Nitin Bajaj
Power‑aware design, liquid cooling, and low PUE (~1.2) are being used; need benchmarks for energy per token and model compression (Vivek Kanneja) Intel’s data centers achieve very low PUE (1.06) and employ efficient packaging; careful model selection further reduces power use (Nitin Bajaj)
Vivek describes CDAC’s power-aware VLSI, liquid-cooling and PUE≈1.2, while Nitin cites Intel data-center PUE≈1.06 and efficient packaging, both calling for energy-per-token benchmarks and judicious model choices [188-205][207-216].
POLICY CONTEXT (KNOWLEDGE BASE)
The sustainability of AI compute is emphasized in the Green AI discourse and in national strategies calling for low PUE data-center designs and benchmark development for carbon-aware AI workloads [S25][S39][S41][S24].
There is a notable talent and capability gap in AI, especially in practical deployment and MLOps, though demographic factors may help close it soon.
Speakers: Vivek Kanneja, Nitin Bajaj
Academic training is strong theoretically but weak in practical deployment, MLOps, and handling messy data; curriculum reform needed (Vivek Kanneja) India’s young demographic will rapidly close the AI skill gap; personal upskilling is ongoing (Nitin Bajaj)
Vivek points to insufficient hands-on training for engineers, while Nitin highlights India’s youthful population as a catalyst for rapidly bridging the skill gap [173-182][184-186].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s large, young workforce is identified as a potential remedy for the AI talent shortage, yet current capacity gaps in MLOps and deployment skills are documented in multiple governance and skills-building panels [S28][S36][S37][S38].
Sovereignty concerns must be balanced with pragmatic reliance on global technology components; control over models and applications is emphasized.
Speakers: Vivek Kanneja, Amanraj Khanna, Nitin Bajaj
Full silicon‑to‑application sovereignty is unrealistic now; focus on controlling models and applications while sourcing chips externally, with a long‑term RISC‑V GPU goal (Vivek Kanneja) Data sovereignty and technology dependence are central concerns for foreign investors, highlighting the need for a balanced domestic‑global sourcing approach (Amanraj Khanna) Data‑sovereignty importance varies by industry; decisions balance cost, performance, and edge vs cloud considerations (Nitin Bajaj)
All three acknowledge that complete end-to-end sovereignty is not feasible; instead, India should retain control over critical layers (models, orchestration, applications) while using external silicon, with industry-specific data-sovereignty requirements shaping choices [115-138][111-114][149-156].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks distinguish strategic sovereignty (control over data, models, governance) from technical reliance on global components, reflecting a balanced approach endorsed in recent AI sovereignty discussions [S30][S42][S43][S41].
Success for India’s AI ecosystem is envisioned as mass‑scale deployment that improves everyday life and empowers even small enterprises.
Speakers: Vivek Kanneja, Nitin Bajaj
Widespread deployment of AI across workflows, improving everyday life (Vivek Kanneja) Mass‑scale AI adoption enabling even small vendors to leverage Indic models and increase public intelligence (Nitin Bajaj)
Vivek defines success as AI being embedded in many workflows, while Nitin envisions a future where even a street vendor can benefit from AI, both stressing broad societal impact [223][225-231].
Similar Viewpoints
Both see the transition from pilot to production as blocked by operational skill gaps and unclear ROI, requiring better MLOps capabilities and clearer cost models [95-100][76-79].
Speakers: Vivek Kanneja, Nitin Bajaj
Lack of real‑world MLOps expertise and data‑quality challenges cause POC‑to‑production gaps (Vivek Kanneja) ROI uncertainty, deployment model choices (on‑prem, cloud, edge) hinder scaling from pilot to production (Nitin Bajaj)
Both stress that energy efficiency is essential for AI infrastructure and that low PUE designs and careful model choices are key strategies [188-205][207-216].
Speakers: Vivek Kanneja, Nitin Bajaj
Power‑aware design, liquid cooling, low PUE (~1.2) and need for energy benchmarks (Vivek Kanneja) Intel’s data centers achieve PUE 1.06 and efficient packaging; model selection reduces power use (Nitin Bajaj)
Both acknowledge sovereignty concerns but agree that a pragmatic mix of domestic control and external technology is necessary [115-138][111-114].
Speakers: Vivek Kanneja, Amanraj Khanna
Full silicon‑to‑application sovereignty is unrealistic; pragmatic approach using external chips while retaining control (Vivek Kanneja) Data sovereignty and technology dependence are central concerns for foreign investors (Amanraj Khanna)
Unexpected Consensus
Both a government research institute (CDAC) and a private‑sector hardware vendor (Intel) prioritize ultra‑low PUE designs and cooling innovations despite operating in different domains.
Speakers: Vivek Kanneja, Nitin Bajaj
Power‑aware design, liquid cooling, and low PUE (~1.2) are being used; need benchmarks for energy per token and model compression (Vivek Kanneja) Intel’s data centers achieve very low PUE (1.06) and employ efficient packaging; careful model selection further reduces power use (Nitin Bajaj)
It is surprising that both a public supercomputing centre and a commercial data-center operator independently emphasize comparable PUE targets (≈1.2 vs 1.06) and similar cooling strategies, indicating a convergent view on sustainability across sectors [188-205][207-216].
POLICY CONTEXT (KNOWLEDGE BASE)
Both CDAC and Intel have publicly committed to ultra-low PUE cooling solutions, demonstrating cross-sector alignment on energy-efficient AI hardware as noted in the AI Impact Summit consensus [S24][S39][S41].
Overall Assessment

The panel shows a strong consensus across policy, research and industry on the core challenges of AI adoption in India: moving from pilots to production, managing cost/ROI, addressing talent gaps, ensuring energy‑efficient infrastructure, and balancing sovereignty with global technology. All agree that success will be measured by widespread, societally beneficial AI deployment.

High consensus – the alignment of viewpoints suggests that coordinated policy, capacity‑building and industry initiatives can be pursued with shared understanding of priorities and constraints.

Differences
Different Viewpoints
Extent and solution to the AI talent and capability gap in India
Speakers: Vivek Kanneja, Nitin Bajaj
Academic training is strong theoretically but weak in practical deployment, MLOps, and handling messy data; curriculum reform needed (Vivek Kanneja) India’s young demographic will rapidly close the AI skill gap; personal upskilling is ongoing (Nitin Bajaj)
Vivek stresses a serious, structural shortage of hands-on AI skills that requires changes to engineering curricula and capstone projects [173-182]. Nitin counters that the country’s large, youthful population will quickly bridge the gap, and he is personally learning from younger colleagues [184-186].
POLICY CONTEXT (KNOWLEDGE BASE)
The extent of India’s AI talent gap and proposed remediation pathways are debated, with reports highlighting current skill shortages versus demographic advantages for rapid upskilling [S28][S36][S37][S38].
Strategic path to AI sovereignty and chip design
Speakers: Vivek Kanneja, Nitin Bajaj
Full silicon‑to‑application sovereignty is unrealistic now; India should source chips externally while retaining control over models and applications, with a long‑term goal of a RISC‑V GPU by 2029‑30 (Vivek Kanneja) Intel’s “frugal AI” relies on existing CPU‑centric architectures with integrated GPU/NPU, suggesting a focus on leveraging current foreign silicon rather than developing a domestic GPU (Nitin Bajaj)
Vivek proposes a pragmatic approach that uses imported chips but keeps critical layers under Indian control, planning a home-grown RISC-V GPU in the future [115-138]. Nitin emphasizes using Intel’s current CPU-based solutions with integrated accelerators, implying reliance on existing foreign silicon without a domestic GPU roadmap [156-160].
POLICY CONTEXT (KNOWLEDGE BASE)
Strategic AI sovereignty and indigenous chip design are framed by policy papers distinguishing strategic control from technical implementation, and by industry calls for domestic AI-optimized silicon [S30][S38][S43].
Assessment of energy efficiency in AI compute infrastructure
Speakers: Vivek Kanneja, Nitin Bajaj
CDAC’s supercomputing solutions achieve a Power Usage Effectiveness (PUE) of around 1.2 using liquid and water cooling [198-199] Intel’s data centres operate at a PUE of 1.06, the most efficient reported, thanks to advanced packaging and design [211-212]
Vivek reports a PUE of roughly 1.2 for CDAC installations, indicating good but not best-in-class efficiency [198-199]. Nitin claims Intel’s own facilities reach a superior PUE of 1.06, suggesting a higher benchmark for sustainability [211-212].
POLICY CONTEXT (KNOWLEDGE BASE)
Assessments of AI compute energy efficiency reference the Green AI literature and national benchmarks for PUE and carbon-aware AI, underscoring the need for systematic measurement [S25][S39][S41].
Unexpected Differences
Role of data sovereignty in enterprise AI decisions
Speakers: Amanraj Khanna, Nitin Bajaj
Data sovereignty and localization are central policy concerns that shape AI infrastructure choices (Amanraj Khanna) Data‑sovereignty importance varies by industry; banking/healthcare need it, but manufacturing/retail often prioritize speed and cost, using cloud services (Nitin Bajaj)
The moderator frames data sovereignty as a dominant, overarching policy driver for all AI deployments [107-108]. Nitin, however, treats it as a sector-specific factor, arguing that many enterprises will choose cloud or edge solutions despite sovereignty concerns [149-156]. This divergence between a universal policy emphasis and a nuanced industry-specific view was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Data sovereignty’s impact on enterprise AI choices is addressed in summit panels that outline a balanced model of national data control coupled with global collaboration, informing policy guidance for Indian enterprises [S42][S30][S43].
Overall Assessment

The panel showed convergence on the existence of barriers to scaling AI—technical (MLOps, data quality) and economic (ROI, deployment choices). However, clear disagreements emerged around talent development, the path to technological sovereignty, and assessments of energy efficiency. An unexpected split appeared on how universally data sovereignty should influence enterprise decisions.

Moderate to high. While participants share a common goal of broader AI adoption, they differ on the root causes and optimal policy/technology pathways, indicating that coordinated action will need to reconcile divergent views on skill development, domestic chip strategy, and the weight of sovereignty versus practical performance considerations.

Partial Agreements
Both acknowledge that moving AI projects beyond pilot stages is difficult: Vivek points to technical hurdles such as unclean data and missing MLOps skills [95-100], while Nitin highlights business‑level concerns around ROI and choosing the right deployment model [76-79]. Together they agree that a combination of technical and economic factors stalls large‑scale adoption.
Speakers: Vivek Kanneja, Nitin Bajaj
Lack of real‑world MLOps expertise and data‑quality challenges cause POC‑to‑production gaps (Vivek Kanneja) ROI uncertainty and deployment‑model choices (on‑prem, cloud, edge) hinder scaling from pilot to production (Nitin Bajaj)
Takeaways
Key takeaways
India’s AI compute infrastructure is anchored by CDAC’s PARAM supercomputers, currently ~48 PFLOPS and projected to reach ~100 PFLOPS by year‑end, serving researchers, MSMEs, and startups.
Enterprise AI scaling is hampered by unclear ROI, choice of deployment model (on‑prem, cloud, edge), and a gap between successful POCs and production‑grade MLOps capabilities.
Sovereignty goals are realistic when focusing on control of models, data, and applications while still sourcing silicon externally; a long‑term RISC‑V GPU is planned for 2029‑30.
Data‑sovereignty requirements vary by industry; cost, performance, and edge‑vs‑cloud trade‑offs drive infrastructure decisions.
A talent gap exists: academic curricula are strong in theory but lack practical MLOps, data‑cleaning, and large‑model deployment training; the demographic advantage may close the gap quickly.
Energy efficiency is being addressed through power‑aware chip design, liquid cooling, and low‑PUE data centers (CDAC ~1.2, Intel ~1.06); benchmarking energy per token and model compression are identified as next steps.
Success in the next 3‑5 years is envisioned as widespread AI deployment across workflows, enabling even small vendors to leverage Indic models and improve public intelligence.
Resolutions and action items
CDAC to expand PARAM capacity to ~100 PFLOPS with 60 installations by end of the year.
CDAC to continue development of a home‑grown RISC‑V based GPGPU, targeted for release around 2029‑30.
Call for curriculum reform in Indian engineering colleges to include practical MLOps, data‑quality handling, and large‑model deployment.
Proposal to establish benchmarks for energy consumption per token and to promote model‑compression techniques.
Intel to promote “frugal AI” solutions that leverage CPUs with integrated GPU/NPU for edge and data‑center workloads.
Unresolved issues
How enterprises will definitively choose between on‑prem, cloud, and edge deployments given rapidly evolving models and tooling.
A quantitative framework for ROI calculation that balances cost, performance, and data sovereignty across different industry verticals.
Standardized, industry‑wide MLOps practices and tooling to bridge the POC‑to‑production gap.
The extent to which India can reduce dependence on foreign silicon in the near term while maintaining competitiveness.
Concrete policies or incentives to accelerate talent up‑skilling and retention in AI deployment roles.
Suggested compromises
Adopt a pragmatic sovereignty model: source silicon globally but retain control over AI models, orchestration, and applications.
Utilize existing CPU infrastructure for many inference workloads, reserving GPUs for only the most demanding tasks (the frugal AI approach).
Combine cloud services for rapid prototyping with on‑prem or edge deployments for data‑sensitive or latency‑critical workloads.
Balance data‑localization mandates with cost‑performance considerations by allowing hybrid cloud‑edge architectures where appropriate.
Thought Provoking Comments
Announcement is one thing. Deployment and then achieving scale is quite another.
Sets the central tension of the panel, distinguishing hype from practical implementation and framing the need to move beyond policy announcements.
Guided the conversation toward concrete challenges of adoption, prompting both panelists to discuss real‑world barriers and infrastructure, and establishing the lens through which subsequent remarks were evaluated.
Speaker: Amanraj Khanna
The biggest gap today is what to use, whether to use it on‑prem or go on cloud, use open APIs… there is no single formula. The AI journey is changing so rapidly that enterprises are trying to figure out the best deployment model for ROI.
Highlights the core strategic dilemma for enterprises—balancing speed, cost, and technology choices amid fast‑evolving models—introducing the concept of “frugal AI.”
Shifted the discussion from infrastructure capacity to decision‑making complexity, leading Vivek to elaborate on POC challenges and prompting deeper analysis of cost‑performance trade‑offs.
Speaker: Nitin Bajaj
People are very happy with the POCs… but once it hits real‑life situations where the data needs to be cleaned, you have no proper experience in actual deployments of the MLOps… then the reality hits that, no, it’s not that simple.
Identifies the critical bottleneck where many projects stall—transition from proof‑of‑concept to production—emphasizing practical MLOps and data quality issues.
Created a turning point by pinpointing why pilots fail, which reinforced Nitin’s earlier ROI concerns and set the stage for discussing talent gaps and curriculum reforms.
Speaker: Vivek Kanneja
When you talk of sovereignty, do you want to be completely independent from silicon up to the application? Pragmatically, we can source silicon abroad but keep the stack above it under our control. We are designing our own RISC‑V GPGPU for 2029‑30.
Provides a nuanced, realistic perspective on AI sovereignty, balancing aspirational goals with current capabilities and outlining a concrete roadmap.
Redirected the policy‑technology debate from an abstract ideal to actionable steps, influencing Nitin’s later remarks on industry‑specific data sovereignty and reinforcing the need for a hybrid approach.
Speaker: Vivek Kanneja
For banking or healthcare data sovereignty is very important, but for manufacturing or retail many use cases run on the cloud for speed and accuracy. The key is frugal AI—using CPUs where possible and matching performance to cost, e.g., 15‑20 prompts per second may be sufficient.
Adds depth by showing how data‑localization concerns vary by sector and introduces a cost‑focused deployment strategy, linking back to the earlier “no single formula” point.
Expanded the conversation to sector‑specific strategies, illustrating how policy intersects with business decisions and reinforcing the earlier discussion on ROI and hardware choices.
Speaker: Nitin Bajaj
We have bright engineers with theoretical knowledge, but they lack hands‑on experience with real‑world data, MLOps, and deployment. Curriculum needs capstone projects that handle messy, large‑scale data.
Identifies a systemic talent gap that underpins many of the deployment challenges discussed, calling for educational reform.
Deepened the analysis by linking technical bottlenecks to workforce development, prompting Nitin to note demographic advantages and suggesting a longer‑term solution to the earlier POC‑to‑production issue.
Speaker: Vivek Kanneja
Energy efficiency must be benchmarked per token for training or inference. We’re moving to liquid cooling, achieving PUE around 1.2, and need green norms and power‑aware model design.
Introduces a concrete metric (energy per token) for sustainability, moving the sustainability conversation from vague concerns to measurable targets.
Shifted the dialogue toward quantifiable sustainability goals, leading Nitin to complement with Intel’s own PUE of 1.06 and reinforcing the theme of frugal, efficient AI.
Speaker: Vivek Kanneja
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from high‑level policy announcements to the gritty realities of AI adoption in India. Amanraj’s framing question set the stage, while Nitin’s articulation of the ROI and deployment‑model dilemma highlighted the strategic uncertainty enterprises face. Vivek’s stark description of the POC‑to‑production gap and his candid take on talent and sustainability turned the conversation toward practical bottlenecks and systemic solutions. Their complementary perspectives on sovereignty, sector‑specific data concerns, and frugal AI created a nuanced roadmap that linked policy, infrastructure, talent, and energy considerations. Collectively, these comments redirected the panel from abstract optimism to a grounded, multi‑dimensional view of what success will require in the next three to five years.

Follow-up Questions
What is the energy consumption per token for training and inference across different AI models?
Identifying a standard benchmark for energy per token would help assess and improve the sustainability of AI workloads.
Speaker: Vivek Kanneja
How can Indian engineering curricula be updated to include practical MLOps, large‑model deployment, and handling real‑world data issues?
Current curricula focus on theory; adding hands‑on projects would close the talent gap for AI deployments.
Speaker: Vivek Kanneja
What effective strategies can help enterprises move from POCs to production‑scale AI deployments, including building MLOps capabilities?
Enterprises struggle to scale beyond pilots; research into best practices would accelerate real‑world impact.
Speaker: Vivek Kanneja
How should enterprises decide the optimal deployment model (edge, on‑prem, cloud) while balancing cost, performance, and data‑sovereignty requirements?
Choosing the right deployment architecture is a key barrier; a decision framework would guide cost‑effective adoption.
Speaker: Nitin Bajaj
What are the best practices for ‘frugal AI’—running large models on CPUs versus GPUs—to achieve cost‑effective performance?
Understanding when CPUs suffice can reduce hardware costs and broaden access to AI capabilities.
Speaker: Nitin Bajaj
How can model compression and power‑aware model design reduce energy consumption while maintaining accuracy?
Power efficiency is critical for sustainability; research on compression techniques can lower the energy footprint of AI services.
Speaker: Nitin Bajaj
What metrics and benchmarks should be established to assess AI ROI for Indian enterprises?
A clear ROI framework would help enterprises justify investments and choose appropriate AI solutions.
Speaker: Nitin Bajaj
How will India’s domestic GPU development (RISC‑V based) timeline (2029‑30) impact the AI ecosystem, and what interim solutions are needed?
Understanding the transition path from reliance on foreign GPUs to indigenous ones will inform strategic planning.
Speaker: Vivek Kanneja
How can government and industry collaborate to scale MLOps engineer training and capstone projects for real‑world AI deployment?
Coordinated training initiatives can address the shortage of skilled practitioners needed for large‑scale AI.
Speaker: Vivek Kanneja
What further improvements can be made to supercomputing facility energy efficiency beyond the current PUE of ~1.2?
Even modest gains in PUE can significantly reduce operational costs and environmental impact of national AI infrastructure.
Speaker: Vivek Kanneja

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.