Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit
20 Feb 2026 14:00h - 15:00h
Summary
The panel, moderated by Amanraj Khanna, examined how India can turn its ambitious AI policy announcements into real-world adoption and scale [1][7-9][11-14][20-22]. Khanna highlighted recent commitments such as Microsoft’s $20 billion pledge for India, Google’s $15 billion, and partnerships like Anthropic-Infosys, noting that deployment, not just announcement, is the challenge [11-14][15-18][20-22]. He framed the discussion around two perspectives: the national R&D infrastructure led by CDAC and the needs of large Indian enterprises represented by Intel [21-24][27-35][36-38].
CDAC, operating under the Ministry of Electronics and IT, has built the PARAM family of supercomputers, delivering roughly 48 petaflops today and targeting 100 petaflops by year-end across 60 sites [45-52]. About 15,000 researchers and many MSMEs use these clusters for workloads such as drug discovery, protein folding, weather forecasting, oil exploration and computational fluid dynamics [53-60][56-61]. CDAC also provides hands-on support to government agencies and startups through initiatives like Paramuthkarsh in Bangalore [55-57][62].
Nitin Bajaj explained that Indian enterprises struggle to move from pilots to production because they must decide between on-prem, cloud, or edge deployments while balancing ROI, model selection and data quality [70-78][79-84][86-88]. He noted that many firms have already purchased GPUs but remain in pilot mode due to unclear cost-benefit calculations and rapidly evolving AI models [93-94][95-102]. Bajaj promoted “frugal AI” – leveraging Intel CPUs with integrated GPUs/NPUs to run 7-20 billion-parameter models efficiently, reducing the need for dedicated GPUs in many use cases [156-162][158-161].
Vivek Kanneja argued that full technological sovereignty is unrealistic in the short term; India can import silicon (e.g., NVIDIA, Intel, AMD) while keeping the software stack, models and applications under domestic control [115-124][125-138]. He added that CDAC is developing a RISC-V based GPGPU expected by 2029-30, but until then reliance on external chips will continue [136-138]. Both speakers identified talent gaps: CDAC sees graduates strong in theory but lacking practical MLOps experience, suggesting curriculum reforms and capstone projects [173-182]. Energy efficiency was raised as a critical issue; CDAC employs liquid cooling and power-aware design to achieve a PUE of around 1.2, while Intel reports a data-center PUE of 1.06 and roughly 15% power-efficiency gains from its RibbonFET and PowerVia manufacturing technologies [188-199][207-212].
Kanneja envisions success as AI being embedded in many workflows to simplify lives, whereas Bajaj measures success by widespread, affordable AI use that even a street vendor can leverage, supported by robust Indic models [223][225-231]. The discussion concluded that coordinated advances in infrastructure, enterprise readiness, talent development and sustainable practices are essential for India’s AI ecosystem to mature over the next few years [20-22][173-182][188-205][223][225-231].
Keypoints
Major discussion points
– India’s AI infrastructure and policy momentum – The panel opened by highlighting recent policy announcements and massive private-sector investments (e.g., Microsoft’s $20 bn, Google’s $15 bn) and the role of CDAC’s PARAM supercomputing series, which now provides about 48 petaflops and is slated to reach ~100 petaflops by year-end, serving researchers, MSMEs and national missions with workloads such as drug discovery and weather prediction[8-14][45-53].
– Enterprise hurdles in scaling AI from pilots to production – Nitin explained that Indian firms wrestle with choosing the right deployment model (on-prem, cloud, edge), quantifying ROI, and handling data-quality issues that cause proof-of-concepts to stall. Both speakers stressed the need for robust MLOps, “frugal AI” solutions, and clearer cost-performance trade-offs before large-scale roll-out[70-84][95-102].
– Sovereignty versus global technology dependence – Vivek addressed the practical limits of full domestic control, noting India lacks advanced-node fabs and GPU IP, so a pragmatic approach is to import silicon (NVIDIA, Intel, AMD) while keeping the software, model-orchestration and applications under sovereign control. He also mentioned CDAC’s own RISC-V-based GPGPU prototype expected around 2029-30[115-124][125-138].
– Talent and capability gaps in AI deployment – Both panelists agreed that while India produces many bright engineers, curricula focus on theory rather than real-world MLOps, data cleaning, and large-model deployment, creating a bottleneck that must be addressed through hands-on capstone projects and industry-academia collaboration[173-182][184-185].
– Energy and sustainability of AI compute – The discussion turned to the power demands of supercomputing and data-center AI workloads. Vivek highlighted power-aware chip design, liquid cooling and low PUE (~1.2) for CDAC systems, while Nitin cited Intel’s ultra-efficient data-center PUE of 1.06 and newer RibbonFET and PowerVia manufacturing technologies that improve power efficiency by ~15%[188-199][207-212].
Overall purpose / goal of the discussion
The session was convened to “translate that vision into adoption and scale” – i.e., to bridge the gap between India’s ambitious AI policy and infrastructure (government R&D, supercomputing) and the practical needs and constraints of large Indian enterprises, identifying where the two tracks intersect or diverge and outlining what success should look like in the next few years[20-22][26-27].
Tone of the discussion
– The conversation began enthusiastic and forward-looking, celebrating recent policy wins and investment announcements[5-10].
– It quickly shifted to a pragmatic, candid tone, with speakers openly describing technical constraints, ROI dilemmas, data-quality challenges, and talent shortages[70-84][95-102][173-182].
– Towards the end, the tone became solution-focused and hopeful, emphasizing concrete steps (sovereign stack control, frugal AI hardware, energy-efficient designs) and a vision of widespread AI deployment across Indian society[115-138][188-212][223-231].
Overall, the dialogue moved from high-level optimism to a realistic appraisal of obstacles, and finally to constructive pathways for achieving scalable, sovereign, and sustainable AI in India.
Speakers
– Amanraj Khanna
– Area of Expertise: Technology policy, AI ecosystem bridging government and enterprise.
– Role / Title: Partner and Managing Director for India at the Asia Group; Moderator of the panel. [S2]
– Vivek Kanneja
– Area of Expertise: High-performance computing, supercomputing infrastructure, AI research, cybersecurity, national R&D.
– Role / Title: Executive Director, Center for Development of Advanced Computing (CDAC). [S3][S4]
– Nitin Bajaj
– Area of Expertise: Enterprise AI adoption, sales and technology leadership, cloud/edge/CPU/GPU solutions for large Indian enterprises.
– Role / Title: Director of Sales for Conglomerate Accounts, Intel India. [S5][S6]
Additional speakers:
– Sangeeta Reddy
– Area of Expertise: Healthcare leadership, AI applications in health services.
– Role / Title: Joint Managing Director, Apollo Hospitals.
India’s AI agenda was framed by moderator Amanraj Khanna as a shift from high-profile policy announcements to tangible, large-scale adoption. He opened by noting the “palpable” energy at the summit and highlighted recent commitments – Microsoft’s $20 bn pledge for India, Google’s $15 bn investment and the Anthropic-Infosys partnership – as evidence of a “truly fascinating moment” in the country’s AI ambitions [5-10][11-14][15-18]. He also noted the launch of Pax Silica earlier that morning, underscoring the pace of new AI initiatives [4-5].
The panel brought together Vivek Kanneja, representing CDAC, the national R&D hub under the Ministry of Electronics and IT, and Nitin Bajaj of Intel, speaking for large Indian enterprises [20-24][27-35][36-38].
CDAC’s compute infrastructure
CDAC’s mandate is to deliver supercomputing capacity through the National Supercomputing Mission. It has built the PARAM family of machines, now delivering roughly 48 petaflops across the National Knowledge Network, with a target of about 100 petaflops by year-end through 60 installations [45-62]. Approximately 15,000 researchers run jobs on these clusters, and the infrastructure also supports MSMEs via the Paramuthkarsh centre in Bangalore [45-62]. Typical workloads include drug discovery, bioinformatics, protein folding, molecular modeling, weather prediction, oil exploration, finite-element modeling and computational fluid dynamics, with CDAC providing hands-on assistance to government agencies and startups [57-60][61].
Enterprise adoption challenges
Bajaj explained that Indian firms often stall at the proof-of-concept (POC) stage because they first need to identify concrete use cases, such as smart manufacturing, retail analytics or document search, before confronting the “biggest gap” of choosing an appropriate deployment model (on-prem, cloud or edge) and quantifying return on investment [70-78][79-84]. He noted that even when organisations have purchased GPUs, they remain in pilot mode because the cost of full-scale deployment and the rapid evolution of models create uncertainty [93-102][95-100]. Kanneja highlighted that many projects stall after the POC stage due to data-cleaning and MLOps gaps, a view echoed by Bajaj’s comments on ROI and deployment-model uncertainty [210-215][70-78].
Both panelists stressed cost-effective deployment. Bajaj explicitly branded his approach “frugal AI,” advocating the use of Intel CPUs with integrated GPUs/NPUs to run 7-20 billion-parameter models efficiently, thereby reducing the need for dedicated GPUs [156-162]. Kanneja added that choosing between GPUs and simpler VM setups can also achieve cost-effective outcomes [210-215].
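A rough back-of-the-envelope sketch, illustrative rather than drawn from the session, shows why models in this size range can fit in ordinary CPU memory once quantized; the quantization levels and the assumption that weights dominate the memory footprint are assumptions added for the example.

```python
# Illustrative weight-memory arithmetic for running LLMs on CPUs (not figures from the panel).
# Assumption: model weights dominate the footprint; KV-cache and activations are ignored.

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB for a dense model."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for params in (7, 20, 80):  # parameter counts mentioned in the session
    for label, bytes_pp in (("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)):
        print(f"{params}B @ {label}: ~{weight_memory_gb(params, bytes_pp):.0f} GB of weights")
```

Under these assumptions a 7 billion-parameter model quantized to int4 needs only about 3-4 GB of weights, within reach of a client CPU with an integrated GPU/NPU, while a 20 billion-parameter model at int8 (~20 GB) fits comfortably in server memory; this is the kind of arithmetic behind choosing a CPU over a dedicated GPU for many of the use cases described.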
Sovereignty and chip strategy
When asked about AI sovereignty, Kanneja said that end-to-end independence is not feasible today. India lacks advanced-node fabs and GPU IP, so a pragmatic, short-term approach is to import silicon (e.g., NVIDIA, Intel, AMD) while retaining control over software, model orchestration and applications [115-124]. He noted a longer-term ambition to design a RISC-V-based GPGPU, expected around 2029-30, emphasizing that the interim focus must remain on sovereign control of the stack above the chip [136-138].
Data-sovereignty versus performance
Bajaj pointed out that data-sovereignty requirements differ by sector: banking and healthcare demand localisation, whereas manufacturing and retail often prioritise speed and accuracy by using cloud APIs, sometimes deploying at the edge for latency-sensitive tasks [149-156][157-166]. He illustrated “frugal AI” with a prompt-based engine that can handle 15-20 prompts per second on a CPU, avoiding the expense of a GPU-only solution [164-166].
Talent and skills considerations
Kanneja described a current talent gap: Indian graduates possess strong theoretical foundations but lack practical MLOps experience, exposure to messy real-world data and skills in large-model deployment; he called for curriculum reforms and capstone projects that simulate peta-scale data handling [173-182]. Bajaj, in contrast, highlighted India’s youthful demographic, a large cohort aged roughly 13-25 already exposed to AI, as a catalyst that will rapidly narrow the gap, noting his own learning from younger engineers [184-186]. Thus, both panelists discussed talent considerations, differing on the immediacy of the shortfall.
Energy consumption and sustainability
Kanneja explained that CDAC’s supercomputers employ power-aware VLSI techniques, clock gating, and a mix of liquid and water cooling, achieving a Power Usage Effectiveness (PUE) of roughly 1.2, significantly better than the 1.4-1.5 of conventional water-cooled systems [188-199]. He called for a benchmark of energy consumption per token for both training and inference [200-204]. Complementing this, Bajaj reported Intel’s data-centre PUE of 1.06, achieved in part through manufacturing technologies such as RibbonFET and PowerVia, and stressed that judicious model selection (e.g., using CPUs for 7-8 billion-parameter models) can curtail power demand [207-216].
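For context on the two metrics invoked here, the following sketch works through the arithmetic; only the PUE values of 1.2 and 1.06 come from the panel, while the IT load and token throughput are assumed purely for illustration.

```python
# Illustrative arithmetic for the efficiency metrics raised on the panel (assumed workloads).
# PUE = total facility energy / IT equipment energy; 1.0 would mean zero cooling/power overhead.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility draw implied by an IT load at a given PUE."""
    return it_load_kw * pue

def energy_per_token_joules(facility_kw: float, tokens_per_second: float) -> float:
    """Facility energy spent per token at a given inference throughput."""
    return facility_kw * 1000.0 / tokens_per_second

it_load = 1000.0  # hypothetical 1 MW of IT equipment
for label, pue in (("PUE 1.2 (CDAC figure)", 1.2), ("PUE 1.06 (Intel figure)", 1.06)):
    total = facility_power_kw(it_load, pue)
    print(f"{label}: {total:.0f} kW total, {total - it_load:.0f} kW of cooling/power overhead")

# The energy-per-token benchmark Kanneja calls for, at an assumed 50,000 tokens/s:
print(f"~{energy_per_token_joules(facility_power_kw(it_load, 1.2), 50_000):.0f} J per token at PUE 1.2")
```

The point of such a benchmark is that facility overhead (PUE) and model efficiency (joules per token) compound, so gains on either side directly reduce the power-plant-sized demand the panelists joked about.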
Vision for the next three to five years
Kanneja envisioned AI “deployed in a lot of workflows and making life much simpler and enjoyable for us” [223]. Bajaj expanded the view to a societal scale, stating that India should move from being a top data-consumer to a leader where even a “Sabziwala” (a street vegetable vendor) can leverage AI-driven insights, supported by Indic models and mass-scale deployments [225-231].
In summary, the panel identified four inter-linked pillars for India’s AI future: (1) expanding sovereign-controlled compute infrastructure (PARAM supercomputers and eventual domestic GPUs); (2) enabling enterprises to move beyond pilots through clear ROI frameworks, frugal hardware choices and robust MLOps; (3) addressing the talent pipeline with practical curriculum reforms while leveraging the country’s demographic dividend; and (4) ensuring energy-efficient, sustainable operations via low-PUE designs and energy-per-token benchmarks. Consensus emerged on the POC-to-production bottleneck, the primacy of ROI, and the need for energy-efficient designs, while disagreements persisted around the depth of the talent gap, the optimal path to chip sovereignty and the precise efficiency targets. The session closed with Amanraj thanking the panelists and inviting Sangeeta Reddy of Apollo Hospitals to speak, underscoring the broader health-sector interest in AI [236-239].
India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managing director for India at the Asia Group. Also my privilege to be a moderator here today. I have to say the energy here is still so palpable, even after five days of this. So it’s absolutely brilliant to be here with you all. We’ve had a truly fascinating moment in India’s AI ambitions. Some massive policy announcements. We just had Pax Silica announced this morning. Very exciting indeed. Significant infrastructure investments. Brad Smith, if you heard him yesterday, Microsoft announced $50 billion in the global south alone. That’s after the $20 billion that has been announced for India. Google, as you know, $15 billion. And growing enterprise adoption.
Anthropic announced its partnership with Infosys, Tata, OpenAI. I’m sure you’re all watching this. But announcement is one thing. Deployment and then achieving scale is quite another. So that’s why this panel matters. Today’s conversation brings two critical perspectives to this fundamental question. What does it take to translate that vision into adoption and scale? One of my two distinguished speakers brings unparalleled insight into infrastructure being built through national R&D institutions. The other sees the reality of what India’s largest enterprises actually need when they deploy AI. So we are here to have an honest conversation about these two tracks. Where do they connect and perhaps where they don’t. So with that let me introduce my distinguished panelists. First I have to my immediate left Mr.
Vivek Kanneja. Vivek is the executive director of the Center for Development of Advanced Computing, CDAC as it’s known. And CDAC has built the PARAM supercomputing series providing AI compute infrastructure for government departments and national missions. It conducts cutting-edge research into high-performance computing and cybersecurity, and also trains thousands of engineers annually in advanced computing and AI. Vivek, of course, has held multiple senior leadership positions within CDAC and has guided national initiatives in these critical areas. Welcome, Vivek. I also have to my far left, Nitin Bajaj. Nitin is Director of Sales for Conglomerate Accounts at Intel India. He leads Intel’s engagement with India’s largest enterprises on their digital transformation and AI adoption journeys. He has over 28 years of global experience in sales and technology leadership and orchestrates a broad partner ecosystem.
These include system integrators, ISVs, cloud providers to deliver Intel-based solutions spanning cloud, AI, HPC, 5G, edge, and end-user computing. So in summary, he sees firsthand what drives enterprise infrastructure decisions and what actually prevents companies from moving AI from pilot to true scale. So with those introductions, let’s get right into it. Vivek, why don’t I start with you? So here’s my first question, Vivek. CDAC has built the PARAM supercomputing series and AI compute infrastructure. What compute capabilities does CDAC actually provide today? Who uses them? For what workloads? And what are the key constraints that you see operating within?
Okay, thanks. So as you know, CDAC is a scientific society under the Ministry of Electronics and IT. And one of the mandates that we have is to build supercomputing capacity in the country. This is the mandate which is given to us under the National Supercomputing Mission, where we have developed a series of supercomputers under the brand name PARAM. We started in the late 80s, starting with the PARAM 8000. And now we have the PARAM series of supercomputers which are installed with our own software. About 48 petaflops of supercomputers are installed in the country, and we have been able to build some of these supercomputers of overall performance connected over the National Knowledge Network, or the NKN as it is called.
This capacity is going to be augmented to about 100 petaflops by the end of this year with 60 installations. Most of these installations are today being used by researchers. About 15,000 researchers fire jobs across these machines on the NKN. A lot of it is being also used by MSMEs. For example, we have opened Paramuthkarsh, which is housed at our Bangalore Centre, for use by start-ups and MSMEs. The kind of applications that run here include drug discovery, bioinformatics, we have protein folding. We have molecular modeling. You have weather prediction. So I mean almost all of these number-crunching problems, oil exploration, finite element modeling, computational fluid dynamics problems, all such problems are being run across these clusters by researchers.
We also have a lot of expertise in various domains which we have developed in-house over a period of years, where we are hand-holding a lot of these agencies; a lot of government agencies are working with us on this.
Thanks so much, Vivek. Lots of threads to pull on there, but for a moment, let me go to Nitin. Nitin, you work with some of India’s largest enterprises, some of our national champions. So when these enterprises commit to AI, which you must have seen increasingly so, what are some of the actual barriers that prevent them from moving from pilot projects to some of those production-scale deployments that everyone envisions, especially at events like this?
Thank you. First of all, thank you for inviting me. It’s a privilege to be here and talking in front of such an esteemed crowd. So basically, I think I’ll kind of break this into two, three pieces. One, be it Indian enterprises or be it global enterprises, I think everybody is grappling with the same sort of problem statement. And everybody is trying to find those use cases which are very, very pertinent for their own enterprises. And some certain enterprises are into manufacturing domain, some are there in the retail side of it. And so the typical use cases that they have could be around smart manufacturing, smart retail, or specific generic use cases where they want to do a lot of document search.
They have to have fine-tuned search on the policies, the T&Cs and things of that sort. So essentially, the way I see it, it is twofold. One, they are looking at the speed, and everybody is trying to figure out the best ROI. So I think the biggest gap today is what to use, whether to use it on-prem or whether to go on cloud, use open APIs which are available to them. Then once those use cases are ready, as you said, from pilot to production, what is the final cost of that deployment? And then the third angle that comes in is whether to kind of centralize all of this or to take it to the edge.
So there is no single answer. And when you think of all the ecosystem providers that are there today, be it from silicon to ISVs to system integrators, everybody has a pocket of expertise in their own sense. But today there is no single formula. And the entire AI journey is changing so rapidly, models are being dropped at the speed of light, and then everything from silicon to OS to everything else, the whole ecosystem, is changing so fast that even the enterprises are trying to figure out what is the best deployment model for them, what is the best ROI that they can get out of it. But in the midst of all of this, in pockets, a lot of these enterprises are trying to see what are the specific use cases that they can bring to the fore that can bring some incremental benefit to whatever operations that they are running in their organization.
Now I can give multiple examples of it. For example, in a manufacturing domain, there could be surveillance kind of use cases, multi-modal use cases, there could be use cases around how to look at inventory, and then how to look at a complete digital twin, dark factory kind of a scenario. In the case of retail it is all about, say, preventing thefts, the pilferage that happens, or doing customer analytics. And then, as I said, a lot of document search kind of examples are going on. But in most of those cases, the whole decision-making is between edge versus on-prem, cloud versus sovereign data centers, what kind of models they should use, and how to find out the right ROI, which is where we feel frugal AI is what we kind of propose to the industry.
And I’ll talk more about that maybe later. But what is the best deployment model that can really help an enterprise scale at a level, at a cost point, which is really making sense to them?
Let me ask you a very quick follow-on question on that. Do you see that Indian enterprises are increasingly sophisticated in making these choices? And, of course, Intel works globally, right? So how does it compare in terms of its maturity, its sophistication, compared to the other markets where Intel also operates?
Clearly, in terms of use cases, I think we have the edge. But again, the veracity of data, it’s a big problem. Second, I would say that these enterprises, when you think of it, a year back almost everybody wanted to buy GPUs, and a lot of these enterprises have bought systems which are powerful enough to run all kinds of use cases, but still they are in that pilot phase, again because of that ROI factor. So things are maturing. Of course, enterprises are becoming smarter. From LLMs we are now looking at SLMs, so they are trying to kind of figure out the right silicon where they have to land their workloads on. So as things are emerging, I think things are getting better, but yeah, it’s still some time before you see those live deployments coming out.
For the benefit of the larger audience, sorry for interjecting, I think just to add to what he just said, we are seeing a lot of places where people are not being able to come out of the POCs. I think one of the major reasons that, at least personally I have seen, is that people are very happy with the POCs. They can train them on curated data sets. But once it actually goes and hits real-life situations where the data needs to be cleaned up, it’s not clean, you have no proper experience in actual deployments of the MLOps, you have done it in a canned manner, then suddenly the reality hits that, no, it’s not that simple.
And as you talked about the ROI, then you have to make those choices, whether I should have an on-prem thing, do I really need a GPU for my problem? Or can I work across multiple VMs on a simple, simple IT infrastructure? So those are the choices that hopefully in the next coming… years, intelligent choices will be made. We will start to see more MLOps engineers coming out and deploying these things at scale, because POCs are fine, but the real revenue will come only once you actually deploy it at scale.
Understood. Thank you so much for those perspectives. Unlike yourselves, you know, you’re both technologists. I’m a little bit in the policy space. I’m in the fringes of this. I work on tech policy, and I haven’t had a single conversation here with, you know, a foreign investor which hasn’t talked about sovereignty or dependency. So I’m going to bring my next question. I’ll bring you over to my area because I’d love to pick both your brains on this. So Vivek, let me talk to you about this first, right? So India talks about sovereignty, but CDAC still relies on global technology, right, whether that’s chips, systems, you know, software stacks. So realistically, what can India build domestically versus what will we always need to source globally, right?
and where should we focus our capability? So question to you as someone who’s really on the cutting edge.
Okay, so let me answer that in both a technically and politically correct way. See, when you talk of sovereignty, let’s see what does it really mean. Do you want to be completely independent in the entire vertical, right from silicon up to the application? Is that really possible, right? Let’s say I need to design a GPU. It’s a good aspirational goal. But do I have the wherewithal today to do the entire thing in-house? Probably not. I don’t have the IPs. We can start, definitely. We don’t even have today a fab that can give me 3 nanometer or below production capabilities and packaging capabilities. So I think a more pragmatic approach is to have maybe the silicon coming from outside.
Everything above that should be under my control. So you should be able to control all your… critical choke points. So that’s the model that India AI has taken. For example, they have created a farm of GPUs which are available, under their control, freely or at a reasonable cost, for the developers. The models which are being built on top of that are under your control. How those models are going to be orchestrated is under your control. The application that will use those models is under your control. So for me, that is the sovereignty where you are getting the maximum ROI. Should I really be competing against an H100 or a B200 from NVIDIA? Probably not in the short term.
But yes, as an aspirational goal, just to let you know, CDAC is designing its own GPGPU based on RISC-V. We will probably have something by the end of 2029-30. But then, till that time, we really need to have a lot of this entire stack under sovereign control, maybe using chips from outside, whether it is NVIDIA, whether it is Sapphire Rapids or Granite Rapids from Intel, or from AMD. But everything above that should be, all my critical choke points should be under my control.
So I have to compliment you. That was a very candid and pragmatic answer. So compliments to you. Nitin, can I reframe that question a little bit for you? So, you know, there’s, as you know, in the same vein, there’s a significant policy focus on data sovereignty and data localization as well, right? So when your enterprise customers are making AI infrastructure decisions, how much do these factor into those choices? And how do they compare with cost and performance considerations as well? So has this calculus shifted with some of the policy developments that we’ve seen over the past couple of years in AI?
Well, I would say not really. So it all depends on the industry and the market, on the industry that the enterprise is in. For a banking industry, for a healthcare industry, data sovereignty is very, very important. For a manufacturing industry or for any other, say, retail or any other industry, they are trying to build use cases on the cloud because that’s where they feel that they can build those use cases very, very quickly. They can simply call the APIs available there and then they see that the performance is much better and they get better accuracy. So it’s a mix of both. Now, then again, even in a manufacturing environment, as I was calling out, in an OT environment, a lot of these manufacturing firms would want their data to reside within their perimeter because it is so, so close to them.
Then again, when it is coming to the deployment side of it, they’re again looking at an edge deployment which stays within the perimeter. So it’s a mix of both. Now, finally, again to the point that I was making in the beginning, anything that has to scale, the price becomes a key driver. So it is about, okay, I have a model that I have kind of fine-tuned on the cloud; now when I have to take it to deployment, can I use something locally which is available today with me without making a lot of investment, and then can I scale it? Which is where, when Intel is talking about it, we basically focus on frugal AI. Today the Intel Core and Core Ultra CPUs have a GPU, an NPU and a CPU all combined in a single processor, which gives you enough capability to run maybe a 7 or 8 billion parameter model. So the typical requirements that you will have on the edge are very well suited to this CPU itself. And when it comes to the data center side, the Xeon 6 processors today are able to run a 20 billion parameter model very easily.
They can go up to 80 billion parameters depending on the specific use case. So do I need a GPU in every instance is the first question that we are trying to ask. And today everybody has a CPU in their environment. So can they reutilize that? And can they test it out? Can they look at performance levels? One example that I can kind of discuss with my customers is if you’re looking at, say, a prompt-based engine where you need to do a document search, typically a human eye can read about 10 prompts in a second if somebody is a fast reader. Now if a processor can give you 15 to 20 prompts, is this good enough as a performance for you, or do you want 200 prompts in that particular second?
Maybe 15, 20 prompts is good enough. So that’s where the cost versus performance comes in. That’s where one has to be very, very… calculated in terms of what is the end use that they are looking at and what would best suffice that particular usage. So again, it’s a mix of this localized data versus what can go on to the cloud. And once you look at the scale of it, again you have to look at the cost, because when you’re scaling it on the cloud the cost may be very, very different from what you can get on the on-prem side of it. I’m not a kind of proponent of on-prem versus cloud; for me, I think both of them bring their advantages. But what I’m trying to say is the customer has to really look at what is their end use and at what cost they want to
Understood. So we need to diversify our approaches and have a product-to-mission fit, which is absolutely critical. Think of it: in the past everybody used to think only mainframes can solve the problem. Today the applications are no more running on mainframes; they came on to CPU-level things, and then today we are looking at microservices. So everything has kind of evolved over time, and the same things are happening on the AI side of it today. Understood. So one quick question that’s on everyone’s mind, and to both of you. From each of your respective perspectives, whether that’s government R&D infrastructure or enterprise deployment, is talent and the capability gap still a critical choke point from each of your perspectives?
So Vivek, why don’t I start with you on this?
It is. It’s unfortunate, but yes it is. I mean we do find that we have a set of very bright engineers coming out, but most of them are trained in good theoretical understanding of what machine learning is. They are good at mathematics, basic understanding, but when it comes to actual deployments in the field, I think that’s where we are lacking. Maybe we need to have a serious look at our curriculum in the colleges to see how do I train large models, how do I deploy large models using MLOps, because today, as I said, most of these kids are working on curated data sets. They are working on some standard test cases and test and validation cases.
But when it comes to real life, life is not that rosy. I mean, you have data which is missing, you have data which is skewed, you have data which needs to be cleaned, I have real-time constraints, I have other security considerations. So those are not a part of the curriculum. So that’s where I think some capstone projects where you are able to handle peta-scale data need to be put in place. Theory is fine, but still there’s a lot to learn on the practical side.
I’ll give a different perspective, the way I look at India versus all other countries. In other countries it’s an aging population; for us it is a booming population at this point in time, with an average age of 13 to 25 or so, which is kind of exposed to AI. So maybe today we may see some sort of gap in terms of AI capabilities, but two years down the line or four years down the line I think that will be bridged very, very quickly. So we have that benefit of demography here in the short run. Of course, as an individual myself, I am also learning from the kids today how AI can be deployed. So gaps will be there, but I think this is a learning curve for everybody.
Thanks, Nitin. I want to get to just a couple more questions, and then we’ll try and be very quick. One I’ve been wanting to ask you, and one that I get asked often, is the energy and sustainability question. You know, supercomputing uses huge amounts of energy and can have societal impacts, right? CDAC’s supercomputing, or the, you know, large data centers that Intel perhaps, you know, works with. So how do we think about energy and sustainability implications, especially in the Indian context? So Vivek, perhaps, with you.
So from my perspective, I look at it to be addressed at two levels. One is something that we have been doing. Coming from a VLSI background, I can tell you that there are today standard techniques which are being used in all ASIC designs, which are power-aware designs. So you have multiple power islands, you have clock tree gating that happens, you switch off those cores which are not being used. So that’s from a design perspective. But when it comes to the platform design, there are smart choices which are being made. For example, if I talk about CDAC solutions, today we are using liquid cooling as well as water cooling in a ratio of almost 70 to 30.
And we are slowly moving the entire thing to a pure liquid-cooled setup. There are other advanced techniques which use only air cooling. So, ultimately, your PUE is what will actually determine it. Typically we see a PUE of around 1.2 or so, as compared to a conventional water-cooled thing which is about 1.4, 1.5. There are definitely green norms which are being proposed. I think the question here to be asked, and we need to do this kind of benchmarking across all the models, is what is the energy that I spend per token for training or for inferencing. I think that should be a critical benchmark. We need to seriously look at how I can optimize my models to be more power aware.
Can I have compressed models which take less energy? So, yes, energy is one of the critical factors, and especially hyperscalers would need huge amounts of power. I mean, there is this joke that we keep on saying that *** But along with the hyperscaler, you also need to have a small power plant which needs to be designed together with it.
Thank you. Nitin, to you.
I’ll have two, three points here. One, from a manufacturing point of view, Intel is utilizing the latest technologies like RibbonFET and PowerVia, which improve power efficiency by 15%. So these are the latest technologies available. Second, I would say that we are running our own data centers, which are running at a PUE of 1.06, which is the most efficient data center PUE that you could see. There’s a white paper on intel.com; I would appreciate if those interested can look at it. Third, I would say, again, power is a problem, or rather is the first and foremost ingredient to running those data centers. So one has to be, again, very cautious of what kind of models you are running and then where it is landing.
So if you’re able to be more judicious in terms of model selection, then of course we can save power in some ways.
So one final question. I realize that time’s up. One question, let’s put this in a quick sentence if you can. When we assess India’s progress over the next three, five years, what does success look like to each of you? So maybe a sentence.
Success for me would be AI being actually deployed in a lot of workflows and making the life much simpler and enjoyable for us.
Thank you.
For me it is like, we were about 150 in terms of data usage; today we are number one in terms of data usage. When it comes to increasing the generic intelligence of people, today we are using all the data that we are consuming for media and for entertainment. If we can consume the data for improving the general intelligence of the public, that will make a large-scale impact on society and India at large. So in two or three years, if a Sabziwala, a vegetable vendor, can figure out how they can kind of up-level their state, that would be the best way. And then all the Indic models and all the other use cases that are coming out should be able to support those.
And then there will be a mass -scale deployment of AI across the board.
Thanks so much, Nitin. And I see that time’s up. I wish we could pick your brain further. This has been a truly fascinating conversation. And thank you for being so candid with all your responses. Thank you. So please join me in thanking both Vivek and Nitin. Thank you. Thank you. I would now like to invite Sangeeta Reddy, Joint Managing Director, Apollo Hospitals, to give her remarks. Thank you.
“Anthropic-Infosys partnership announced “just yesterday” to serve Indian enterprises”
The Fireside Conversation notes that Anthropic and Infosys announced a partnership “just yesterday” to serve Indian enterprises, confirming the report’s statement [S73].
“Microsoft pledged $20 bn for India’s AI agenda”
The knowledge base records Microsoft’s commitment to train 20 million Indians by 2030, which is a skills-focused initiative rather than a $20 bn financial pledge, providing additional nuance to the claim [S71].
“CDAC has built the PARAM family of super‑computers providing AI compute infrastructure”
The source states that CDAC has built the Parham (PARAM) supercomputing series that provides AI compute infrastructure for government departments and national missions, confirming the report’s claim [S1].
“CDAC’s super‑computing infrastructure supports government departments and national missions”
The knowledge base adds that the PARAM series is specifically used by government departments and national missions, giving extra detail to the report’s description of CDAC’s mandate [S1].
The panel showed strong convergence on the core challenges of AI adoption in India: moving from pilots to production, managing cost and ROI, addressing talent gaps, ensuring energy-efficient infrastructure, and balancing sovereignty with global technology, with both speakers agreeing that success will be measured by widespread, societally beneficial AI deployment.
Within that broad consensus, clear differences remained on the immediacy of the talent gap, the path to technological sovereignty, and how universally data-sovereignty requirements should shape enterprise decisions. Coordinated policy, capacity-building and industry action is therefore possible, but will need to reconcile divergent views on skill development, domestic chip strategy, and the weight given to sovereignty versus practical performance considerations.
The discussion was shaped by a series of pivotal insights that moved it from high‑level policy announcements to the gritty realities of AI adoption in India. Amanraj’s framing question set the stage, while Nitin’s articulation of the ROI and deployment‑model dilemma highlighted the strategic uncertainty enterprises face. Vivek’s stark description of the POC‑to‑production gap and his candid take on talent and sustainability turned the conversation toward practical bottlenecks and systemic solutions. Their complementary perspectives on sovereignty, sector‑specific data concerns, and frugal AI created a nuanced roadmap that linked policy, infrastructure, talent, and energy considerations. Collectively, these comments redirected the panel from abstract optimism to a grounded, multi‑dimensional view of what success will require in the next three to five years.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.