Fireside Chat: Intel, Tata Electronics, CDAC & the Asia Group | India AI Impact Summit
Summary
The panel, moderated by Amanraj Khanna, examined how India’s recent policy announcements and large-scale investments are shaping an “AI stack” that links government vision with enterprise needs [7-14][15-18]. Khanna highlighted $20 billion pledged for India by Microsoft, $15 billion from Google and partnerships such as Anthropic-Infosys-Tata as evidence of growing momentum, but stressed that translating announcements into deployment at scale remains the key challenge [11-16][17-21].
CDAC’s executive director Vivek Kanneja explained that the National Supercomputing Mission has deployed about 48 petaflops of PARAM machines, to be expanded to roughly 100 petaflops across 60 sites by year-end, primarily serving researchers and some MSMEs via the Paramuthkarsh hub in Bangalore [45-53][54-60]. These systems support workloads such as drug discovery, weather prediction, oil exploration and computational fluid dynamics, and CDAC also provides hands-on assistance to government agencies and start-ups [57-62].
Intel’s Nitin Bajaj described how Indian enterprises are still stuck in pilot phases because they must decide between on-prem, cloud, or edge deployments, evaluate ROI, and choose appropriate models, leading to a “no single formula” situation [70-78][79-84]. He noted that use-case selection (e.g., smart manufacturing, retail analytics, document search) is often driven by cost-performance trade-offs, and Intel promotes “frugal AI” solutions that can run sizable models on CPUs to reduce reliance on GPUs [85-88][158-166]. Bajaj added that data-sovereignty requirements vary by sector: critical for banking and healthcare but less decisive for retail, so enterprises balance regulatory mandates against performance and cost considerations [147-156]. He also called for benchmarks such as energy-per-token to guide model optimisation and ensure greener AI at scale [200-203].
When asked about AI sovereignty, Kanneja argued that full independence from global silicon is unrealistic; instead India should secure critical choke points above the chip layer, using foreign GPUs while retaining control over models, software and applications, and he mentioned a planned RISC-V-based GPGPU for 2029-30 [115-124][125-138]. Both speakers agreed that talent gaps hinder large-scale deployment: CDAC sees bright graduates lacking practical MLOps experience, while Bajaj points to India’s youthful demographic as a potential accelerator once training catches up [173-181][184-185]. Energy efficiency was raised as a concern; CDAC employs liquid-cooling and power-aware design to achieve PUE around 1.2, whereas Intel reports data-center PUE of 1.06 and emphasizes efficient processor architectures for edge and cloud workloads [188-199][207-212]. Looking ahead, Kanneja envisions success as AI being embedded in everyday workflows, simplifying life, while Bajaj envisions mass-scale AI adoption that elevates even small vendors and improves public intelligence through widespread data use [223][225-231]. The discussion concluded that coordinated progress in infrastructure, pragmatic sovereignty, talent development and sustainable energy will determine whether India can move from ambitious pilots to pervasive, cost-effective AI deployments [21][95-102][188-205].
Keypoints
Major discussion points
– India’s AI ambition and the need to move from policy announcements to real-world scale.
The moderator frames the session around “massive policy announcements” and huge private-sector investments, then asks what it takes to “translate that vision into adoption and scale” [7-14][20-22].
– CDAC’s supercomputing stack (PARAM) – current capacity, user base and workload types.
CDAC operates about 48 petaflops of PARAM machines, soon to reach ≈ 100 petaflops with 60 installations, serving ~15,000 researchers, MSMEs and start-ups for applications such as drug discovery, weather prediction, CFD, etc. [45-53][54-61][62-63].
– Enterprise hurdles in scaling AI from pilots to production.
Bajaj highlights the “speed-ROI” dilemma, the choice among on-prem, cloud and edge deployments, data cleanliness, and rapidly evolving models; both speakers note that successful pilots often stall because of data-quality and MLOps gaps [70-84][95-102].
– AI sovereignty versus reliance on global technology.
Vivek Kanneja explains that full end-to-end independence (silicon to application) is not yet feasible; a pragmatic model uses imported GPUs while keeping the stack (models, orchestration, applications) under Indian control, and mentions a home-grown RISC-V GPGPU planned for 2029-30 [115-138].
– Talent and sustainability as cross-cutting constraints.
Both panelists stress a shortage of engineers who can move from theory to production-grade MLOps (curriculum gaps, need for capstone projects) [173-182]; they also discuss energy-efficiency measures: liquid cooling, low-PUE designs (≈ 1.2 for CDAC, 1.06 for Intel data centres) and the importance of power-aware model design [188-205][207-214].
Overall purpose / goal of the discussion
The panel was convened to provide a candid, dual-track examination of India’s AI ecosystem: one track covering the government-led R&D and supercomputing infrastructure, the other the practical needs and obstacles of large Indian enterprises. The aim was to identify where these tracks intersect, where gaps remain, and what concrete steps are needed for India’s AI vision to become a scalable reality.
Overall tone and its evolution
– Opening: Energetic and optimistic, celebrating recent policy wins and investment announcements. [5-11]
– Mid-session: Shifts to a pragmatic, problem-solving tone as panelists detail real-world constraints (capacity limits, pilot-to-production bottlenecks, sovereignty trade-offs). [45-84][115-138]
– Later: Becomes reflective and solution-oriented, acknowledging talent and energy challenges while offering concrete mitigation strategies (curriculum reform, frugal AI, efficient cooling). [173-182][188-214]
– Closing: Concludes on a hopeful, forward-looking note, summarising a vision of widespread AI deployment across workflows and society. [223-230]
The conversation thus moves from high-level enthusiasm to grounded analysis and ends with an aspirational yet realistic outlook.
Speakers
– Amanraj Khanna
– Role/Title: Partner and Managing Director for India at the Asia Group; Moderator of the panel.
– Area of Expertise: Technology policy, AI ecosystem strategy, bridging government vision with enterprise needs.
– Vivek Kanneja
– Role/Title: Executive Director, Centre for Development of Advanced Computing (CDAC).
– Area of Expertise: National supercomputing infrastructure, high-performance computing (HPC), AI compute platforms, research and training in advanced computing.
– Nitin Bajaj
– Role/Title: Director of Sales for Conglomerate Accounts, Intel India.
– Area of Expertise: Enterprise AI adoption, sales and technology leadership for large Indian enterprises, solution architecture spanning cloud, edge, and on-prem AI workloads.
Additional speakers:
– Sangeeta Reddy
– Role/Title: Joint Managing Director, Apollo Hospitals.
– Area of Expertise: Healthcare leadership and digital transformation (invited to give remarks, not a panel participant).
The session opened with moderator Amanraj Khanna framing India’s AI agenda as a “stack” that must link government vision with the practical needs of enterprises. He noted that “the energy here is still so palpable, even after five days” [1-4] and highlighted “Pax Silica announced this morning” as a fresh policy signal [5-7]. Khanna then pointed to a series of high-profile commitments – Microsoft’s $20 billion pledge, Google’s $15 billion investment, and the Anthropic-Infosys-Tata partnership – as evidence of a “truly fascinating moment” for the country’s AI ambitions [10-14].
Khanna introduced the two panelists. To his immediate left was Vivek Kanneja, executive director of the Centre for Development of Advanced Computing (CDAC), which runs the PARAM supercomputing series for government missions and national research [27-31]. To his far left was Nitin Bajaj, director of sales for conglomerate accounts at Intel India, overseeing the company’s engagement with the nation’s largest enterprises on digital transformation and AI adoption [32-38].
CDAC’s role and compute foundation
Kanneja explained that CDAC, a scientific society under the Ministry of Electronics and Information Technology, is tasked with building supercomputing capacity through the National Supercomputing Mission. Since the late 1980s, CDAC has evolved from the PARAM 8000 to a family of machines that together deliver roughly 48 PFLOPS, a figure slated to rise to about 100 PFLOPS across 60 installations by year-end [45-53]. These resources are accessed via the National Knowledge Network (NKN) and are currently used by around 15,000 researchers [55-56]. Typical workloads include drug discovery, bio-informatics, protein folding, molecular modelling, weather forecasting, oil exploration, finite-element analysis and computational fluid dynamics [57-61].
Enterprise adoption challenges
Bajaj described how Indian corporations are still largely stuck in pilot projects. Organisations first grapple with identifying high-impact use-cases – smart manufacturing, smart retail, document search – and then face a “speed-ROI” dilemma that forces choices between on-prem, cloud or edge deployments, between open APIs and bespoke models, and over how to achieve cost-effective production-scale roll-out [70-84]. He argued that there is “no single formula” because the AI ecosystem, from silicon to operating systems, evolves at “lightning speed”, leaving enterprises uncertain about the optimal deployment model [81-84]. The lack of a clear ROI calculation, combined with rapidly changing model offerings, often leaves pilots in limbo [85-88].
Both panelists agreed that the transition from proof-of-concept (POC) to production is a major choke point. Kanneja noted that POCs succeed on curated data sets but falter when confronted with real-world data that is noisy, incomplete or un-labelled, and when organisations lack mature MLOps capabilities [95-102]. Bajaj echoed this, adding that the “speed-ROI” dilemma and the difficulty of choosing an appropriate deployment architecture further impede scaling [70-84]. This convergence underscores that data quality, operational expertise and cost-benefit analysis are the primary barriers to large-scale AI deployment [95-102][70-84].
AI sovereignty
When asked about AI sovereignty, Kanneja gave a nuanced answer. He argued that full independence from the global silicon supply chain is not feasible today because India does not possess the IP or a fab that can produce chips at 3 nm or below [115-124]. Instead, a pragmatic model is to import GPUs (from NVIDIA, Intel, AMD, etc.) while retaining control over the GPU farm, model development, orchestration and applications, thereby securing the critical “choke points” of the stack [127-133]. CDAC is also pursuing a home-grown RISC-V-based GPGPU, expected around 2029-30 [136-138].
Bajaj added that data-sovereignty requirements differ by sector. For banking and healthcare, localisation is essential, whereas manufacturing and retail often prefer cloud-based solutions for speed and performance [147-152]. In operational-technology settings firms may keep data on-premise or at the edge for security, but cost considerations dominate when scaling from a cloud-trained model to on-prem deployment [153-156]. He highlighted Intel’s “frugal AI” strategy, which leverages CPUs that integrate GPU, NPU and CPU cores to run 7-8-billion-parameter models at the edge, and Xeon processors that can handle up to 80-billion-parameter models in data centres, questioning whether a dedicated GPU is always necessary [158-166].
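The “frugal AI” argument ultimately reduces to a memory-and-precision calculation: whether a model’s weights fit on hardware the enterprise already owns. A back-of-envelope sizing, using an illustrative helper and quantisation levels not taken from the panel, might look like:

```python
def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Weights-only memory footprint in GB; ignores KV cache and activations."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# An 8-billion-parameter edge model at different precisions:
for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: {model_memory_gb(8, bpp):.0f} GB")
# fp16 needs ~16 GB, int8 ~8 GB, int4 ~4 GB; the quantised variants fit
# in typical client-CPU memory, which is the premise behind CPU-only inference.
```

On this arithmetic, the 7-8-billion-parameter edge models Bajaj mentions become CPU-feasible largely through quantisation, while the same logic scaled to 80 billion parameters shows why such models stay on data-centre-class Xeons.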
Talent gap
Kanneja lamented that while Indian engineering graduates possess strong theoretical foundations, curricula rarely cover practical MLOps, data cleaning, real-time constraints or security considerations; he called for capstone projects that expose students to beta-scale data [173-182]. Bajaj, by contrast, pointed to India’s youthful demographic, much of it in the 13-25 age bracket, as a natural accelerator that will close the skill gap within a few years, noting his own learning from younger colleagues [184-185].
Energy and sustainability
Kanneja described CDAC’s power-aware design practices – power islands, clock-gating and a shift from water cooling to liquid cooling (approximately 70% liquid, 30% water) – achieving a Power Usage Effectiveness (PUE) of roughly 1.2, compared with 1.4 for conventional systems [190-194]. He advocated benchmarking “energy-per-token” for training and inference and exploring model compression to reduce power draw [200-204]. Bajaj complemented this by noting Intel’s own data-centre PUE of 1.06, the use of RibbonFET and PowerVia backside power-delivery technologies for a 15% efficiency gain, and the importance of judicious model selection to curb power demand [207-210][215-217].
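Both metrics discussed here are simple ratios, which makes them easy to sketch. PUE divides total facility power by the IT load it supports, and energy-per-token divides sustained power draw by token throughput; the function names and numbers below are illustrative, not from the panel:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT equipment power.
    1.0 is the theoretical ideal; cooling and power conversion push it higher."""
    return total_facility_kw / it_load_kw

def energy_per_token_joules(avg_power_w: float, tokens_per_sec: float) -> float:
    """Joules spent per generated token at steady state (watts = joules/second)."""
    return avg_power_w / tokens_per_sec

# A facility drawing 1,200 kW to power 1,000 kW of IT load has PUE 1.2,
# roughly CDAC's figure; 1,060 kW for the same load would match Intel's 1.06.
print(pue(1200, 1000))
# An accelerator sustaining 700 W at 1,000 tokens/s spends 0.7 J per token.
print(energy_per_token_joules(700, 1000))
```

A standardised benchmark of this shape, reported per model and per deployment, is essentially what Kanneja calls for at [200-204].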
Points of agreement and divergence
Both speakers agreed that (i) POC-to-production bottlenecks stem from data quality, MLOps maturity and ROI uncertainty; (ii) energy efficiency is critical, with both CDAC and Intel demonstrating low-PUE designs; (iii) success will be measured by mass-scale AI integration that improves everyday life and empowers even informal-sector actors; and (iv) a talent gap exists but can be mitigated through curriculum reform and the country’s demographic dividend [95-102][70-84][190-194][207-210][173-185].
Disagreements emerged around the root cause of the talent gap (curriculum reform versus demographic momentum) and the primary obstacle to scaling AI (data-quality/MLOps versus deployment-model/ROI decisions) [173-185][70-84]. Their visions of success also differed: Kanneja summed it up as “AI being actually deployed in a lot of workflows and making life much simpler” [223], while Bajaj envisaged a future where India becomes the world’s leading consumer of data, enabling even a “Sabziwala” (vegetable vendor) to up-skill through Indic language models and mass-scale AI deployment [225-231].
Open questions raised
The panel highlighted several open questions, including how to develop quantitative ROI models, how enterprises should choose between on-prem, cloud or edge deployments, how to standardise energy-efficiency benchmarks such as “energy-per-token”, how to achieve fuller data-sovereignty while still relying on imported silicon, and what scalable pathways can bridge the talent gap beyond curriculum changes [70-84][95-102][115-124][147-156][173-185][190-194][207-210].
In closing, both panelists expressed optimism. Kanneja envisaged a near-future where AI is woven into everyday workflows, simplifying life for citizens [223]; Bajaj projected that within two to three years India will lead in data usage, with AI-enabled tools uplifting even small vendors and supporting a broad ecosystem of Indic models [225-231]. The discussion underscored that realising India’s AI ambitions will require coordinated progress across infrastructure, pragmatic sovereignty, talent development and sustainable energy, turning policy announcements into tangible, cost-effective AI deployments at scale.
India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managing director for India at the Asia Group. It’s also my privilege to be the moderator here today. I have to say the energy here is still so palpable, even after five days of this. So it’s absolutely brilliant to be here with you all. We’ve had a truly fascinating moment in India’s AI ambitions. Some massive policy announcements. We just had Pax Silica announced this morning. Very exciting indeed. Significant infrastructure investments. Brad Smith, if you heard him yesterday: Microsoft announced $50 billion in the global south alone, after the $20 billion that has been announced for India. Google, as you know, $15 billion. And growing enterprise adoption.
Anthropic announced its partnership with Infosys, Tata, OpenAI. I’m sure you’re all watching this. But announcement is one thing. Deployment and then achieving scale is quite another. So that’s why this panel matters. Today’s conversation brings two critical perspectives to this fundamental question. What does it take to translate that vision into adoption and scale? One of my two distinguished speakers brings unparalleled insight into infrastructure being built through national R&D institutions. The other sees the reality of what India’s largest enterprises actually need when they deploy AI. So we are here to have an honest conversation about these two tracks. Where do they connect and perhaps where they don’t. So with that let me introduce my distinguished panelists. First, I have to my immediate left Mr.
Vivek Kanneja. Vivek is the executive director of the Center for Development of Advanced Computing, CDAC as it’s known. And CDAC has built the PARAM supercomputing series providing AI compute infrastructure for government departments and national missions. It conducts cutting-edge research into high-performance computing and cybersecurity, and also trains thousands of engineers annually in advanced computing and AI. Vivek, of course, has held multiple senior leadership positions within CDAC and has guided national initiatives in these critical areas. Welcome, Vivek. I also have to my far left, Nitin Bajaj. Nitin is Director of Sales for Conglomerate Accounts at Intel India. He leads Intel’s engagement with India’s largest enterprises on their digital transformation and AI adoption journeys. He has over 28 years of global experience in sales and technology leadership and orchestrates a broad partner ecosystem.
These include system integrators, ISVs, cloud providers to deliver Intel-based solutions spanning cloud, AI, HPC, 5G, edge, and end-user computing. So in summary, he sees firsthand what drives enterprise infrastructure decisions and what actually prevents companies from moving AI from pilot to true scale. So with those introductions, let’s get right into it. Vivek, why don’t I start with you? So here’s my first question Vivek. CDAC has built the PARAM supercomputing series and AI compute infrastructure. What compute capabilities does CDAC actually provide today? Who uses them? For what workloads? And what are the key constraints that you see operating within?
Okay, thanks. So as you know, CDAC is a scientific society under the Ministry of Electronics and IT. And one of the mandates that we have is to build supercomputing capacity in the country. This is the mandate which is given to us under the National Supercomputing Mission, where we have developed a series of supercomputers under the brand name PARAM. We started in the late 80s with the PARAM 8000. And now we have the PARAM series of supercomputers which are installed with our own software. About 48 petaflops of supercomputers of overall performance are installed in the country, connected over the National Knowledge Network, or the NKN as it is called.
This capacity is going to be augmented to about 100 petaflops by the end of this year with 60 installations. Most of these installations are today being used by researchers. About 15,000 researchers fire jobs across these machines on the NKN. A lot of it is also being used by MSMEs. For example, we have opened Paramuthkarsh, which is housed at our Bangalore Centre, for use by start-ups and MSMEs. The kind of applications that run here include drug discovery, bioinformatics, protein folding, molecular modeling, weather prediction. So almost all such number-crunching problems (oil exploration, finite element modeling, computational fluid dynamics) are being run across these clusters by researchers.
We also have a lot of expertise in various domains, which we have developed in-house over a period of years, where we are hand-holding a lot of these agencies; a lot of government agencies are working with us on this.
Thanks so much, Vivek. Lots of threads to pull on there, but for a moment, let me go to Nitin. Nitin, you work with some of India’s largest enterprises, some of our national champions. So when these enterprises commit to AI, which you must have seen increasingly, what are some of the actual barriers that prevent them from moving from pilot projects to some of those production-scale deployments that everyone envisions, especially at events like this?
Thank you. First of all, thank you for inviting me. It’s a privilege to be here and talking in front of such an esteemed crowd. So basically, I think I’ll kind of break this into two, three pieces. One, be it Indian enterprises or be it global enterprises, I think everybody is grappling with the same sort of problem statement. And everybody is trying to find those use cases which are very, very pertinent for their own enterprises. And some certain enterprises are into manufacturing domain, some are there in the retail side of it. And so the typical use cases that they have could be around smart manufacturing, smart retail, or specific generic use cases where they want to do a lot of document search.
They have to have fine-tuned search on the policies, the T&Cs and things of that sort. So essentially, the way you see it, I see it as twofold. One, they are looking at the speed, and everybody is trying to figure out the best ROI. So I think the biggest gap today is what to use, whether to use it on-prem or whether to go on cloud, use open APIs which are available to them. Then once those use cases are ready, as you said, from pilot to production, what is the final cost of that deployment? And then the third angle that comes in is whether to kind of centralize all of this or to take it to the edge.
So there is no single answer. And when you think of all the ecosystem providers that are there today, be it from silicon to ISVs to system integrators, everybody has a pocket of expertise in their own sense. But today there is no single formula. And the entire AI journey is changing so rapidly; models are being dropped at the speed of light, and the whole ecosystem, from silicon to OS to everything else, is changing so fast that even the enterprises are trying to figure out what is the best deployment model for them, what is the best ROI that they can get out of it. But in the midst of all of this, in pockets, a lot of these enterprises are trying to see what are the specific use cases that they can bring to the fore that can bring some incremental benefit to whatever operations they are running in their organization.
Now I can give multiple examples. For example, in a manufacturing domain, there could be surveillance kind of use cases, multi-modal use cases; there could be use cases around how to look at inventory, and then how to look at a complete digital-twin, dark-factory kind of a scenario. In the case of retail, it could be around preventing the thefts, the pilferage that happens, or doing customer analytics. And then, as I said, a lot of document-search kind of examples are going on. But in most of those cases, the whole decision-making is between edge versus on-prem, cloud versus sovereign data centers, what kind of models they should use, and how to find out the right ROI, which is where we feel frugal AI is what we kind of propose to the industry.
And I’ll talk more about that maybe later. But what is the best deployment model that can really help an enterprise scale at a level, at a cost point, which is really making sense to them?
Let me ask you a very quick follow-on question on that. Do you see that Indian enterprises are increasingly sophisticated in making these choices? And, of course, Intel works globally, right? So how does it compare in terms of its maturity, its sophistication, compared to the other markets where Intel also operates?
Clearly, in terms of use cases, I think we have the edge. But again, the veracity of data, it’s a big problem. Second, I would say that these enterprises, when you think of it, a year back almost everybody wanted to buy GPUs, and a lot of these enterprises have bought systems which are powerful enough to run all kinds of use cases. But still they are in that pilot phase, again because of that ROI factor. So things are maturing. Of course, enterprises are becoming smarter; from LLMs we are now looking at SLMs. So they are trying to kind of figure out the right silicon where they have to land their workloads on. So as things are emerging, I think things are getting better, but yeah, it’s still some time before you see those live deployments coming out.
For the benefit of the larger audience, sorry for interjecting. Just to add to what he just said: we are seeing a lot of places where people are not being able to come out of the POCs. I think one of the major reasons, at least personally I have seen, is that people are very happy with the POCs. They can train them on curated data sets. But once it actually goes and hits real-life situations where the data needs to be cleaned up, it’s not clean, you have no proper experience in actual deployments of the MLOps, you have done it in a canned manner, then suddenly the reality hits that, no, it’s not that simple.
And as you talked about the ROI, then you have to make those choices: whether I should have an on-prem thing, do I really need a GPU for my problem? Or can I work across multiple VMs on a simple IT infrastructure? So those are the choices; hopefully in the coming years, intelligent choices will be made. We will start to see more MLOps engineers coming out and deploying these things at scale, because POCs are fine, but the real revenue will come only once you actually deploy it at scale.
Understood. Thank you so much for those perspectives. Unlike yourselves, you know, you’re both technologists; I’m a little bit in the policy space, on the fringes of this. I work on tech policy, and I haven’t had a single conversation here with, you know, a foreign investor which hasn’t talked about sovereignty or dependency. So I’m going to bring my next question; I’ll bring you over to my area, because I’d love to pick both your brains on this. So Vivek, let me talk to you about this first, right? So India talks about sovereignty, but CDAC still relies on global technology, right, whether that’s chips, systems, you know, software stacks. So realistically, what can India build domestically versus what will we always need to source globally, right?
and where should we focus our capability? So question to you as someone who’s really on the cutting edge.
Okay, so let me answer that in both a technically and politically correct way. See, when you talk of sovereignty, let’s see what it really means. Do you want to be completely independent in the entire vertical, right from silicon up to the application? Is that really possible, right? Let’s say I need to design a GPU. It’s a good aspirational goal. But do I have the wherewithal today to do the entire thing in-house? Probably not. I don’t have the IPs. We can start, definitely. We don’t even have today a fab that can give me 3-nanometer or below production capabilities and packaging capabilities. So I think a more pragmatic approach is to have maybe the silicon coming from outside.
Everything above that should be under my control. So you should be able to control all your critical choke points. So that’s the model that India AI has taken. For example, they have created a farm of GPUs which are available, freely or at a reasonable cost, to the developers. The models which are being built on top of that are under your control. How those models are going to be orchestrated is under your control. The application that will use those models is under your control. So for me, that is the sovereignty where you are getting the maximum ROI. Should I really be competing against an H100 or a B200 from NVIDIA? Probably not in the short term.
But yes, as an aspirational goal, just to let you know, CDAC is designing its own GPGPU based on RISC-V. We will probably have something by the end of 2029-30. But then, till that time, we really need to have a lot of this entire stack under sovereign control, maybe using chips from outside, whether it is NVIDIA, whether it is Sapphire Rapids or Granite Rapids from Intel, or from AMD. But everything above that, all my critical choke points, should be under my control.
So I have to compliment you. That was a very candid and pragmatic answer. So compliments to you. Nitin, can I reframe that question a little bit for you? So, you know, there’s, as you know, in the same vein, there’s a significant policy focus on data sovereignty and data localization as well, right? So when your enterprise customers are making AI infrastructure decisions, how much do these factor into those choices? And how do they compare with cost and performance considerations as well? So has this calculus shifted with some of the policy developments that we’ve seen over the past couple of years in AI?
Well, I would say not really. So it all depends on the industry and the market that the enterprise is in. For the banking industry, for the healthcare industry, data sovereignty is very, very important. For a manufacturing industry or for any other, say, retail or any other industry, they are trying to build use cases on the cloud, because that’s where they feel that they can build those use cases very, very quickly. They can simply call the APIs available there, and then they see that the performance is much better and they get better accuracy. So it’s a mix of both. Now, then comes again, even in a manufacturing environment, as I was calling out, in an OT environment, a lot of these manufacturing firms would want their data to reside within their perimeter because it is so, so close to them.
Then again, when it is coming to the deployment side of it, they’re again looking at an edge deployment, which stays within the perimeter. So it’s a mix of both. Now, finally, again to the point that I was making in the beginning: for anything that has to scale, the price becomes a key driver. So it is about, okay, I have a model that I have fine-tuned on the cloud; now when I have to take it to deployment, can I use something locally which is available today with me, without making a lot of investment, and then can I scale it? Which is where, when Intel is talking about it, we basically focus on frugal AI. Today the Intel Core and Core Ultra CPUs have a GPU, an NPU and a CPU all combined in a single processor, which gives you enough capability to run maybe a 7- or 8-billion-parameter model. So the typical requirements that you will have on the edge are very well suited to this CPU itself. And when it comes to the data center side, the Xeon 6 processors today are able to run a 20-billion-parameter model very easily.
They can go up to 80 billion parameters depending on the specific use case. So “do I need a GPU in every instance?” is the first question that we are trying to ask. And today everybody has a CPU in their environment. So can they reutilize that? And can they test it out? Can they look at performance levels? One example that I can discuss with my customers is: if you’re looking at, say, a prompt-based engine where you need to do a document search, typically a human eye can read about 10 prompts in a second if somebody is a fast reader. Now if a processor can give you 15 to 20 prompts, is this good enough as a performance for you, or do you want 200 prompts in that particular second?
Maybe 15 to 20 tokens is good enough. That's where the cost-versus-performance trade-off comes in, and where one has to be very calculated about what end use they are looking at and what would best suffice that particular usage. So again, it's a mix of this localized data versus what can go to the cloud. And once you look at the scale of it, you have to look at the cost, because when you are scaling on the cloud, the cost may be very different from what you can get on-prem. I'm not a proponent of on-prem versus cloud; I think both bring their advantages. What I'm trying to say is that the customer has to really look at what their end use is and at what cost they want to achieve it.
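The cost-versus-performance reasoning above can be put into a back-of-envelope sketch. All numbers below are hypothetical illustrations, not Intel benchmarks; the helper names are invented for this example.

```python
# Illustrative "good enough" check for an LLM document-search assistant:
# is a modest CPU generation rate sufficient when a human reads ~10 tokens/s?

def is_throughput_sufficient(tokens_per_sec: float, reading_speed: float = 10.0) -> bool:
    """True if generation comfortably keeps pace with a fast human reader."""
    return tokens_per_sec >= reading_speed

def cost_per_million_tokens(hourly_cost: float, tokens_per_sec: float) -> float:
    """Convert an hourly infrastructure cost into cost per million generated tokens."""
    tokens_per_hour = tokens_per_sec * 3600
    return hourly_cost / tokens_per_hour * 1_000_000

# Hypothetical comparison: an already-owned CPU server vs a rented GPU instance.
cpu = {"hourly_cost": 0.50, "tokens_per_sec": 18}   # e.g. a 7-8B model on a CPU
gpu = {"hourly_cost": 4.00, "tokens_per_sec": 200}  # same model on a GPU

for name, cfg in (("CPU", cpu), ("GPU", gpu)):
    ok = is_throughput_sufficient(cfg["tokens_per_sec"])
    cost = cost_per_million_tokens(cfg["hourly_cost"], cfg["tokens_per_sec"])
    print(f"{name}: keeps pace with reader={ok}, cost per million tokens=${cost:.2f}")
```

With these made-up figures, both options keep pace with a human reader, so the decision reduces to cost and to whether hardware already on hand can be reused, which is the speaker's point.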
Understood. So we need to diversify our approaches and have a product-to-mission fit, which is absolutely critical. Think of it this way: in the past, everybody used to think only mainframes could solve the problem; then applications moved off the mainframe onto CPU-level machines, and today we are looking at microservices. So everything has evolved over time, and the same thing is happening on the AI side today. Understood. So, one quick question that's on everyone's mind, to both of you. From each of your respective perspectives, whether that's government R&D infrastructure or enterprise deployment, is talent and the capability gap still a critical choke point?
So Vivek, why don’t I start with you on this?
It is. It's unfortunate, but yes, it is. I mean, we do find a set of very bright engineers coming out, but most of them are trained in a good theoretical understanding of what machine learning is. They are good at mathematics and the basics, but when it comes to actual deployments in the field, that's where we are lacking. Maybe we need to take a serious look at our college curricula: how do I train large models, how do I deploy large models using MLOps? Because today, as I said, most of these kids are working on curated data sets, standard test cases, and test-and-validation cases.
But when it comes to real life, life is not that rosy. I mean, you have data which is missing, data which is skewed, data which needs to be cleaned; you have real-time constraints and other security considerations. Those are not part of the curriculum. So I think some capstone projects, where students are able to handle peta-scale data, need to be put in place. Theory is fine, but there's still a lot to learn on the practical side.
I'll give a different perspective. The way I look at India versus other countries: in other countries it's an aging population, whereas for us it is a booming population at this point in time, with a large cohort of roughly 13-to-25-year-olds who are exposed to AI. So maybe today we see some sort of gap in AI capabilities, but two years down the line, or four years down the line, I think that gets bridged very quickly. So we have that benefit of demography here in the short run. Of course, as an individual I am also learning from the kids today how AI can be deployed. Gaps will be there, but I think this is a learning curve for everybody.
Thanks, Nitin. I want to get to just a couple more questions, and then we'll try to be very quick. One I've been wanting to ask, and one that I get asked often, is the energy and sustainability question. Supercomputing uses huge amounts of energy and can have societal impacts, whether it's CDAC's supercomputing or the large data centers that Intel works with. So how do we think about the energy and sustainability implications, especially in the Indian context? Vivek, perhaps starting with you.
So from my perspective, I look at this as something to be addressed at two levels. One is something that we have been doing already. Coming from a VLSI background, I can tell you that there are standard techniques today which are used in all ASIC designs, which are power-aware designs: you have multiple power islands, you have clock-tree gating, you switch off those cores which are not being used. So that's from a design perspective. But when it comes to platform design, there are smart choices being made. For example, if I talk about CDAC's solutions, today we are using liquid cooling as well as water cooling in a ratio of almost 70 to 30.
And we are slowly moving the entire thing to pure liquid cooling. There are other advanced techniques which use only air cooling. Ultimately, your PUE is what will actually determine the outcome. Typically we see a PUE of around 1.2 or so, as compared to a conventional setup, which is about 1.4 or 1.5. There are definitely green norms being proposed. I think the question to be asked here, and the benchmarking we need to do across all models, is: what is the energy I spend per token, for training or for inferencing? That should be a critical benchmark. We need to seriously look at how I can optimize my models to be more power-aware.
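The energy-per-token benchmark proposed above can be sketched with simple arithmetic. The figures and function names below are hypothetical placeholders for illustration, not measured CDAC or Intel values.

```python
# Minimal sketch of an energy-per-token metric, combined with PUE to
# translate IT-equipment energy into total facility energy.

def energy_per_token_wh(avg_power_watts: float, duration_s: float, tokens: int) -> float:
    """Watt-hours of IT energy consumed per generated (or trained) token."""
    energy_wh = avg_power_watts * duration_s / 3600  # W * s -> Wh
    return energy_wh / tokens

def facility_energy_wh(it_energy_wh: float, pue: float) -> float:
    """Scale IT energy by PUE (total facility power / IT power)."""
    return it_energy_wh * pue

# Hypothetical inference run: 400 W average draw for 60 s, producing 1200 tokens.
ept = energy_per_token_wh(400, 60, 1200)

# Same workload hosted in a PUE 1.2 liquid-cooled site vs a PUE 1.5 conventional one.
for pue in (1.2, 1.5):
    print(f"PUE {pue}: {facility_energy_wh(ept, pue):.6f} Wh/token")
```

The same per-token figure lets you compare model optimizations (compression, quantization) and site efficiency on one axis, which is the kind of cross-model benchmarking the speaker calls for.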
Can I have compressed models which take less energy? So yes, energy is one of the critical factors, and hyperscalers especially would need a huge amount of power. I mean, there is this joke that we keep telling, that *** But along with the hyperscaler, you also need a small power plant that is designed together with it.
Thank you. Nitin, to you.
I'll make two or three points here. One, from a manufacturing point of view, Intel is utilizing its latest technologies, RibbonFET and PowerVia, which improve power efficiency by 15%. So these are the latest technologies available. Second, I would say that we are running our own data centers at a PUE of 1.06, which is the most efficient data-center PUE that you could see. There's a white paper on intel.com; I would encourage those interested to look at it. Third, I would say, again, power is a problem, or rather the first and foremost ingredient to running those data centers. So one has to be very cautious about what kind of models you are running and where they land.
So if you're more judicious in your selection, then of course we can save power in some ways.
So, one final question; I realize we're out of time. Let's keep this to a quick sentence if you can: when we assess India's progress over the next three to five years, what does success look like to each of you? Maybe a sentence.
Success for me would be AI being actually deployed in a lot of workflows and making the life much simpler and enjoyable for us.
Thank you.
For me, it's like this: we were once ranked around 150 in terms of data usage, and today we are number one. Now it comes to increasing the general intelligence of people. Today we are consuming all that data mainly for media and entertainment. If we can instead consume data for improving the general intelligence of the public, that will make a large-scale impact on society and on India at large. So in two or three years, if a sabziwala can figure out how to up-level their state, that would be the best outcome. And all the Indic models and the other use cases that are coming out should be able to support those.
And then there will be a mass -scale deployment of AI across the board.
Thanks so much, Nitin. And I see that time’s up. I wish we could pick your brain further. This has been a truly fascinating conversation. And thank you for being so candid with all your responses. Thank you. So please join me in thanking both Vivek and Nitin. Thank you. Thank you. I would now like to invite Sangeeta Reddy, Joint Managing Director, Apollo Hospitals, to give her remarks. Thank you.
“He highlighted “Pax Silica announced this morning” as a fresh policy signal.”
Pax Silica is referenced in the summit as a policy concept that amplifies India’s strategic strength, though the source does not specify an announcement that morning [S81].
“Vivek Kanneja is the executive director of the Centre for Development of Advanced Computing (CDAC).”
The session listing identifies Vivek Kanneja (spelled Kaneja) as Executive Director of CDAC (CDAT) [S3].
“Nitin Bajaj is director of sales for conglomerate accounts at Intel India.”
The knowledge base lists Nitin Bajaj as Director of Sales and Marketing at Intel, confirming his senior sales role at Intel India [S3].
The discussion shows strong convergence on four main fronts: (1) the difficulty of moving AI pilots to production due to data quality and ROI; (2) the centrality of cost‑effective, energy‑efficient infrastructure; (3) a shared vision that AI success means mass‑scale, everyday deployment; and (4) recognition of a talent gap that can be mitigated by curriculum reforms and demographic advantages. All speakers align on the need to turn policy and investment into tangible, sustainable AI outcomes.
High consensus – the speakers largely reinforce each other’s points, suggesting a unified understanding of the practical, economic, and sustainability challenges that must be addressed for India’s AI ambitions to materialise.
The panel largely concurs on the need for AI scaling, talent development, and sustainability, but they diverge on the perceived root causes and preferred solutions—Vivek emphasizes structural reforms (curriculum, MLOps, controlled sovereignty) while Nitin highlights market‑driven choices, demographic strengths, and hardware efficiencies.
Moderate disagreement: differences are more about emphasis and implementation pathways than fundamental contradictions, suggesting that coordinated policy and industry actions will need to reconcile education reform with leveraging demographic momentum and provide clear guidance on ROI, deployment models, and energy benchmarks.
The discussion pivoted around three core tensions: translating policy and infrastructure into enterprise scale, navigating sovereignty versus global dependence, and bridging talent and sustainability gaps. Nitin’s articulation of deployment‑model dilemmas and Vivek’s candid exposition of POC‑to‑production failures acted as catalysts, steering the conversation from high‑level announcements to gritty operational realities. Vivek’s pragmatic sovereignty answer and his emphasis on energy efficiency introduced new strategic dimensions, while Nitin’s frugal‑AI proposition and demographic optimism offered concrete pathways forward. Collectively, these comments deepened the analysis, reframed challenges as solvable trade‑offs, and shaped a narrative that India’s AI future hinges on smart infrastructure choices, adaptable talent pipelines, and cost‑effective, sustainable deployment models.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
